Distorted View: Four Reasons Why Your Customer Measurement Program May Be Failing You


I’ve previously written about the four biggest mistakes companies make when implementing a customer measurement program, and how companies can achieve much better customer insights by avoiding these common errors. And it’s critical that they do: our Verde Group research shows that from 2004 to 2014, the average company had 18.7 percent of its revenue at risk due to poor customer experiences.

Truth be told, there are more than four, and the next group is equally egregious. Here are four more ways companies are sabotaging their customer measurement initiatives.

Conducting ‘how much do you love us’ surveys

The customer satisfaction survey — it’s hard to think of another marketing research practice that is so widespread yet so damaging. And not just for one reason:

The numbers are skewed. There is a bias (particularly in North America) called Extreme Response Mode: presented with a question answered on a numerical scale, respondents tend to cluster at the extreme high or low ends of that scale. So if asked ‘from one to ten, how happy are you with [product/service]?’ many customers will respond with 8s and 9s, as opposed to 6s and 7s.

Because we’re obsessed with success, companies tend to combine the two top ratings and draw conclusions like, “80 percent of our customers are satisfied. We must be doing great.” What they don’t understand is that satisfaction is not a predictor of loyalty; it’s a precondition.
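To see how top-two-box reporting flatters the numbers, here is a small illustrative calculation. The response distribution is invented for the example; the point is that a skewed scale plus combined top ratings mechanically produces a big “satisfied” headline number that says nothing about loyalty:

```python
# Hypothetical distribution (percent of respondents) for
# "From one to ten, how happy are you with our product?"
# Extreme-response bias clusters answers at the high end of the scale.
responses = {10: 45, 9: 35, 8: 10, 7: 4, 6: 2, 5: 1, 4: 1, 3: 1, 2: 1, 1: 0}

# Combining the two top ratings yields the "we must be doing great" number.
top_two_box = responses[10] + responses[9]
print(f"'Satisfied' (top-two-box): {top_two_box}%")  # prints 80%

# The same 80% headline is consistent with very different loyalty outcomes:
# the score alone carries no information about repurchase behavior.
```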

No actionable insights. Worse, these distorted numbers don’t tell us much. On its own, a score provides no context as to why customers feel the way they do. It doesn’t tell you what they like about you, or more importantly, what’s making them unhappy.

Satisfaction doesn’t predict loyalty. Satisfied customers aren’t necessarily loyal — they may change their behaviors, switch their buying choices, or even stop buying from you altogether. According to a Bain & Company study, 60-80 percent of customers who describe themselves as satisfied do not go back to do more business with the company that initially satisfied them.

So what does this type of research deliver? Sadly, not much beyond a few internal pats on the back. Despite the widely held belief that these types of measurement instruments link to financial performance, it’s simply not true. And that’s why many are left explaining to their executive team why market share is shrinking and sales are tanking, even as customer satisfaction scores are through the roof.

Still think satisfaction surveys are the best way to take your customer’s temperature? Think again. Verde Group research determined that 60-80 percent of defecting customers categorize themselves as ‘satisfied’ on surveys conducted immediately before their departure. Don’t let satisfaction surveys leave you out in the cold.

Not asking the right questions about future customer behavior

While companies often ask their customers ‘will you buy from us again,’ they shouldn’t put a lot of stock in the positive answers they receive. Longstanding research has established the ‘intention-behavior discrepancy,’ which shows that positively stated intentions don’t accurately predict corresponding positive behavior.

However, the same isn’t true for negative answers. When customers state a negative intention, the likelihood of it matching future negative behavior is much higher. Also, our Verde Group research shows that problem frequency is directly tied to ‘intent not to purchase’ — customers who don’t experience problems are twice as loyal as those who do.

The challenge is that companies often don’t hear about these customer issues. Our research determined that over 67 percent of customers who have a problem won’t tell the company about it. So companies need to ask the right questions, in the right way, to ferret these problems out.

Then, of course, the negative responses must be investigated; organizations can learn so much more by drilling down into them. If customers say they won’t buy next year or won’t recommend you, companies need to dig deeper with follow-up questions, and preferably conversations, to understand what’s behind those negative sentiments.

Measuring unactionable items

Many survey questions that look good on paper can be dead ends — they don’t provide actionable insights. Here’s a classic example of a dead-end question: “how satisfied are you with the responsiveness of your sales representative?” If 70 percent of customers surveyed said they are not satisfied, what could you do with that information?

It’s almost impossible to define ‘responsiveness’ in this context, as it’s not really an observable behavior. Imagine going to your front-line reps and telling them they need to be ‘more responsive.’ Now instead, imagine telling them “when a customer waits more than three minutes, they don’t want to buy from us again.” That’s the power of asking more observable, behavioral types of questions that lead to genuine customer insights.

Here’s another example of an unactionable question: “How satisfied are you with product quality?” This query tells you very little. Perhaps the customer doesn’t like the packaging, or how the product operates, or even how it feels. Without deeper, observation-based questions, you may never know.

Then there’s the ‘double-barreled’ question. For example: “how satisfied are you with the speed and accuracy of our technical response center?” Questions that ask about multiple attributes in a single statement are also not actionable; without further probing, you’ll never really be sure which attribute the customer is rating.

Too much measuring, not enough analysis

Companies are committing more time and resources than ever to the collection of customer information. A common mistake many organizations make, though, is to send out mass surveys, then analyze the results as if they have a homogeneous customer set, with every answer having the same meaning and implications for each customer.

This ‘big bucket’ approach to analysis barely scratches the surface. Many other variables, such as gender and age, come into play.

Imagine you conduct a survey and your scores are down 30 percent month over month, yet nothing appears to have changed in your business. But by digging a little deeper, you may learn that you had an unusually large percentage of millennial women respond to your latest survey — a demographic that doesn’t embrace your product.
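The scenario above is a classic mix-shift effect, and it is easy to demonstrate with a toy analysis. All of the data below is invented, and pandas is assumed as the analysis tool; the point is that the aggregate score can fall sharply while every segment’s score is unchanged:

```python
import pandas as pd

# Invented survey results: each segment rates the product identically in
# both months, but the respondent mix shifts toward a segment that
# doesn't embrace the product (here, hypothetically, millennial women).
surveys = pd.DataFrame({
    "month":   ["Jan"] * 100 + ["Feb"] * 100,
    "segment": ["millennial_women"] * 20 + ["other"] * 80
             + ["millennial_women"] * 70 + ["other"] * 30,
    "score":   [4] * 20 + [8] * 80 + [4] * 70 + [8] * 30,
})

overall = surveys.groupby("month")["score"].mean()
by_segment = surveys.groupby(["month", "segment"])["score"].mean()

print(overall)     # Jan 7.2 vs Feb 5.2: the aggregate drops roughly 28%
print(by_segment)  # yet every segment's score is identical month to month
```

Only the segmented view reveals that nothing about the product experience changed; the respondent mix did.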

While more in-depth inspection and segmenting your survey results will help, the real insight comes when you go beyond a single survey and leverage multiple sources of customer feedback. Look at your call center data, your social media, and interview your sales reps. Engage in a real dialogue with customers — not just in focus groups, but in the stores.

Collecting, dashboarding, and analyzing these disparate data streams isn’t easy. But the results can offer unprecedented clarity around what’s truly happening in your business.

Painting a clear picture of your customer

And that’s the key takeaway here. You should consider your customer research initiative — even if it’s world-class — as a single, small view into what’s actually going on with your business. One study or survey is just not enough. You need to collect and integrate multiple sources of customer insight, then weave them together to create a greater understanding of your customers, your business, and what you need to prioritize and change.

The stakes are undoubtedly high. Verde Group research suggests that just a 5 percent increase in customer retention can account for a 20 percent increase in productivity — and a 50-100 percent increase in profit margins.

Paula Courtney is Chief Executive Officer of The Verde Group and a lecturer at The Wharton School.