I hope you’ve all had a good summer and if you were in the UK, enjoyed the plethora of events on offer. With the Jubilee, Olympics and Paralympics there’s been plenty of activity for everyone to enjoy – and even a (brief) sojourn to the Euros for football fans! I’m pleased to say that I was one of the lucky ‘ones’ (though I think that should perhaps be lucky millions given the number of tickets sold) who did manage to get Olympic tickets – spending a day in the Olympic Park with a bit of diving to watch mid-afternoon. It was a fantastic experience – and had anyone surveyed me about it, I’d definitely have given it the best mark possible on whatever scale I was presented with. But, I almost certainly won’t go again.
Thinking about this, it occurred to me that it is typical of the problems we face researching the customer experience. (I mean the analysis of my own behaviour, of course, not LOCOG's failure to splash out on a little post-event satisfaction research.) How often do we find we've interviewed a customer who is highly satisfied, gives a rave review and then doesn't repurchase? It's easy to dismiss this kind of behaviour as 'noise' and make comments like 'there are always a few people like this who you can't account for', but over the longer term it causes us major problems in making the voice of the customer actually heard.
What do I mean by that? Well, here are a few ways in which satisfied customers who won’t/don’t repurchase risk reducing the impact of the work we do.
Firstly, they make it look like we measured the wrong thing. If you look at any of the papers proposing new metrics, one of their biggest selling points is that they correlate better with future behaviour than 'satisfaction' (which is always rated poorly in these papers). This fear also pervades our clients – how often do we get briefs asking for our view on the 'best' metric, and find ourselves having to explain why the link between that metric and repurchase isn't exactly perfect?
Secondly, they make the research findings look weak to ‘non-researchers’. At the end of the day, if our research is to be taken out into the businesses of the clients we work for, it has to look sensible to the end users. These are people who aren’t experienced researchers, probably weren’t overly consulted in the research process and may well be sceptical. A glaring mismatch between the level of satisfaction and a real world repurchase rate is bound to cause the end users to be concerned over what they’ve been given to work with and undermine their confidence in the research.
And finally, it causes chaos in drivers analysis! Virtually every customer experience research brief I've seen has asked for the research to identify the key drivers of satisfaction/the relationship. As an industry we tend to look at this through statistical analysis linking the detailed aspects of service to the overall behavioural measures – whether that is correlation or some form of regression. Yet if we're honest, how often does drivers analysis really give you a clear answer? Usually you can tell the 'related' from the 'unrelated', but the majority of the aspects you measure will appear to have the same level of influence (0.35 really isn't any different from 0.4). And the analysis isn't perfect anyway! Measures like R² make it very obvious that our driver models don't explain as much of the relationship as we'd like – a situation made much worse by those satisfied customers who never buy again.
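To make the point concrete, here is a minimal sketch of the simplest form of drivers analysis: correlating each service attribute's ratings with the overall satisfaction score. The respondent data and attribute names are entirely invented for illustration – the point is how similar the coefficients tend to look, and how little variance any one attribute 'explains'.

```python
# Minimal drivers-analysis sketch: Pearson correlation of each service
# attribute against overall satisfaction. All data here is hypothetical.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length rating lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical ratings from ten respondents on a 1-5 scale.
overall = [4, 5, 3, 4, 2, 5, 4, 3, 5, 4]
attributes = {
    "staff_friendliness": [4, 5, 3, 4, 3, 5, 4, 3, 4, 4],
    "queue_time":         [3, 4, 4, 3, 2, 5, 3, 4, 5, 3],
}

for name, ratings in attributes.items():
    r = pearson(ratings, overall)
    # r squared is the share of variance in overall satisfaction this single
    # attribute accounts for - usually far less than we'd like.
    print(f"{name}: r = {r:.2f}, r^2 = {r * r:.2f}")
```

In practice you would run a proper multiple regression rather than one-at-a-time correlations, but even then the satisfied-but-never-repurchases group drags the fit down in exactly the way described above.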
Solving the conundrum
So how do we fix this issue? The lazy and cynical can easily point to the fact that this isn't a new issue. It's existed since market research was 'born' and no-one has ever found a solution. That's very true, and I doubt I can offer suggestions which will make the issue go away completely. But it doesn't mean we shouldn't try to approach customer experience research in a way that takes this into account. At the end of the day, everything we can do to improve the links between our research and customer behaviour will help us derive better insights, and help our clients make the right decisions.
So what can we do to improve the efficiency of our research and ensure the results we report have a strong link to actual behaviour? Here are a few of my ideas:
Recognise that the relationship isn’t linear
One of the biggest mistakes we make is to assume that a satisfied customer today will be one who repurchases tomorrow. Yes, satisfaction today will have an influence, but it is not the only influence. I discussed this a few months ago, but we need to understand what factors actually drive purchase – and the role previous experience plays in that. This might sound like a bit of a cop-out given what I’ve said above, but is actually the first step in understanding what we’re researching.
What do I mean? Well, at one level we need to include an understanding of how experience links to brand perception and the role of brand perception in the path to purchase. This enables us to frame our research and our conclusions in a way that much more realistically reflects the market.
The second level is taking into account the wider market factors and their role in the purchase decision. We need to know how far consumers simply buy the cheapest, nearest or most convenient option. Are they about to change life-stage and no longer follow the same purchase patterns as in the past? Was their purchase of your brand this time a 'one-off' driven by a need to spend wisely, or a luxury treat they won't repeat?
Adding in this context may even mean that we have to write off a particular group of highly satisfied customers – simply because the purchase was so specific or market factors so strong that they won’t buy the product or service again.
What’s important is that we as researchers segment our clients’ customers correctly and look at the context around their purchase decisions sufficiently. We need to understand the factors driving the purchase and give our clients that picture, along with the quality of experience.
Stop talking about ‘customers’
I suspect a few of you are looking puzzled at this point – surely if we’re looking at customer experience we should be researching customers? Yes and no. Yes – customers are the people we’re interested in, but no – ‘customers’ are too diverse to be lumped together under one banner most of the time.
If we really want to do good quality research that can be applied, we have to acknowledge that the customer base will contain a huge range of individuals with different needs and different motivations. In fact, we have to go beyond acknowledging this, and look at what we do to take this into account. The classic example I find of this is in B2B research. Almost every B2B satisfaction project has to cope with the issue that some customers have practical needs but may have little impact on the purchase decision, whereas others are more hands off – but more influential in the decision. Grouping these two types of customers together in one analysis will give very odd results.
Practically, we need to build in as much as we can about the specific customers we interview – what is their actual need, how do they interact with the company, what do they want from the relationship, what power do they have over the purchase decision. When we do our analysis we need to split customers into much smaller chunks – grouped by these factors, not by which P&L account they sit under, or how much they spend.
If we can analyse the data we have in this way, rather than using broad brushes, we’ll get a much clearer view of precisely what matters to the different groups of customers – and where the pain points are. Not only will this make our findings more obvious, but it’ll be much more actionable for our clients.
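The B2B example above can be sketched in a few lines: split respondents by their role in the purchase decision before averaging, rather than blending everyone into one score. The segment labels and scores below are hypothetical, purely to show how a blended mean can hide a pain point.

```python
# Sketch of segment-level analysis: group respondents by their role/need
# before computing satisfaction averages. All data here is hypothetical.
from collections import defaultdict

# Each respondent: (segment based on need and purchase influence, score 1-5).
respondents = [
    ("hands-on user",  4), ("hands-on user",  5), ("hands-on user",  4),
    ("decision maker", 2), ("decision maker", 3), ("decision maker", 2),
]

by_segment = defaultdict(list)
for segment, score in respondents:
    by_segment[segment].append(score)

# The blended average looks acceptable, but it masks real dissatisfaction
# among the (more influential) decision makers.
overall_mean = sum(score for _, score in respondents) / len(respondents)
print(f"all customers: {overall_mean:.2f}")
for segment, scores in by_segment.items():
    print(f"{segment}: {sum(scores) / len(scores):.2f}")
```

The same principle scales up: group by actual need, interaction style and decision-making power, not by P&L account or spend band.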
Design our questionnaires to get at what customers really want
There’s a historic tendency with customer experience research to use long questionnaires that try and cover every aspect of a relationship – down to the most specific statement that can be developed and asked. Respondents are asked to rate almost everything, and we find ourselves at debrief time looking at the lists of attributes trying to work out what lies beneath the poor ratings on some of these. Some light can often be shed by any verbatim comments we’ve been given – researchers finding themselves scouring these for a nugget about what should be done.
But we can be smarter than this. Do we really need all those statements? Can we ask a more general question about the broad area – and then get the customer to tell us in their own words why they feel that way, or what could be done better? I've tried this on a few projects and it's worked wonders – no longer do respondents spend 30 minutes answering 50 statements on a five-point scale, forcing us to report bland mean scores. Instead, they spend 20 minutes rating 10-15 statements, giving us rich comments on the 5-6 of these they'd like to see improved and telling us the reasons behind their ratings.
Analysing this data is much easier as you can see exactly what customers want – and why they feel the way they do – and the client can see much more precisely what they need to do. Additionally, if you frame it right, you can get to the exact reasons they will/won’t repurchase – giving you the understanding you need to make sense of the results.
Fundamentally, we need to deliver customer experience research that is efficient and effective. It’s not an easy win, but hopefully this month’s blog has given a few practical tips and things to think about that will help you in that quest. We won’t be able to deliver success overnight, but little improvements here and there will make us all more successful, and in the process raise the profile of customer experience research.
Now, having said that the relationship between satisfaction and repurchase isn't linear, I'm off to look at flights to Rio in four years' time…
The views expressed in this blog posting are the author’s own, and do not necessarily reflect the views of TNS, nor of its associated companies.