I wouldn’t say I have the healthiest diet in the world – way too much chocolate, pizza, alcohol and other unhealthy but delightfully tasty stuff – but I do like my fruit. Apples, oranges, bananas, pears, grapes, I eat them all, and like to get my 5 A Day from fruit alone!
However, my fruit eating has recently hit a bit of a snag, and I fear market research might be partially to blame. Before you assume I just spend too much time doing research to eat fruit, that’s not the case (well it is, but I’ve learnt to snack and type simultaneously)! The biggest problem is that I seem unable to buy bananas that I can actually eat. My local supermarket sells bananas, in fact it has a huge shelf covered in them, but they’re green and inedible for at least a week! I’ve tried to plan ahead and buy next week’s bananas a week in advance, but too often they jump straight to brown and bruised!
Now I’m sure a huge part of this is a result of the supermarket’s approach to stock management – trying to reduce wastage from fruit going off before it’s sold. However, I suspect customer research has played a part as well. Typically, customers dislike buying food with a short shelf life – they like to know they will get a chance to eat it before it wraps itself in the kind of furry coat that made Alexander Fleming famous. Customers are more than happy to tell researchers this in retailer satisfaction surveys, and how they want fruit, veg and meat to be fresh. So how do supermarkets respond? Well, they sell you fruit that is indeed so fresh it’s not yet ripe, and that lasts at least a week in your fruit bowl before you can eat it. And I’m stuck having to buy green bananas.
Now I’m sure many of you are wondering whether the remainder of this blog will be me ranting about green bananas and how that’s ruined my satisfaction with my local supermarket. Never fear, that’s not this month’s topic! Instead, the whole green bananas saga got me wondering about how this supermarket is using its customer research and what questions it is asking of it. Following that line of thought led me to this month’s topic – the fundamental questions behind customer satisfaction/experience research.
The key questions we ask and answer
Organisations grow through their customers in a number of ways. Some expand their customer base – they aim to increase revenue and profit by ensuring more customers buy their products and services. Some try to increase the amount of money that they get from each customer – either through growing their share of wallet, or growing the customers’ spend (e.g. upselling). Others grow by simply reducing the costs of servicing customers – making more profit per customer as a result. Regardless of which of these approaches an organisation intends to use, or even if it is trying to do more than one, underpinning this will be its business/customer strategy.
That exact strategy will differ from organisation to organisation, but will (usually) aim to give the organisation a distinct position in the market. That position may vary from budget to premium and anywhere in between, but will (should) balance the brand image and strength, the cost of product/service delivery, and the customer’s general habits and behaviours in that market. In formulating and executing that strategy, organisations will at times need customer research – this is where we come in!
However, the customer research that we do will differ wildly according to their specific strategy, how they are implementing it, and the questions that come up as part of the process. In turn, this will lead to vastly different goals at the heart of the research project.
We already acknowledge this in much of what we do. There are huge differences between research briefs looking for relationship research versus those looking for transactional/real-time research, with customer journey research being a whole different ballgame altogether. And if you look at these three broad types of customer research, you can see that in reality they have three different aims – and the questions behind them therefore differ:
Relationship research – looks at the strength of the customer relationship, at what drives the relationship, and prioritises actions to improve that relationship. Major questions for this type of research are around how satisfied customers are, and what should be done to increase their satisfaction.
Transactional/real-time research – enables organisations to manage individual customers by managing their experiences; intervening when customers are unhappy, detecting common reasons for dissatisfaction, and ensuring there are no stores/teams/call centres that are underperforming. Key questions in this type of research tend to be around how well customers are being serviced, who the unhappy ones are, where the organisation is not performing, and what needs to be done better.
Customer journey research – in many ways, this is quite different. It looks at how people navigate their way through markets and find themselves as customers; or having become customers, how they interact with organisations on a day-to-day basis. Having established the different paths customers follow, it often then includes measurement of how well the organisation services them at each stage.
The fact that these areas have become separate types of customer research with favoured approaches is testament to our ability to see that not all research is the same.
The relationship research conundrum
Before we pat ourselves on the back for spotting these differences and handling them appropriately, I want to point out that we aren’t quite as good as we think. Within the area of relationship research, we seem to have something of a blind spot. In fact, it appears to be a double-layered blind spot.
Firstly, many customer satisfaction research briefs look the same at first glance – ‘we have customers, we want to know how satisfied they are with us, and we want to know what we need to do to ensure they become more satisfied.’ The truth, though, is that if you dig beneath the surface you quickly find some very different contexts – which have a major impact on the way the research is designed, executed and analysed.
Different organisations will prioritise different customer behaviours according to the needs of their strategy – these behaviours could be purchase, but could also be advocacy, or simple inertia. Similarly, the idea of a ‘good’ customer experience will differ according to the competitive context and the level of service the organisation intends to deliver. What this means is that different clients need the research to serve different purposes and to answer different questions. Even if two projects appear the same at the overall level, they may differ wildly at the more granular one.
Secondly, too much of the industry has become obsessed with continual improvement. What do I mean? Well, do a quick bit of Googling (in another tab of course) and you will find armies of ‘customer experience’ experts talking about how they help organisations deliver better customer experiences. Books have been written, published and devoured by a hungry mass, case studies abound talking about how customers have been made happier, and whole conferences are devoted to the topic. ‘Experts’ pride themselves on how much they have shifted scores up and wear their ‘successes’ as badges of honour.
To be brutally honest, I’m not sure I’d give many of these ‘experts’ the time of day. They have missed the point. Not all organisations want to deliver better service. Some want to just deliver better profit. Before we start identifying weaknesses in service and prioritising actions that cost millions to drive customer satisfaction upwards, we need to remember to go back to the organisation’s strategy, see what they are trying to achieve with their customers, and ask whether improving customer satisfaction fits that aim.
I hope by now you’re mostly accepting I have a point rather than shaking your head, snorting in derision and muttering words like ‘preposterous’, ‘ridiculous’ and maybe even ‘nincompoop’ – especially if you work in an open-plan office and are now getting funny looks from colleagues. If you are, please read the previous paragraphs again and then think about it.
I hope some of you are feeling a little smug right now – safe in the knowledge that you are aware of these issues and aren’t guilty of either. For those of you who aren’t feeling smug, this is why it matters (and let’s be honest, it’s not rocket science). Depending on the question you are answering…
…you do different analyses
…you need different data
As an example, if a client wants to know how good its service is and how to optimise its processes, you need to measure how satisfied its customers are, identify which elements are (and aren’t) important, and assess how well the processes supporting these elements work.
Alternatively, if a client’s key aim is to build loyalty, you need to know what drives loyalty, how loyal customers are, and how potentially disloyal they are. Answering the latter will require some idea of competitor threats and an understanding of the factors that underpin customer choice, such as category interest, brand reputation or attitudes to risk.
Or in another example, if the purpose of the research is to identify how to win market share, measuring the loyalty of competitors’ customers could be far more important – identifying which can be targeted for acquisition activities, and what will motivate them to change.
At the end of the day, this goes back to a single issue at the heart of everything we do in customer experience – the need to understand the fundamental question we are answering for our clients. Before we start getting excited about what method, what metrics, and what funky dashboard/reporting solution we’ll be wheeling out, we need to make sure we know exactly how the research is going to be used and how it fits our client’s strategy and approach. Once we have that, then we can start to think about what information we need and how we’ll measure it.
If as researchers we cannot identify that fundamental question, we need to stop and start again. If we don’t, we will simply fail to deliver, wasting our time and our client’s money, and damaging not only our personal reputations but those of the whole industry.
In reality there is no ‘one-size-fits-all’ approach. Not on metrics, not on method, and not on analysis. The first stage of any project has to be to sit down with clients and explore exactly what the research is supposed to help the client to do.
With that information we can be invaluable partners. Without it, we’re about as much use as a green banana.
The views expressed in this blog posting are the author’s own, and do not necessarily reflect the views of TNS, nor of its associated companies.