Tom Ewing

Part 1 of this article talked about how you can improve your chances of predicting real behaviour by “hacking” the context in which you’re asking about it – taking the context apart and identifying things you can change. It then described how BrainJuicer set out to apply this thinking to pack testing, which tends to reward information-heavy packs in tests that then underperform in stores.

Hacks And Packs
To hack pack test participants’ contexts, we simply imposed a time limit on their choices – first in a hall test, then in an online one. The reasoning was simple. We knew that people only took a few seconds to choose in a real store, and we also knew that under time pressure, people made more emotional, impulsive decisions. It was exactly the same as in the research BrainJuicer’s Peter Harrison had done on games – changing contexts to put people in a “hot state”. This time we wanted the decisions to be more realistic.

And it turned out they were. Imposing the time limit changed the direction of results, meaning respondents preferred the visual, emotionally appealing competitor pack to our client’s information-rich pack (see Figure 2). This was an opposite result from the standard shelf test, but one that was more in line with our client’s real data. We got the same result offline and online – this test was successfully tapping people’s System 1 decision making. By hacking the psychological context of a survey with a time limit, we had made it more realistic.

Figure 2: “Brand A” is the emotionally appealing competitor pack, “Brand B” is the info-heavy client pack. Brand A wins in the context-hacked “System 1” test… and in the market too.

Since this first project we’ve done further validation work and run hundreds of System 1 Pack Tests for clients. We are learning more all the time about how packaging works emotionally – the need for packs to be immediately recognisable, for instance, but also congruent with their category. Rebrands and pack changes are a problematic area: they’re often seen as an easy way to refresh a brand, but from a System 1 perspective they’re extremely high risk (just ask Tropicana). Familiarity and rapid recognition are vital – ironically, smaller or more niche brands have more opportunity to experiment and more to gain, but seem less likely to do so.

Context Hacking For Brand Value
Changing the context of a survey brought pack testing closer to reality. What else could it do?

One area where System 1 thinking is absolutely vital is promotions. Promotions eat up an increasing share of marketing budgets – as much as $6 of every $10 spent on marketing goes on price cuts. But the value of such promotions is hard to predict. There is a distinct possibility that much of this spend is wasted: consumers are being offered deals that do not, in fact, maximise value for brands or appeal for customers. Brands are either giving too much away or pitching their promotions in a sub-optimal way.

How do we know this? Because of a basic misunderstanding of how value works. The assumption is that value for money is a largely rational concept, and that when people see promotions, they assess them based on how much money they might save.

This is not, however, the case. Value is an emotional attribute, even when dealing with price cuts and special offers. To illustrate this, imagine two sets of offers on the same staple, non-perishable product. One says “50% extra free”. The other says “50% off”.

The second one is, if you pause to work it out, much better value than the first. But ask people to choose and they will pick each option in roughly equal proportions. In one experiment, we offered people deals that were genuinely almost identical in value – “50% extra free” versus “33% off”. Three-quarters of people chose the first.
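The arithmetic behind these offers is easy to check. A minimal sketch, assuming a product with a base price of 1.00 per unit (the prices and function name here are illustrative, not from the original experiment):

```python
def price_per_unit(price_paid, units_received):
    """Effective cost of one unit under a given offer."""
    return price_paid / units_received

base       = price_per_unit(1.00, 1.0)   # no offer: 1.00 per unit
extra_free = price_per_unit(1.00, 1.5)   # "50% extra free": ~0.667 per unit
half_off   = price_per_unit(0.50, 1.0)   # "50% off": 0.50 per unit
third_off  = price_per_unit(0.67, 1.0)   # "33% off": 0.67 per unit

# "50% off" is genuinely the better deal than "50% extra free"...
assert half_off < extra_free
# ...while "33% off" and "50% extra free" are almost identical in value.
assert abs(third_off - extra_free) < 0.01
```

Despite the second assertion holding to within a penny, three-quarters of people preferred “50% extra free” – which is the point: the maths is trivial, but shoppers never do it.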

So how promotions are presented really matters. And as with packaging, it’s a question of System 1 making our decisions. Working out value takes people time and engages System 2 – and in a retail environment, people simply don’t do this. As Daniel Kahneman himself put it, “‘How do I feel about something?’ is a proxy for a much more difficult question: ‘What do I think about something?’”

This was the mistake made by retailer JC Penney in the US in 2012: a new CEO scrapped their complex promotional architecture of regular deals and offers, in favour of “everyday low prices”, which actually offered more savings overall. But the move was a disaster – sales plunged and the CEO ultimately lost his job. What he had misunderstood was the emotional nature of promotions – JC Penney customers liked the coupons and deals because they felt more like bargains than flat pricing did.

At BrainJuicer, we felt there was room for a way to test promotions that took this into account. We started with an experiment: we offered people a range of promotions and asked them to rate the value for money. Then we compared those ratings both to the actual value for money of each offer and to how people felt about the deal emotionally. Perceived value was far more closely related to how an offer felt than to how good it actually was – because value is a rapid, System 1-led judgement.

So just as with packaging, the way to test promotions is to shift the context of the decision making towards System 1. In our promotions experiment, we used the same hack as in the pack tests – a time limit on decision-making, to discourage considered choices. This meant participants responded to the most immediately and emotionally appealing offers.
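As a rough illustration of the mechanic – not BrainJuicer’s actual implementation – a timed choice task just records how long a respondent takes and flags responses that exceed the limit. The five-second limit, function names, and stubbed front end below are all assumptions for the sketch:

```python
import time

TIME_LIMIT = 5.0  # seconds -- illustrative; the real limit is a design choice


def timed_choice(present_options, read_choice):
    """Show the options, capture a choice, and flag slow (System 2) responses.

    `present_options` and `read_choice` stand in for whatever survey
    front end actually renders the offers and captures the click.
    """
    start = time.monotonic()
    present_options()
    choice = read_choice()
    elapsed = time.monotonic() - start
    return {"choice": choice,
            "elapsed": elapsed,
            "within_limit": elapsed <= TIME_LIMIT}


# Toy run with instant, stubbed front-end callbacks:
result = timed_choice(lambda: None, lambda: "Brand A")
```

Responses over the limit can then be excluded or down-weighted, keeping the analysis focused on fast, System 1 choices.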

Stopping The Race To The Bottom
What we’ve discovered using this approach has been fascinating. In a UK test of different toilet paper promotions, we found that claimed endorsement by a trusted source and hinting at scarcity – imposing a maximum number of purchases – both boosted volume sales, but that offering a free koala toy with the toilet paper proved even more popular (see Figure 3).

Figure 3: Tests of endorsement, scarcity, and free gifts on a well-known toilet paper brand – all of them under time-limit conditions.

Importantly, all of these offered value improvements to the retailer over more traditional price cuts. In an Indian toothpaste test, we found that participants viewed discounts with active suspicion – perhaps distrusting the quality of goods not sold at full price.

With pack testing there are certain basic guidelines it’s best for brands to follow – get rid of informational clutter, for instance. Promotions testing is a trickier field, because while every customer uses heuristics to make decisions, the exact heuristics used will vary by category and culture. There’s no one-size-fits-all solution, which makes promotional testing vital.

But the rewards are potentially enormous. Making customers just as happy with a lower depth of cut can save brands a great deal of money, and stop the promotional “race to the bottom”. One global brand we ran promotion tests with implemented one of our recommended promotions online, in a split test with a standard promotional strategy. The recommended promotion led to 18% higher sales for the same cost as the standard strategy. (For obvious reasons, we can’t reveal the specific brand or the specific promotion in this article.)
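Evaluating a split test like that one is straightforward. A sketch with made-up traffic and conversion counts (the 18% figure above is the client’s real result; none of these numbers are): relative lift is just the ratio of conversion rates, and a two-proportion z-test checks the difference isn’t noise.

```python
from math import sqrt


def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se


# Hypothetical counts: standard promotion vs. recommended promotion.
std_conv, std_n = 1000, 20000   # 5.0% conversion
rec_conv, rec_n = 1180, 20000   # 5.9% conversion -> 18% relative lift

lift = (rec_conv / rec_n) / (std_conv / std_n) - 1
z = two_proportion_z(rec_conv, rec_n, std_conv, std_n)

# At these (invented) sample sizes the lift clears the usual 5% threshold.
assert z > 1.96
```

The design choice worth noting is that the comparison holds cost constant – the win is the same spend delivering more sales, not a deeper discount buying them.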

Surveys In A System 1 World
In a “post-Kahneman world”, researchers have to adjust their thinking on two fronts. First, they have to make sure they’re accounting for fast, intuitive decision making in their predictive models. Second, they have to make sure their surveys aren’t forcing people into slower and more considered decisions.

The approach I’ve called “context hacking” is a way of solving this second problem – making tweaks to the way surveys are collected in order to put respondents in a frame of mind closer to their real decision-making one. Time limits are only one way to do this – newer research technologies, particularly mobile, let us understand context far more directly, allowing us to incorporate temperature, time of day and other contextual factors in our research.

As an industry, though, we tend to get over-excited about technology. The work I’ve talked about here shows that you can improve accuracy and get results with a meaningful impact on your clients’ bottom line just by making tiny tweaks within the existing frame of online survey research. As long, that is, as your underlying model of decision making is accurate. Without the right analytical frame, new data sources and collection methods are pointless. This is the real lesson of the post-Kahneman world: get the psychology right before you get the technology right.

Tom Ewing is Digital Culture Officer at BrainJuicer