By Andrew Jeavons
There is no doubt that academic theories of psychology have been enjoying a resurgence in market research of late. The most prominent example is the growing use of Daniel Kahneman's "System 1 and System 2" theory of cognitive processing. He won a Nobel Prize for it, after all, although it had to be awarded in Economics, as there is no Nobel Prize for Psychology. As I am sure everyone knows, System 1 thinking is described as fast and intuitive, while System 2 covers more deliberate and logical thought processes. It is one of the most influential psychological theories ever produced.
Unfortunately, it comes at a time when academic psychology is having something of a crisis. A series of papers has pointed out that many psychological experiments published in journals can't be reproduced. A paper in the journal "Nature" last year reported that more than half of the psychological studies the authors tested failed attempts at reproduction. They made 100 reproduction attempts, covering results drawn from 98 papers in three journals; only 39 succeeded. That leaves 61% of the attempts as failures to reproduce. Other commentators have suggested that the real rate of reproduction failure could exceed 80%, which is truly alarming.
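The 61% figure is simple arithmetic on the numbers given above; a quick sketch (using only the attempt and success counts as described, nothing else assumed):

```python
# Replication figures as cited: 100 reproduction attempts, 39 successful.
attempts = 100
successes = 39

failure_rate = (attempts - successes) / attempts
print(f"Failure to replicate: {failure_rate:.0%}")  # prints "Failure to replicate: 61%"
```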
The replication of any scientific study is regarded as the hallmark of a sound experiment and theory. Yet there is a well-known bias against publishing papers that describe failures to replicate. Failures to replicate just aren't very interesting, and so, it is said, they are not published. The studies in the "Nature" article were all performed on the same type of population as the original work. But the truth is that a lot of psychological theories are based on results from undergraduate psychology students, hardly representative of the real world. One of my tutors at university had a theory about an aspect of reading that worked well on an undergraduate student population. He found out, to his chagrin, that when he ran the experiments on the general population to validate his theory, it failed abysmally.
Half of the name of the industry we work in is "research". What are the rules of replication in market research? Does the industry embrace the idea of "failure to replicate"? I haven't seen many studies (there are a few, though) detracting from the belief in the Net Promoter Score (NPS) as an excellent measure of customer satisfaction. In business everyone wants to appear positive; it is a vital approach to selling services and products. Yet this understandable bias could be inhibiting important information from being discussed. No one wants to appear negative: everything is always great all the time, right?
Perhaps the ability to be anonymous is the key to publishing failures to replicate in research techniques. No one can see you being negative in public, but the results can be published.
For market research there is another consideration: commercial interest. If you know that something doesn't work, why tell your competitors? The converse is not true, though; it benefits any company to show how successful it is, no matter what the technique. And so we have a bias against reporting negative results. If we are in a research industry, this needs to be addressed somehow.
It's important for research on research to continue and thrive, and part of that is allowing failures to replicate market research techniques to be published. This is the difficult part, though: no one wants to be negative. It's bad for sales, but arguably good for the industry as a whole. The prominent philosopher of science Karl Popper published a book called "Conjectures and Refutations"; the market research industry needs to do more refuting to move forward.
Failure is as much a part of progressing any discipline as success. Tim Harford's book "Adapt" is a great read on how failure is vital to success. Success depends on failure.
It’s a two way street.
Andrew Jeavons, Mass Cognition
By Finn Raben
Firstly, a BIG thank you to Judith Passingham, CEO of Ipsos Interactive Services (Global), Ipsos, UK; Eric Meerkamper, President of RIWI, Canada; and, last but by no means least, Knut Asrud, CEO of Norstat, Norway, who joined me to try and define what the next generation of research agency will look like! My thanks too to all of you who were on the webinar, and for your questions, some of which have hopefully been captured below.
I found this a fascinating discussion, and in many ways it raised more questions than we had time to discuss. That said, I came away with four themes that I think are worth bearing in mind, as well as a fifth point that is consistent with the discussion we had a month ago on the next generation of researcher.
- Reports about the death of data collection are wildly exaggerated.
One of the more interesting themes we hear quite regularly these days is that data collection is "dead", the theory being that because everyone can produce information, data collection is no longer necessary.
I think this discussion proved that proposition pretty groundless. No matter what data you access, curate or analyse, it will still have had to be "collected" at some point, and in that collection process there are "good" ways of doing it and "not so good" ways of doing it. The key is being able to understand which methodology (and thus which data) is "fit for purpose" for the project at hand.
The more interesting point was the consensus that data collection is no longer a linear process, and that it now MUST be a device-agnostic process, particularly as we see technology advance at an ever-increasing rate.
- Rigour more important than ever
Following on from that point was the consensus view that "more" data doesn't automatically mean "better" data, and that "big" data doesn't mean "complete" data.
Indeed, in this era of data proliferation, "rigour" is actually more important than ever before: we need to be able to determine "provenance" as well as "quality", so that we use the right data to support our project, not just all the data.
Some time ago, ESOMAR started to communicate the concept of "smart data" as opposed to "big data", i.e. identifying the information that is pertinent to the question in hand, rather than using the entire data set. In this process, the researcher's skill in determining provenance, quality and relevance is second to none, and it will remain a key feature of the research agency of the future.
- Proprietary intelligence is essential
Given that skill in determining the quality, provenance and relevance of information, the panel also agreed that the research agency is best placed to compete on data knowledge and intelligence, but equally acknowledged that not all agencies do so.
The view was expressed that if research agencies are seen as mere service providers, our perceived value will decline further; we need to build some form of proprietary intelligence into our work to distinguish ourselves from commodity information providers.
This does NOT necessarily mean "black box" technology, as we have seen that a lack of transparency does not sit well with clients or users. Rather, it suggests that we need to include a greater element of opinion, or commentary, which reflects our breadth and depth of experience in assessing and applying insights from data.
Some years ago, prior to its acquisition by WPP, TNS had a series of corporate exhortations, one of which was (and I paraphrase) "Be Brave – express an opinion". I think this is truer now than ever before.
- The market WILL punish mediocrity.
The threat of commodity – and thus mediocrity – is lethal.
If you cannot discuss provenance and relevance, if you cannot express an opinion based on the evidence, and if you cannot substantiate either a question or an answer, then we become a commodity service, to be selected simply on price and subjected to continuing "cost-efficiency" pressures.
- We must shout louder about what we do…
As we discussed in our last webinar, we do not do enough to communicate the quality of our work, or the value of what we bring. Yes, we make mistakes – but so does every company.
To paraphrase the film "Rocky": it is not about how many times you get knocked down, but about how you get up each time.
There were lots of other themes that we touched upon, including D.I.Y., data science, the disintermediation of elements of the research process, and more. I can't cover them all here, but rest assured we will get to them in the coming sessions! In the meantime, I look forward to your thoughts on these themes.
Thanks again to everyone who participated in this discussion. I can't wait for the next one, and what it will put forward!
A reminder for your diary:
The Future of Market Research webinar series – The Client Vision
Finn Raben is Director General at ESOMAR.