The Rule of Replication

By Andrew Jeavons

There is no doubt that academic theories of psychology have seen a resurgence of late in market research. The most prominent example is the growing use of Daniel Kahneman's "System 1 and System 2" theory of cognitive processing. He won a Nobel Prize, after all, although it had to be awarded in Economics since there is no Nobel Prize for Psychology. As I am sure everyone knows, System 1 thinking is described as fast and intuitive, and System 2 as more deliberate and logical. It is one of the most influential psychological theories ever produced.

Unfortunately it comes at a time when academic psychology is having a slight crisis. A series of papers have pointed out that many psychological experiments published in journals can't be reproduced. A paper in the journal "Nature" last year, link here, reported that over half of the psychological studies the authors tested failed attempts at reproduction. They tried to reproduce results from 98 papers drawn from 3 journals; all in all there were 100 reproduction attempts, of which only 39 were successful. That leaves 61% of the attempts as failures to reproduce. Other commentators have suggested that the real rate of reproduction failure could exceed 80%, which is truly alarming.

The replication of any scientific study is regarded as the hallmark of a sound experiment and theory. Yet there is a well-known bias against publishing papers that describe failures to replicate. Failures to replicate just aren't very interesting, and so, it is said, they are not published. The studies in the "Nature" article were all performed on the same type of population as the original work. But the truth is that a lot of psychological theories are based on results from undergraduate psychology students, hardly representative of the real world. One of my tutors at university had a theory about an aspect of reading that worked well on an undergraduate student population. He found out, to his chagrin, that when he performed the experiments to validate his theory on the general population, it failed abysmally.

Fifty percent of the name of the industry we work in is "research". What are the rules of replication in market research? Does the industry embrace the idea of "failure to replicate"? I haven't seen many studies (there are a few, though) challenging the belief in Net Promoter Score (NPS) as an excellent measure of customer satisfaction. In business everyone wants to appear positive; it is a vital approach to selling services and products. Yet this understandable bias could be preventing important information from being discussed. No one wants to appear negative; everything is always great all the time, right?

Perhaps anonymity is the key to publishing failures to replicate research techniques. If no one can see you being negative in public, the results can still be published.

For market research there is another consideration: commercial interest. If you know that something doesn't work, why tell your competitors? The converse does not hold, though; it benefits any company to show how successful it is, no matter what the technique. And so we have the bias against reporting negative results. If we are a research industry, this needs to be addressed somehow.

It's important for research on research to continue and thrive, and part of that is allowing failures to replicate market research techniques to be published. This is the difficult part, though: no one wants to be negative. It's bad for sales, but arguably good for the industry as a whole. A very prominent philosopher of science, Karl Popper, published a book called "Conjectures and Refutations". The market research industry needs to do more refuting to move forward.

Failure is as much a part of progressing any discipline as success. Tim Harford's book "Adapt" is a great read on how failure is vital to success. Success depends on failure.

It's a two-way street.

Andrew Jeavons, Mass Cognition