Findings of the Annual Technology Survey
By Tim Macer
The accompanying infographic portrays the profound effect that technology has had on the day-to-day business of market research during the past decade. Although online research was well beyond its infancy in 2006, and had already replaced CATI as the dominant mode, CATI was still the second pillar of fieldwork, with paper in third place. Together, these three methods accounted for 88% of quantitative fieldwork at that time.
Roll forward ten years, and we now need to include four methods to cover 88% of quant fieldwork. Web has increased its share, while CATI has diminished. But in a break with the past, paper has dropped out entirely, overtaken by CAPI and ‘mixed mode’.
Perhaps surprisingly, mobile research has not yet made it to the top tier, despite the attention it has attracted at events and in the research media. As a result, CAPI’s coming of age has slipped by almost unobserved. If current trends continue, CAPI will displace CATI as the second most used method very soon.
This quiet transformation has been driven by the advent of tablets and smartphones as consumer devices, which have brought down the price of equipping fieldworkers with CAPI-capable hardware. This, coupled with the growth of cheap, reliable data communications, has tipped the cost/benefit balance for face-to-face in favour of CAPI.
The demise of CATI has been a topic of speculation for at least the last ten years – yet it is only in the last four years that this annual survey has observed a sustained downward momentum, dropping in stages from 23% in 2011 to 13% in 2015.
The noise around mobile is not undeserved – this is a method that represents both an opportunity and a threat to market research. In this study, ‘mobile research’, i.e. surveys designed for mobile, has edged up from virtually nothing in 2009 to 5% of volume in 2015. Yet every year, this figure has been dwarfed by the number of people taking traditional online surveys on their smartphones, and the gap appears to be widening. Based on what people have told us in previous years of this study, many surveys are not optimised for mobile. Furthermore, it would be hard to make them all conducive to mobile delivery, given their length and other overriding design factors such as the use of stimulus materials, and – to take a less positive view – the sheer wordiness of many survey questions.
If you’re an ESOMAR member, you can read the full article in MyESOMAR in the digital copy of Research World. If you are not a member of ESOMAR, you can join and receive a free copy of Research World six times a year, or you can sign up for a subscription to the magazine in our publications store.
By Andrew Jeavons
There is no doubt that there has been a resurgence of late of academic theories of psychology being applied to market research. The most prominent example is the growing use in market research of Daniel Kahneman’s “System 1 and System 2” cognitive processing theory. He won a Nobel Prize for it, after all, although it had to be awarded in Economics, as there is no Nobel Prize for Psychology. As I am sure everyone knows, System 1 thinking is described as fast and intuitive, while System 2 covers more deliberate and logical thought processes. It is one of the most influential psychological theories ever produced.
Unfortunately, it comes at a time when academic psychology is having a slight crisis. A series of papers have pointed out that many psychological experiments published in journals can’t be reproduced. A paper in the journal “Nature” last year pointed out that over half of the psychological studies the authors tested failed attempts at reproduction. They tried to reproduce 98 results from 98 papers drawn from three journals. In all, there were 100 reproduction attempts, of which only 39 were successful. That leaves 61% of the attempts as failures to reproduce. Other commentators have suggested that the real rate of reproduction failure could exceed 80% – truly alarming.
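As a quick sanity check, the 61% figure follows directly from the counts reported above – a minimal sketch in Python:

```python
# Replication figures as reported: 100 reproduction attempts
# (covering 98 results from 98 papers), of which 39 succeeded.
attempts = 100
successes = 39

failures = attempts - successes
failure_rate = failures / attempts

print(f"{failures} failures out of {attempts} attempts = {failure_rate:.0%}")
# prints: 61 failures out of 100 attempts = 61%
```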
The replication of any scientific study is regarded as the hallmark of a sound experiment and theory. Yet there is a well-known bias against publishing papers that describe failures to replicate. Failures to replicate just aren’t very interesting and so, it is said, they are not published. The studies in the “Nature” article were all performed on the same type of population as the original work. But the truth is that a lot of psychological theories are based on results from undergraduate psychology students, hardly representative of the real world. One of my tutors at university had a theory about an aspect of reading that worked well on an undergraduate student population. He found out, to his chagrin, that when he performed the experiments to validate his theory on the general population, it failed abysmally.
Half of the name of the industry we work in is “research”. What are the rules of replication in market research? Does the industry embrace the idea of “failure to replicate”? I haven’t seen many studies (there are a few, though) challenging the belief in Net Promoter Score (NPS) as an excellent measure of customer satisfaction. In business, everyone wants to appear positive; it is a vital approach to selling services and products. Yet this understandable bias could be inhibiting important information from being discussed. No one wants to appear negative – everything is always great all the time, right?
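For context, the metric itself is simple to state: respondents answer a 0–10 “how likely are you to recommend us” question; those scoring 9–10 are promoters, 0–6 are detractors, and NPS is the percentage of promoters minus the percentage of detractors. A minimal sketch in Python (the ratings list is invented for illustration):

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 likelihood-to-recommend ratings.

    Promoters score 9-10, detractors 0-6; the score is the
    percentage of promoters minus the percentage of detractors,
    giving a value between -100 and +100.
    """
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / n

# 4 promoters, 3 passives (7-8), 3 detractors out of 10 respondents
print(net_promoter_score([10, 9, 9, 10, 8, 7, 8, 3, 5, 6]))  # prints 10.0
```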
Perhaps the ability to be anonymous is the key to publishing failures to replicate research techniques. No one can be seen being negative in public, but the results can still be published.
For market research there is another consideration: commercial interest. If you know that something doesn’t work, why tell your competitors? The converse does not hold, though – it benefits any company to show how successful it is, no matter what the technique. And so we have a bias against reporting negative results. If we are in a research industry, this needs to be addressed somehow.
It’s important for research on research to continue and thrive, and part of that is allowing failures to replicate market research techniques to be published. This is the difficult part, though: no one wants to be negative. It’s bad for sales, but arguably good for the industry as a whole. A very prominent philosopher of science, Karl Popper, published a book called “Conjectures and Refutations”. The market research industry needs to do more refuting to move forward.
Failure is as much a part of progressing any discipline as success. Tim Harford’s book “Adapt” is a great read on how failure is vital to success. Success depends on failure.
It’s a two way street.
Andrew Jeavons, Mass Cognition