By Katia Pallini
Panel discussions are one of the most popular formats at conferences; and although they always start out interesting, for some reason I tend to get lost along the way (you may have had similar experiences). Keeping the entertainment level high during those discussions is a true challenge.
This is why Best of ESOMAR – Belgium tried out a new format during its event last Thursday, December 15. The concept? You could label it ‘the court case for research statements’. The audience was divided into two camps acting as the lawyers: for each statement, one side had to come up with arguments ‘pro’, while the other side represented the ‘defense’. After both sides had presented their arguments, the jury took the floor (represented here by three people from the client side: Kornel Muller from Unilever, Stan Knoops from IFF and Koen Van Parijs from Esploro), after which the judge (Finn Raben, ESOMAR Director General) summarized the discussion and declared who had won the round.
I must admit that, when the concept was first introduced, I secretly regretted being in the room. I had joined the session expecting to sit back, relax and listen to a thought-provoking discussion, so the thought of playing an active role in a debate scared me off a bit at first. But let’s zoom in on one of the statements debated at the event: “Are robots and AI taking over the research industry?”
Ray Poynter recently wrote in an article that “AI will replace the worst (approximately) 75% of market research”. In the same vein, a study by Oxford University puts the chance that market research analyst jobs will be taken over by robots in the next 20 years at 61%. So what arguments did our ‘lawyers’ put forward on this statement?
The lawyers supporting the statement asserted that the era in which AI plays an important role in our industry is coming: already today, the computing power and capabilities of AI exceed our wildest dreams, so the question is not if but when.
Those countering the statement argued that marketing research will always need a human touch and that human interpretation is irreplaceable. Among their arguments: computers have no emotions and thus will not be able to understand them, and they would fail to bring depth and creativity to the table.
The lawyers defending the statement countered both points. Researchers, they argued, are often misled by emotions, so AI will allow us to take an objective view on things. The supposed lack of creativity was countered with examples of robots that create artwork. The ‘pro’ camp also emphasized that self-learning capabilities will allow robots to grow and develop their skills.
So what was the jury’s verdict? Although they felt both sides presented interesting arguments, they were convinced that computers and robots will color our industry. To quote them: “It’s coming and it will keep on coming; we can’t stop it.” Furthermore, the jury emphasized that the benefit we get from it all will depend on how we take this forward and grasp this change. We do, however, need to be careful not to jump on the bandwagon just for the sake of automation; we will always need to focus on the objective we are aiming to achieve.
The judge’s wrap-up was clear: automation will come, and it will bring the industry both time and cost efficiency. It will be a business opportunity, yet it will not be as perfect as we might like to think. People will always be needed to bring value to the table.
I think this perfectly wraps up a very relevant and interesting discussion. AI will bring new opportunities to the table, it will shift how we spend our time, and it will definitely impact our role as researchers. In my opinion, it will become increasingly important to understand our strengths and unique qualities as humans, and to identify the gaps where these can make a difference when collaborating with AI. Understanding and adapting our strengths alongside AI’s growing capabilities will take research to the next level. Our jobs will change, not disappear. Whether in the shape of a physical robot or not, AI will become our partner in crime in defining the future of research.
Besides this statement, the subsequent discussions covered two more: “Will we still measure people’s opinions given the abundance of data in this world?” and “Will we, researchers, focus more on activating insights than on collecting them in the future?” As for the format, let me say this: although I was a bit scared and hesitant at first, it beats the panel discussion big time. I hope to enjoy more of these at upcoming conferences.
Katia Pallini is Content Impact Manager at InSites Consulting.
By Neda Eneva
How can we ensure our industry and profession keep up with the demands of our time and remain relevant, not only in generating insights for our clients but also in effectively impacting the lives of consumers? Last week Amsterdam hosted the latest edition of the ESOMAR Best of series, bringing Congress presentations to the local data, insights and research audience in the Netherlands. And while there was no official umbrella theme accompanying the event, there was a clear underlying message: if we wish to stay relevant, we need to be able to look beyond the conventional and be brave enough to experiment.
The event opened with an inspiring talk by Till Winkler of SKOPOS, Germany on the need for agile research. Are we truly in sync with the way our clients work? Drawing on his experience in UX research, Till showed interesting parallels between the four-step feedback loop UX teams work with and some of the current core requirements in working with clients, in UX research and beyond. He observed that in UX, and probably in most tech-oriented industries, the initiation of change and the push for fact-driven decision-making have shifted towards the UX and tech teams themselves, dynamically changing the way clients operate overall. It is the UX teams that drive the change, Till argued. And how do these drivers of change operate? In a very agile manner, for example by adopting Eric Ries’ popular four-step feedback loop, which places the generation of ideas and their execution before measuring and generating insights. Research takes too long and is too complex, UX experts argue, and so there appears to be a gap between the “build, test and build again” model they execute and the insights-first, weighty cycles the market research industry provides. Clients have an increased demand for speed and continuous support, as well as high demands for specific expertise, because they have fundamentally changed the way they work; is the market research industry adapting itself to their needs? Can we prove them wrong in saying that #mrx is just not worth the time and effort?

To answer that, we do not even need to reinvent the wheel, Till argued; instead we could learn from the way our clients operate. He suggested three key action points. First, utilize technology, and more specifically, achieve automation: make use of new solutions to make your processes faster, easier and even cheaper, which can go as far as changing the way you interact with clients to meet their needs. Second, take control and ‘stop the waterfall’: have the nerve to pivot and try something new, and do not be afraid to adjust along the way. Experiment with new tools, such as communities, to achieve responsive and adaptive testing and insight generation. And third, rethink; this, Till argued, is the key principle in achieving a more agile way of working. What about our own UX? Have WE ensured a fluid user experience and full, cross-platform integration, or have we forgotten that methodology, while crucial, is only one aspect of the journey of meeting clients’ needs? Agile thinking is not something we can or should switch on and off; it has to become a core principle in the way we work. And so Till concluded: maybe we do not need to prove the UX-ers wrong, but rather try to think differently and, most of all, be brave enough to experiment, try something new, ask for feedback and adapt going forward.
Going beyond working with clients and moving on to consumers, the need for adaptive action was also highlighted by the team at SKIM: Julia Goernandt, Nijat Mammadbayli and Patricia Domiguez, who presented their case study on millennials as key brand-development disruptors. While many still fail to see the relevance, the SKIM team made a clear case for the importance of this key consumer demographic. In the US, for example, there are more millennials than baby boomers, and while millennials are transforming the market, brands still fail to rethink how they communicate to this generation effectively. Understanding millennials, how they adopt new technologies and how they adapt consumer behavior to their own models, calls for a modified research method.

Julia, Nijat and Patricia presented a global case study showcasing a new research approach: conducting a survey on smartphones and, more importantly, asking millennials to respond in a way that comes naturally to them. Focusing on the telecom industry, the research addressed what millennials look for when choosing a network provider, the best way to talk to millennials, and ultimately what influences their likelihood to switch or stay with a provider. The team adapted the swiping technique using ‘unspoken’ technology, which takes into account both the emotional and the rational element in decision-making. The research was conducted across three locations (Atlanta, London and Rotterdam) and around the clock.

The case study paid off, with findings confirming key behavioral features of millennials as consumers: they wish to be connected at all times, strive to be themselves while having fun, value disruptors and are highly receptive to visuals. And what does that mean for brands? When addressing millennials, it is important to remember that they are open to changes in decision-making, value the basics and can be rather volatile. And while this global case study focused on one key consumer demographic, it showcased an even more important point: market research needs to maintain an adaptive approach when addressing different consumer groups, and thus needs to adjust its insight-generation methodology accordingly.
The third speaker of the day, Nikki Lavoie of MindSpark Research International, continued the topic of connecting better with our target audiences. Where do we look for answers, she asked. Typically, market researchers adjust their methodologies through new tech or more data, but Nikki offered an exciting and rather revolutionary alternative: why not draw inspiration from other disciplines? She proposed empathy, not only understanding but also sharing the feelings of other people, as a tool to gather insight we understand and value. Market researchers, Nikki argued, across both qual and quant, often don’t think twice about how they engage participants to get them into a focus group, for example. Are we motivating participants in the best way possible? And, even more importantly, do we realize how detached we are as researchers from their experiences, and how that potentially affects the insights and conclusions we generate from them?

Nikki challenged the maxim “if it ain’t broke, don’t fix it”, urging market researchers to stop resting on their laurels, go back to the foundations and shift the focus back to the recruitment process. She contested common motivation techniques, such as financial rewards, arguing that these consistently yield lower engagement rates and cause additional issues after the fact. Drawing on gamification theory, psychology and behavioral economics, Nikki underlined that understanding and connecting with human motivation is far more crucial than researchers often realize, and that un-empathetic techniques such as financial incentives come with many limitations. Trust, Nikki added, is another factor that often gets lost in financial incentives: the moment you pay someone to answer a personal question, the relationship changes into an economic one. To illustrate this, she presented a case study she conducted with a control group and a volunteer group, each of which received a different explanation of why they had been invited to participate. With the results showing a higher task completion rate among the volunteer group, Nikki underlined that our ability to motivate and engage with consumers is limited at present. Are we ready to understand and share their experiences, she asked, to reach a true understanding of both their shopping and their participation behavior?
The final presentation, by Anouar El Haji of Veylinx, also focused on drawing inspiration from other fields and going beyond traditional methodologies to measure product value and perceptions. His argument: we need to make the game as real as possible if we expect real insights, and it is time for #mrx to introduce ‘skin in the game’ for true measurement of value. The problem with surveys, he argued, is the hypothetical bias that results from their fictitious nature. His answer: “Ask people to put their money where their mouth is”, or, simply put, auctions. Veylinx has adopted a form of the ‘lovely but lonely’ Vickrey auction, measuring how a product’s value shifts when its positioning is changed. They set up a product valuation as a sealed auction in which each participant places one anonymous bid, with the winner paying the highest losing bid. Auctions, Anouar argued, distinguish much more clearly between the people who are actually willing to pay for a product and those who are not. A compelling approach, and one which certainly answers the call for unconventional takes on common industry challenges.
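For readers unfamiliar with the mechanism, here is a minimal sketch of the second-price logic behind a Vickrey auction. It illustrates the general technique described above, not Veylinx’s actual implementation, and the bidder names and amounts are invented for the example.

```python
# Illustrative sketch of a sealed-bid, second-price (Vickrey) auction.
# Bidders and amounts are invented; this is not Veylinx's implementation.

def vickrey_auction(bids):
    """Return (winner, price): the highest bidder wins but pays only
    the highest losing bid, so each participant's dominant strategy
    is to bid what the product is truly worth to them."""
    if len(bids) < 2:
        raise ValueError("a Vickrey auction needs at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # the highest losing bid sets the price
    return winner, price

# Each participant places one sealed, anonymous bid for the product:
bids = {"p1": 4.20, "p2": 3.10, "p3": 5.00, "p4": 2.50}
print(vickrey_auction(bids))  # ('p3', 4.2): p3 wins, pays the runner-up bid
```

Because neither overbidding nor underbidding can improve a bidder’s outcome, the bids can be read as honest estimates of willingness to pay, which is exactly the ‘skin in the game’ Anouar described.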
And so, from auctions to empathy, from millennials to the need for being more agile, this edition of ESOMAR’s Best Of certainly didn’t disappoint in offering inspiration to the call for change from conventional research workflows. Biggest learning for me? Do not be afraid to experiment and look beyond your field when trying to push the boundaries for better insights.
P.S. Did I mention that all the speakers are millennials themselves? How cool is that?!
Neda Eneva is Marketing and Communications Manager at ESOMAR.
By Kathy Frankovic, former director of surveys at CBS News and a member of ESOMAR’s Professional Standards Committee
Election polling is the most visible part of market, opinion and social research. It carries the heavy burden of getting things right, and its previous successes have brought high, perhaps unearned, expectations of its accuracy. This year, and the U.S. presidential election in particular, provides a good example of what happens when people forget the limitations of polls and that sampling and non-response may matter: ascribing too much precision to polling estimates in times of change can make pundits and journalists look as silly as the pollsters they berate.
We have all seen the discussion about what happened in the U.S. last week: was there a “late surge”? Were people misrepresenting their vote intention (the “shy Tories/Trump voters”)? Could pollsters have missed some important groups? Did everyone put too much confidence in poll results? We have also seen claims for “new” methods to replace polling: single-bullet solutions for a problem that may or may not exist.
The precision people wanted to see in polls this year made polling aggregators and pundits far more certain of what would happen than was realistic. Polls do not have absolute accuracy, and even the best pundits can misread them. This year Nate Silver (fivethirtyeight.com), lionized after previous elections for the accuracy of his algorithms, which use polls to produce probabilities of the outcome, gave Democrat Hillary Clinton a nearly 70% probability of winning (to his credit, that probability had dropped over the last week from a higher figure, but it was still a clear prediction). Other aggregators (like pollster.com) put the odds of a Clinton victory even higher, above 98%.
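To make the aggregation idea concrete, here is a deliberately simplified sketch of how a polling average plus an assumed error distribution turns into a win probability. The 3-point total error is an assumption chosen for illustration; this toy is in no way Silver’s actual model.

```python
import random

def win_probability(poll_margins, total_error_sd=3.0, n_sims=100_000):
    """Toy aggregator: average the polled margins (candidate A minus
    candidate B, in points), then simulate elections in which the true
    margin differs from that average by a normally distributed total
    error. Returns the share of simulations that candidate A wins."""
    avg = sum(poll_margins) / len(poll_margins)
    wins = sum(1 for _ in range(n_sims)
               if random.gauss(avg, total_error_sd) > 0)
    return wins / n_sims

# Five hypothetical national polls averaging a ~2-point lead:
print(win_probability([1, 3, 2, 4, 0]))  # ~0.75 with the assumed 3-point error
```

The point is how sensitive the headline probability is to the assumed error: with a 3-point error, a 2-point lead is roughly a 75% chance, while shrinking the assumed error to 1 point inflates the same lead to near-certainty, which is how different aggregators could publish 70% and 98% from broadly similar polls.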
To be clear: nearly all final U.S. 2016 pre-election polls showed a small national lead for Clinton, and she carried the national popular vote by about two percentage points over Republican nominee Donald Trump (now with a counted two-million-vote lead in the national totals). But the national vote count (and national polls) say little about what happens in individual states, and that is what matters. Had Clinton won the necessary Electoral College votes, we would be having a very different discussion about polling today, asking how pollsters could do even better rather than calling the pre-election poll results a “massive, historical, epic disaster”. While there are methodological issues with the 2016 election polls, the industry should not be “reeling”.
That over-reliance on numbers made this year’s post-election commentary more apocalyptic than necessary, as seen in the already-noted descriptions of a “reeling” profession and a “massive, historical and epic disaster”. That is simply not true: see Sean Trende of RealClearPolitics, another poll aggregator, and The New York Times’ Nate Cohn on this. Even Nate Silver has called the situation “a failure of conventional wisdom more than a failure of polling”.
The individual final pre-election polls ranged from a six-point Clinton lead to a three-point margin for Republican victor Donald Trump. Comparisons are complicated because some polls included the third-party candidates (Gary Johnson for the Libertarians and Jill Stein for the Green Party) and others did not; when included, the third parties received 5 to 9% combined (they have received about 5% of the actual vote). The polls also varied in their estimates of undecided voters, from as low as 1% to as high as 9%.
The Clinton national vote win mattered little, as Trump carried Michigan, Pennsylvania and Wisconsin (noting that state-level polling varied widely in quality, with the accuracy gap particularly noticeable in Wisconsin). Those three states have a total of 46 electoral votes, which put Trump over the top in the electoral vote count; just over 100,000 votes made the difference. [By contrast, Clinton leads California by more than 3,000,000 votes: an excess of votes cast in the wrong place.]
This structural peculiarity of the American political system is not especially popular. In 2013, the Gallup Poll and others found six in ten Americans, Republicans, Democrats and independents alike, supporting abolition of the Electoral College in favor of choosing Presidents by the national popular vote. [Of course, after this election, Republicans are likely to change their minds and decide the Electoral College is quite a good thing, just as they did in 2000, when Democrat Al Gore won the popular vote but lost the Presidency to George W. Bush.]
In an election this close, there are lots of explanations. Some have nothing to do with polls. Campaigns make decisions affecting small groups of voters who are hard to track in polls. Television advertising can matter (the Trump campaign poured money into Wisconsin, while the Clinton campaign took the state for granted and the candidate herself never visited). The Trump campaign also admitted it wanted to suppress turnout of key Clinton groups (college-educated women, blacks, young liberals) by reminding them of Bill Clinton’s past womanizing and earlier Hillary Clinton statements she later disavowed. Votes cast by young voters and black voters did decline this year, and overall Clinton received far fewer votes than Barack Obama in 2012.
But pre-election polls aren’t off the hook. National polls overestimated Clinton’s popular vote by about the same amount that they underestimated Barack Obama’s margin in 2012. Many state polls in critical states, especially in the Midwest, were off by more, and had Clinton clearly ahead in states that Trump carried.
An American Association for Public Opinion Research (AAPOR) panel, formed before the election, will review the election polls. Like the British Polling Council panel that followed the 2015 general election in the UK, its results will not be available for several months, but serious post-election investigations (beginning with the 1948 report that followed the election that gave us “Dewey Defeats Truman”) nearly always suggest worthwhile improvements in methodology, and pollsters often adopt those suggestions. Pollsters will also conduct internal reviews of their own, to see whether they can match the results even more closely. Any systematic error will be identified and, as happens all the time, learned from.
WHAT WE ALREADY KNOW
But we know some things now.
There was a late surge. The exit polls show movement towards Trump nationally and in critical states in the final days before the election (CNN provides an excellent set of tabulations). Across the country, Clinton led by two points among those who made up their minds before the last week of the campaign, but lost to Trump by five points among the 13% who made up their minds in the last week. More than 10% of those who decided in the last week chose neither Trump nor Clinton. Similarly, about 10% of voters in the three important Rust Belt states decided in the final days, and Trump led decisively among them: by 11 points in Michigan, 16 points in Pennsylvania and 27 points in Wisconsin.
Eleven days before the election, FBI Director James Comey told Congress he was reopening the investigation into Clinton’s private email server, putting an issue that had long bedeviled Clinton back before the public after it may have receded from most voters’ minds. One week later, he said there was nothing new. The polls missed the resulting shift: many state polls were completed days before the election, before the full impact of these events could be measured.
There may have been shy Trump voters: many polls saw little or no change in the last week, though the ABC News/Washington Post tracking poll showed movement first towards and then away from Trump, and its final poll matched the polling average. Could the final movement of voters towards Trump in the last week, as indicated in the exit polls, mean that some voters felt uncomfortable announcing a vote for Trump earlier? So far there is no direct evidence for it, and there are few differences in Trump support between telephone and online polls in general.
Did pollsters interview good samples? The Trump suppression efforts noted earlier may have turned some “likely voters” into no-shows, and other voters may not have been in the polls at all. This year there was not just a gender gap, but also a race gap, a marriage gap, an age gap, a religious gap, a rural-urban gap and, particularly among white voters, an education gap. Less-educated white voters overwhelmingly supported Donald Trump, so if they were missing from the polls, it was Trump voters who were missing.
Exit polls have a known education and age response bias (perhaps not a surprise when those polls require respondents to fill out paper questionnaires), and it is easy to speculate that at least some less-educated voters could have been absent from pre-election polls of all types.
Years ago, we learned that young people, minorities and urban residents (in other words, people who move frequently) were the most likely to have only mobile phones, not landlines. Polls with samples of mobile phone numbers were better at gauging support for Barack Obama. Mobile phones are now a routine part of telephone polling.
Single-digit response rates for telephone surveys mean more weighting and modeling, and that increases the possibility of error. Online polls have coverage issues, lack the scientific justification of probability sampling and require significant modeling, but this year they performed as well as, or even better than, phone polls. (This is quite different from the recent British examples: the 2015 general election and the referendum on whether or not the United Kingdom should exit the European Union.)
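To see why heavier weighting increases the possibility of error, here is a minimal sketch using invented toy numbers: a sample that over-represents college graduates is corrected with post-stratification weights, and the Kish design effect shows the resulting loss of effective sample size.

```python
# Minimal sketch of post-stratification weighting (invented toy numbers).
# A sample that over-represents college graduates gets corrective weights;
# the more uneven the weights, the smaller the effective sample becomes.

population_share = {"college": 0.40, "non_college": 0.60}  # assumed electorate
sample_share     = {"college": 0.55, "non_college": 0.45}  # skewed respondents

weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)  # {'college': ~0.73, 'non_college': ~1.33}

# Kish design effect: n * sum(w^2) / (sum(w))^2; equals 1 for equal weights.
n = 1000
w = [weights["college"]] * 550 + [weights["non_college"]] * 450
deff = n * sum(x * x for x in w) / sum(w) ** 2
print(deff)      # ~1.09
print(n / deff)  # ~917 effective respondents out of 1,000 interviews
```

With single-digit response rates the real corrections are far larger than this toy example, so the effective sample, and with it the nominal margin of error, degrades well beyond what the headline sample size suggests.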
The “Gold Standard”, probability telephone surveys, might better be called the “Silver Standard”: as we have seen, it can tarnish and needs to be frequently reviewed and polished. Achieving that standard requires significant time and energy to reach potential respondents, but the days and weeks this can take limit the news value of polls, and it would cost much more than news organizations today are willing, and able, to spend.
There probably is no replacement for the survey questionnaire, no silver bullet. Big data helps target groups, but because it too depends on data collection, even big data may not be able to measure the exact size of each group.
The problem was interpretation: the real error this year was committed by both pollsters and pundits, before and after the election. Maybe it is more accurate to call it over-interpretation.
Pollsters overpromise. They cite data showing how accurate they were in the past, when it may very well be that they were merely lucky. They fail to manage expectations and violate the truth of what they know: that polling (and all survey research) is subject to error. They give in to the temptation to report a two-, three- or four-point margin as a clear lead (and I am not blameless here).
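A short worked example of why such margins are not “clear leads”: the margin of error usually reported applies to a single candidate’s share, while the error on the gap between two candidates is roughly twice as large. The numbers below are hypothetical.

```python
import math

def lead_moe(p_a, p_b, n, z=1.96):
    """Approximate 95% margin of error on the lead (p_a - p_b) in one
    poll of n respondents. The two shares come from the same sample and
    move against each other, so the error on the gap is roughly double
    the familiar +/- error on a single share."""
    var = (p_a + p_b - (p_a - p_b) ** 2) / n
    return z * math.sqrt(var)

# Hypothetical 48%-45% race in a poll of 1,000 respondents:
print(round(100 * lead_moe(0.48, 0.45, 1000), 1))  # ~6.0 points
```

So a three-point lead in a single 1,000-person poll sits well inside the roughly six-point error on the gap: statistically, it is not a lead at all.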
Reporters then believe the pollsters, or decide on their own that polls are super-predictors. But a national poll says little about what will happen in Wisconsin. The election horserace is news, and that is not going to change; reporting, however, could be a lot better, and poll results could be expressed with less certainty. [There may have been some improvement over the years. In 1948, Life Magazine described Thomas Dewey as the “next president” in its pre-election issue. But Newsweek’s pre-printed, pre-distributed and then-recalled commemorative issue featuring “Madam President” is now on sale on eBay.]
We have to do a better job of talking about polls and training journalists. Just this year, ESOMAR joined with AAPOR and WAPOR (the World Association for Public Opinion Research) and worked with the Poynter Institute to produce an internationally focused online course for journalists; the course will be promoted especially in France, the Netherlands and Germany, given their upcoming elections.
Much about this election can be explained, but pollsters still have a lot to answer for. So do the rest of us, who forgot that polls are only estimates and can be wrong. We must make sure that those who conduct, interpret and comment on this most public face of research provide realistic estimates, and that no one expects a Rolls Royce for the price of a Ford.
ONLINE COURSE FOR JOURNALISTS: UNDERSTANDING AND INTERPRETING OPINION POLLS
AAPOR, ESOMAR and WAPOR have launched the first-ever international online course to help journalists improve media reporting about polls and survey results.
Aimed at journalists, media students, bloggers, voters and anyone who wants to know how and why polls are conducted, the course is hosted by Poynter, an online training source for journalists.
This course will help journalists understand and interpret opinion polls. It will enable them to assess poll quality and explain why polls covering the same election can produce different results and why the outcome of an election might deviate from the result ‘predicted’ by the polls.
Developed by an international expert team and funded by ESOMAR, WAPOR and AAPOR, the course is free of charge. Go to:
For more information contact:
Professional.email@example.com or firstname.lastname@example.org
Kathy Frankovic is a polling consultant and former director of surveys at CBS News and a member of ESOMAR’s Professional Standards Committee