By Anneke Quinn-de Jong
Hooray, we have achieved it! Research and consumer centricity have firmly established themselves as core to the innovation process. With the rise of a new and provocative innovation mind-set that works towards soft launches rather than big bang launches, consumer feedback has become an essential part of the pivoting process.
By Julie Aebersold
If you believe the general public regards market research as central to business decisions, mainstream media and politics, then you’ve got another think coming.
By Kathy Frankovic, former director of surveys at CBS News and a member of ESOMAR’s Professional Standards Committee
Election polling is the most visible part of market, opinion and social research. It carries a heavy burden to get things right, but its previous successes have also brought high, and perhaps unearned, expectations for its accuracy. This year, and the U.S. presidential election in particular, provides a good example of what happens when people forget the limitations of polls, that sampling and non-response may matter, and that ascribing too much precision to polling estimates in times of change can make pundits and journalists look as silly as the pollsters they berate.
We’ve all seen the discussion about what happened in the U.S. last week: was there a “late surge,” were people misrepresenting their vote intention (the “shy Tories/Trump voters”), could pollsters have missed some important groups, did everyone put too much confidence in poll results? We have also seen claims for “new” methods to replace polling – silver-bullet solutions for a problem that may or may not exist.
The precision people wanted to see in polls this year made polling aggregators and pundits far more sure of what would happen than was realistic. Polls do not have absolute accuracy, and even the best pundits can misread them. This year Nate Silver (fivethirtyeight.com), lionized after previous elections for the accuracy of his poll-based algorithms for estimating the probability of the outcome, gave Democrat Hillary Clinton a nearly 70% probability of winning (to his credit, that probability had dropped in the last week from a higher level, but it was still a clear prediction). Other aggregators (like pollster.com) put the odds of a Clinton victory even higher, above 98%.
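Much of the gap between a 70% and a 98% probability drawn from the same polls comes down to one modeling assumption: how much polling error is shared across states. The Monte Carlo sketch below is purely illustrative – the three states, the 3-point leads and the error sizes are hypothetical choices of mine, not any aggregator’s actual model – but it shows how a shared error component turns a “nearly impossible” sweep by the trailing candidate into a live possibility:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: three battleground states, each showing a 3-point lead
# for the poll leader in the final polling averages.
poll_leads = np.array([3.0, 3.0, 3.0])   # points
state_sd = 3.0                            # state-specific polling error (sd, points)

def sweep_prob(shared_sd, n_sims=200_000):
    """Probability that the trailing candidate carries all three states,
    given a polling-error component shared across states (e.g. a common
    non-response bias that every poll misses in the same direction)."""
    shared = rng.normal(0.0, shared_sd, size=(n_sims, 1))   # same draw for all states
    indep = rng.normal(0.0, state_sd, size=(n_sims, 3))     # state-by-state noise
    outcomes = poll_leads + shared + indep
    return (outcomes < 0).all(axis=1).mean()

print(sweep_prob(0.0))  # independent errors: a sweep looks near-impossible (~0.4%)
print(sweep_prob(3.0))  # shared 3-point error: a sweep becomes plausible (~6%)
```

Treating state errors as independent multiplies small probabilities together, which is how an aggregator can reach 98%-plus confidence; models that allow for correlated error – as Silver’s is generally understood to have done more than most – keep the upset scenario alive.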
To be clear: nearly all final U.S. 2016 pre-election polls showed a small national lead for Clinton. And she carried the national popular vote by about two percentage points over Republican nominee Donald Trump (now with a counted two million vote lead in the national vote totals). But the national vote count (and national polls) say little about what happens in individual states, and that’s what matters. Had Clinton won the necessary Electoral College votes, we would have been having a very different discussion about polling today than we are, asking how pollsters could have done better, rather than calling the pre-election poll results a “massive, historical, epic disaster.” While there are methodological issues with the 2016 election polls, the industry should not be “reeling.”
That over-reliance on numbers made this year’s post-election commentary even more apocalyptic than necessary, as seen in the already-noted descriptions of a “reeling” profession and a “massive, historical and epic disaster.” Neither description holds up. See Sean Trende of RealClearPolitics, another poll aggregator, and The New York Times’ Nate Cohn on this. Even Nate Silver has called the situation a “failure of conventional wisdom more than a failure of polling.”
The individual final pre-election polls ranged from a Clinton lead of six points to a three-point margin for Republican victor Donald Trump. Pre-election comparisons are complicated because some polls included the third-party candidates (Gary Johnson for the Libertarians and Jill Stein for the Green Party) and others did not. When included, the third parties received 5 to 9% combined (and have received about 5% of the actual vote). The polls also varied in their estimates of undecided voters, which ranged from as low as 1% to as high as 9%.
The Clinton national vote win mattered little, as Trump carried Michigan, Pennsylvania and Wisconsin (state-level polling varied widely in quality, and the accuracy gap was particularly noticeable in Wisconsin). Those three states have a total of 46 electoral votes, and put Trump over the top in the electoral vote count. Just over 100,000 votes made the difference. [By contrast, Clinton leads in California by more than 3,000,000 votes: an excess of votes cast in the wrong place.]
This structural peculiarity of the American political system is not especially popular. In 2013, the Gallup Poll and others found six in ten Americans, Republicans, Democrats and independents alike, supporting the abolition of the Electoral College and instead choosing Presidents by the national popular vote. [Of course, after this election, Republicans are likely to change their minds and think the Electoral College is quite a good thing, just as they did in 2000, when Democrat Al Gore won the popular vote, but lost the Presidency to George W. Bush.]
In an election this close, there are lots of explanations. Some have nothing to do with polls. Campaigns make decisions affecting small groups of voters who are hard to track in polls. Television advertising can matter (the Trump campaign poured money into Wisconsin, while the Clinton campaign took the state for granted and the candidate herself never visited). The Trump campaign also admitted it wanted to suppress turnout of key Clinton groups (college-educated women, blacks, young liberals) by reminding them of Bill Clinton’s past womanizing and earlier Hillary Clinton statements she later disavowed. Votes cast by young voters and black voters did decline this year, and overall Clinton received far fewer votes than Barack Obama in 2012.
But pre-election polls aren’t off the hook. National polls overestimated Clinton’s popular vote by about the same amount that they underestimated Barack Obama’s margin in 2012. Many state polls in critical states, especially in the Midwest, were off by more, and had Clinton clearly ahead in states that Trump carried.
An American Association for Public Opinion Research (AAPOR) panel, formed before the election, will review the election polls. Like the British Polling Council panel following the 2015 general election in the UK, its results won’t be available for several months, but serious post-election investigations (beginning with the 1948 report that followed the election that gave us “Dewey Defeats Truman”) nearly always suggest worthwhile improvements in methodology, and those suggestions are often adopted. Pollsters themselves will be conducting internal reviews, to see if they can match actual results more closely. Any systematic error will be identified and – as happens all the time – learned from.
WHAT WE ALREADY KNOW
But we know some things now.
There was a late surge. This year, the exit polls show movement towards Trump nationally and in critical states in the final days before the election (CNN provides an excellent set of tabulations). Across the country, Clinton led by two points among those who made up their minds before the last week of the campaign, and lost to Trump by five among the 13% who made up their minds in the last week. And more than 10% of those who decided in the last week didn’t choose either Trump or Clinton. Similarly, about 10% of voters in the three important Rust Belt states decided in the final days, and Trump decisively led with them: by 11 points in Michigan, 16 points in Pennsylvania, and 27 points in Wisconsin.
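As a rough back-of-the-envelope check (my arithmetic, simply treating the exit-poll shares as weights), those two national groups combine to

$$0.87 \times (+2) + 0.13 \times (-5) \approx +1.1 \ \text{points for Clinton},$$

in the same ballpark as her eventual two-point popular-vote margin, and well short of the larger leads in many final polls.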
Eleven days before the election, FBI Director James Comey told Congress he was reopening the investigation into Clinton’s private email server, putting an issue that had long bedeviled Clinton back before the public after it may have receded from most voters’ minds. One week later, he said there was nothing new. Any resulting shift was missed by the polls: many state polls were completed days before the election, before the full impact of these events could be measured.
There may have been shy Trump voters: Many polls saw little or no change in the last week, though the ABC News/Washington Post tracking poll showed movement first towards and then away from Trump. Its final poll matched the polling average. Could that final movement of voters in the last week to Trump, as indicated in the exit polls, be an indication that some voters felt uncomfortable announcing a vote for Trump earlier? So far there is no direct evidence for it, and there are few differences in Trump support between telephone and online polls.
Did pollsters interview good samples?: The Trump campaign’s suppression efforts, noted earlier, may have turned some “likely voters” into no-shows. Other voters may not have been in the polls at all. This year, there was not just a gender gap, but also a race gap, a marriage gap, an age gap, a religious gap, a rural-urban gap, and an education gap, particularly amongst white voters. Less-educated white voters overwhelmingly supported Donald Trump, and if they were missing from the polls, it was Trump voters who were missing.
Exit polls have a known education and age response bias (perhaps not a surprise when those polls require respondents to fill out paper questionnaires), and it is easy to speculate that at least some less-educated voters could have been absent in pre-election polls of all types.
Years ago, we learned that young people, minorities and urban residents (in other words, people who move frequently) were most likely to have only mobile phones, not landlines. Polls with samples of mobile phone numbers were better at gauging support for Barack Obama. Mobile phones are now a routine part of telephone polling.
Single-digit response rates for telephone surveys mean more weighting and modeling, and that increases the possibility of error. Online polls have coverage issues, lack the scientific justification of probability sampling, and require significant modeling, but this year they performed as well as, or even better than, phone polls. (This is quite different from recent British examples – the 2015 election and the referendum on whether or not the United Kingdom should exit the European Union.)
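One standard way to quantify the cost of heavy weighting is the Kish approximation for effective sample size – a textbook formula offered here as general background, not something the polls discussed above necessarily report:

$$n_{\text{eff}} = \frac{\left(\sum_i w_i\right)^2}{\sum_i w_i^2}$$

For example, a 1,000-person sample in which half the respondents carry a weight of 3 and the other half a weight of 1 behaves like a sample of $2{,}000^2 / 5{,}000 = 800$: the nominal margin of error understates the real one.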
The “Gold Standard” – probability telephone surveys – might be better called the “Silver Standard”: as we have seen, it can be tarnished and needs to be frequently reviewed and polished. Achieving that standard requires significant time and energy to reach potential respondents, but the days and weeks that can take limit the news value of polls, and it would cost much more than news organizations today are willing – and able – to spend.
There probably is no replacement for the survey questionnaire, no silver bullet. Big data can help target groups, but it depends on data collection too, and even it may not be able to measure the exact size of each group.
The problem was interpretation: This year’s real failing was interpretation, an error committed by both pollsters and pundits, before and after the election. Maybe it’s more accurate to call it over-interpretation.
Pollsters overpromise. They cite data showing how accurate they were in the past, when they may simply have been lucky. They don’t manage expectations, and they ignore what they know to be true – that polling (and all survey research) is subject to error. They give in to the temptation to report a 2-point, 3-point, or 4-point margin as a clear lead (and I am not blameless here).
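The arithmetic behind that caution is standard. Under simple random sampling, the 95% margin of error on a single candidate’s share is roughly

$$\text{MoE} \approx 1.96\sqrt{\frac{p(1-p)}{n}},$$

and the margin of error on the gap between two dominant candidates is about twice that. For a typical poll of n = 1,000 with shares near 50%, that works out to about ±3 points on each share and ±6 points on the lead – so a 2-, 3- or 4-point “lead” is statistically indistinguishable from a tie, even before any non-sampling error is considered.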
And then reporters believe them – or decide on their own to treat polls as super-predictors. But a national poll says little about what will happen in Wisconsin. The election horserace is news, and that is not going to change. But reporting could be a lot better, and poll results expressed with less certainty. [There may have been some improvement over the years. In 1948, Life Magazine described Thomas Dewey as the “next president” in its pre-election issue. But Newsweek’s pre-printed, pre-distributed and then-recalled commemorative issue featuring “Madam President” is now on sale on eBay.]
We have to do a better job of talking about polls and training journalists. Just this year, ESOMAR joined with AAPOR and WAPOR (the World Association for Public Opinion Research) and worked with the Poynter Institute to produce an internationally focused online course for journalists. The course will be promoted especially in France, the Netherlands and Germany, with their upcoming elections in mind.
Much about this election can be explained, but pollsters still have a lot to answer for. So do the rest of us, who forgot that polls are only estimates and can be wrong. We must make sure that those who conduct, interpret and comment on this most public face of research provide realistic estimates, and that no one expects a Rolls-Royce for the price of a Ford.
ONLINE COURSE FOR JOURNALISTS: UNDERSTANDING AND INTERPRETING OPINION POLLS
AAPOR, ESOMAR and WAPOR have launched the first-ever international online course to help journalists improve media reporting about polls and survey results.
Aimed at journalists, media students, bloggers, voters and anyone who wants to know how and why polls are conducted, the course is hosted by Poynter, an online training source for journalists.
This course will help journalists understand and interpret opinion polls. It will enable them to assess poll quality and explain why polls covering the same election can produce different results and why the outcome of an election might deviate from the result ‘predicted’ by the polls.
Developed by an international expert team, and funded by ESOMAR, WAPOR and AAPOR, the course is free of charge. Go to:
For more information contact:
Professional.firstname.lastname@example.org or email@example.com
Kathy Frankovic is a polling consultant and former director of surveys at CBS News and a member of ESOMAR’s Professional Standards Committee
By Stephanie Alaimo
Relating to our consumers – whom we can sometimes forget to regard as People – emerged as an important theme in Monday’s presentations. Taken together, the following presentations argue that generating greater empathy, which requires more authentic interactions with our research subjects, should be one of our most important goals as market researchers. Importantly, we must not forget the amount of work we are asking our respondents to complete, we must use methodologies that situate people within their daily lives so that we do not neglect context, and we must understand the impact of consumer goods on people’s lives. Several of Monday’s presenters began to swap the word “consumer” for the word “person” or “people.” This is a very effective change in MR language, one that should remind us all to contextualize and humanize our research about… People!
Empathy, the ability to deeply relate to and share the experience of others, seemed to be the greatest expression of this change. Thomas Troch of InSites Consulting USA, in his talk “Enter the Experience Economy: Increasing memory and empathy to drive change,” noted that empathy is the real driver of change resulting from market research. In his presentation, he frequently reminded us to consider consumers as people – because they are. In the experience economy, where people (not just consumers!) are motivated by new experiences, we must be willing and able to capture and relate to the fullness and richness of human experience, even as we interact with consumer goods and services. Thomas used 3D footage of himself brushing his teeth to show that we can gain an increased ability to relate through immersive methodologies.
Equally importantly, we must shift our thinking about what it is that we provide to our clients. Our empathy should not be reserved for the people we study. Of course, we want to provide the most accessible knowledge possible to our clients. We do, after all, seek to provide a service. Why is a service more valuable? When we provide raw data, we make our research into a simple, untransformed commodity. By providing a more accessible report, we transform it into a good, which has greater value. When we provide workshops and presentations for our clients, we provide a truly valuable service. The interactive service of a workshop or presentation more closely mirrors the types of interactive and complete research that we should strive for, in order to increase our own empathy for the people we study. Finally, when we provide a workshop, we can most effectively transmit the empathy that we have learned to our clients. This will be the greatest driver of impact. It is also the most empathetic way to meet our clients’ needs for understanding.
Some of the most surprising findings presented in Monday’s workshops were those presented by Nikki Lavoie, of MindSpark International. Lavoie also discussed empathy in her talk “Connecting With Consumers: A New Way of Plugging In: Why empathy is the emotional trailblazer in the world of social media and screens.” She emphasized the need to understand what drives a person to participate in market research. Of course, our most habitual, quickest answer would be “an incentive.” Lavoie questions this assumption. Drawing on research she conducted evaluating the effects of incentives on participation and the quality of responses, as well as on behavioral research, she notes that financial incentives greatly change the nature of any interaction we might have. She found that non-incentivized volunteers participated in research just as completely as incentivized participants. And, most shockingly, they dropped out less often. These respondents were motivated by – guess what – empathy. Volunteers are typically motivated by the desire to do something nice for someone or for a community, or by the desire to improve the world. Perhaps we can connect to people more easily, as a profession, if we remember all the directions in which empathy can flow, and in turn seek to encourage empathy in all of our interactions.
Empathy was also invoked by Luke Sehmer of Research Now UK and Melanie Courtright of Research Now US. In their talk “Clipboards, Calls and Focus Groupies: The public perception of market research and the implications for the future” they alerted us to the fact that many people do not trust market research, or market researchers. What a finding! To nuance this point, they found that in research where a participant directly engages with a researcher on the telephone, there is more trust. Research where the person does not have direct contact, such as a Google survey, scores much lower for trust. But this is not surprising when we think about empathy. Empathy is relating to and sharing experiences and emotions. It is a very human thing. A Google survey lacks most of the tools required to generate empathy. And empathy can generate trust. So, empathy may be the solution to trust for our industry. We must solve for trust if we are going to expect people to discuss their lives with us. Otherwise, we risk very low-quality results.
How do empathy and understanding relate? Well, if we cannot empathize with people, we simply cannot understand them. They may answer our questions, participate in our exercises, or even give us their opinions. But if we cannot contextualize their lived experiences, if we cannot situate our questions within the systems of their lives, the insights that we draw from their answers are likely to remain extremely superficial. Even worse, we may simply miss very important conclusions through our failure to relate. And for our clients, when change and innovation are driven by empathy, they are more likely to be solutions that relate directly to people’s needs, desires, and lives.
Stephanie Alaimo is one of the official RWC bloggers for Congress 2016.
By Rebecca Heaney
The key takeaways from Day #1 of Congress align neatly with the highlights of yesterday’s sessions, which impressed on me the importance of remembering that respondents are people – a thought that is completely obvious and at the same time easily and quickly forgotten in our zeal to collect data and deliver insights. In fact, it is commonplace to go through an entire research project without once acknowledging the people who participated in it – focusing instead on sample sizes and representativeness, KPIs and indexes, trends and outliers in the data, insights and implications for business. It’s easy to forget that, as Luke Sehmer pointedly reminded us in one of today’s sessions, the entire market research industry is completely reliant on real people participating in research, or at the very least giving us permission to conduct it. It’s a humbling realization that the participants we so often complain about for not paying enough attention to surveys or not giving us complex enough answers are, in fact, the people we are indebted to.
Without them, market research would not exist, so how can we conduct research keeping participants in mind?
Shorter Surveys, Less Repetition, and Mobile Optimization
As already touched on in yesterday’s blog, mindfully designing shorter surveys and reducing repetition is one approach. In the session titled “Taste the Feeling of a New Brand Tracking Ecosystem”, Clare-Marie Hulsey (of the Coca-Cola Company) presented a case study showing how drastic changes (including asking some questions some of the time, instead of all KPIs all of the time) were made to their large, multi-market brand tracker to make it more respondent-friendly, with many positive outcomes. Martin Dimov (GemSeek) and Steve Wigmore (Lightspeed) were also challenged with reducing survey length in a large-scale project. In their talk “A Quantum Leap for the Research Industry”, Dimov and Wigmore described how they were able to cut LOI (length of interview) significantly by asking respondents to complete subsets of questions (rather than the whole list, resulting in a lot of missing data at the respondent level) and using “Ascription” – a solution that uses data science to predict what the missing answers would be and merge those predictions with the actual data.
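The talk, as summarized here, doesn’t spell out how Ascription works internally, so the sketch below is only a loose illustration of the general idea – a modular questionnaire plus model-based imputation – using scikit-learn’s IterativeImputer as a stand-in; the data and every name in it are hypothetical, not from the presentation:

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(1)

# Hypothetical modular survey: 12 correlated 1-5 scale questions, but each
# respondent is shown only a random subset, so the raw data is mostly missing.
n_respondents, n_questions = 500, 12
latent = rng.normal(0.0, 1.0, size=(n_respondents, 1))          # shared attitude
noise = rng.normal(0.0, 0.7, size=(n_respondents, n_questions))
full = np.clip(np.round(3 + latent + noise), 1, 5)

shown = rng.random(full.shape) < 0.5          # each question asked with prob 0.5
observed = np.where(shown, full, np.nan)
df = pd.DataFrame(observed, columns=[f"q{i+1}" for i in range(n_questions)])

# Model-based "ascription" stand-in: predict each missing answer from the
# answers the respondent did give, then merge predictions with real data.
imputer = IterativeImputer(max_iter=10, random_state=0)
completed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)

print(df.isna().mean().mean())    # ~50% of cells missing before imputation
print(completed.head())           # fully populated respondent-level data
```

In practice one would validate the ascribed answers against hold-out respondents who completed the full questionnaire; the point of the sketch is just the shape of the pipeline – deliberately incomplete data in, modeled-plus-actual data out.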
Making Surveys More Interesting and Engaging
While a shorter survey is better than a longer one, we all know that a short survey does not equal a good survey. Another approach to designing research with the respondent experience in mind is to make the surveys themselves more engaging. Gamification has been mentioned multiple times today as one way of engaging participants, while the use of photos, voice, and videos was discussed in Jason Morris (Millward Brown), Sherri Stevens (Millward Brown), and Stefan Kuegler’s (Lightspeed) presentation, “Respondent Engagement: Investing in Stickiness”. With the variety of technological solutions and platforms available today that make collecting and analyzing digital content relatively easy (many of which are being presented at the Exhibition), these options are becoming increasingly viable for regular use.
Motivations for Participating in Research
Luke Sehmer and Melanie Courtright (Research Now) and Nikki Lavoie (MindSpark Research International) took a step back from thinking about engaging research design to think about why people are motivated to participate in market research in the first place. In the session, “Clipboards, Calls and Focus Groupies: The public perception of market research and the implications for the future”, Sehmer and Courtright claim that people participate in market research for three reasons: to give back, to get something back, and to give themselves a pat on the back. Drawing on social psychology research, Lavoie explained in her presentation, “Connecting with Consumers: A New Way of Plugging In”, how offering financial incentives (inducing a “get something back” motivation) can undermine intrinsic motivation for participation and has negative implications for engagement.
The Relationship Between Market Researchers and Research Participants
The theme of one of today’s sessions was “Wow! We’re raising the game: When status quo is not an option”. I think this phrase perfectly describes where we are when it comes to participant engagement and experience. While there are many compelling reasons to care about the respondent experience and participant engagement (e.g., lengthy, repetitive surveys cost more, take longer to field, and elicit less accurate and less rich data), we shouldn’t care only about how these issues affect our bottom line. We need to think about the implications of poor respondent experience for the market research industry as a whole – how each study is one touchpoint that influences how people see the industry and respond to us. The consequences of neglecting the needs of our participants are too important to ignore. We need to start thinking of participants as partners instead of subjects, seeing them as people with context, concerns and lives outside of research, and making their needs just as important as (if not more important than) those of our clients, because we, as an industry, can’t survive without them.
Rebecca Heaney, Northstar Research, is one of the official RWC bloggers for ESOMAR Congress 2016.