by Kathy Frankovic, former director of surveys at CBS News and a member of ESOMAR’s Professional Standards Committee
Election polling is the most visible part of market, opinion and social research. It carries the heavy burden of getting things right, and its past successes have brought high and perhaps unearned expectations for its accuracy. This year, and the U.S. presidential election in particular, provided a good example of what happens when people forget the limitations of polls: that sampling and non-response matter, and that ascribing too much precision to polling estimates in times of change can make pundits and journalists look as silly as the pollsters they berate.
We’ve all seen the discussion about what happened in the U.S. last week: was there a “late surge,” were people misrepresenting their vote intention (the “shy Tories/Trump voters”), could pollsters have missed some important groups, did everyone put too much confidence in poll results? We have also seen claims for “new” methods to replace polling – single-bullet solutions for a problem that may or may not exist.
The precision people wanted to see in polls this year made polling aggregators and pundits far more sure of what would happen than may have been realistic. Polls do not have absolute accuracy, and even the best pundits can mis-read them. This year Nate Silver (fivethirtyeight.com), lionized after previous elections for his accuracy creating algorithms using polls to produce probabilities of the outcome, gave Democrat Hillary Clinton a nearly 70% probability of winning (to his credit, that 70% probability had dropped in the last week from a higher likelihood, but it was still a clear prediction). Other aggregators (like pollster.com) put the odds of a Clinton victory even higher, above 98%.
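The gap between a roughly 70% and a 98% probability largely reflects how much total error an aggregator assumes around the polling average. A minimal sketch (with hypothetical numbers, not any aggregator's actual model) shows how the implied win probability for a given lead swings with the assumed error:

```python
from statistics import NormalDist

def win_probability(lead: float, sigma: float) -> float:
    """P(true margin > 0) when the polling-average 'lead' carries normal error sigma."""
    return 1 - NormalDist(mu=lead, sigma=sigma).cdf(0)

# Hypothetical 3-point lead in the national polling average:
print(round(win_probability(3, 6), 2))    # wide error band -> modest favourite (~0.69)
print(round(win_probability(3, 1.5), 2))  # narrow error band -> near certainty (~0.98)
```

Aggregators who allowed for large, correlated polling errors produced probabilities closer to 70%; those who effectively treated the polling average as nearly exact pushed past 98%.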
To be clear: nearly all final U.S. 2016 pre-election polls showed a small national lead for Clinton. And she carried the national popular vote by about two percentage points over Republican nominee Donald Trump (now with a counted two million vote lead in the national vote totals). But the national vote count (and national polls) say little about what happens in individual states, and that’s what matters. Had Clinton won the necessary Electoral College votes, we would have been having a very different discussion about polling today than we are, asking how pollsters could have done better, rather than calling the pre-election poll results a “massive, historical, epic disaster.” While there are methodological issues with the 2016 election polls, the industry should not be “reeling.”
That over-reliance on numbers made this year's post-election commentary even more apocalyptic than necessary, as seen in the already-noted descriptions of a "reeling" profession and a "massive, historical and epic disaster." Those descriptions are overblown. See Sean Trende of RealClearPolitics, another poll aggregator, and The New York Times' Nate Cohn on this. Even Nate Silver has called the situation a "failure of conventional wisdom more than a failure of polling."
The individual final pre-election polls ranged from a six-point Clinton lead to a three-point margin for Republican victor Donald Trump. Comparisons are complicated because some polls included the third-party candidates (Gary Johnson for the Libertarians and Jill Stein for the Green Party) and others did not. Where included, the third-party candidates received 5 to 9% combined (they have received about 5% of the actual vote). The polls also varied in their estimates of undecided voters: from as low as 1% to as high as 9%, depending on the poll.
The Clinton national vote win mattered little, as Trump carried Michigan, Pennsylvania and Wisconsin (state-level polling varied widely in quality, and the accuracy gap was particularly noticeable in Wisconsin). Those three states have a total of 46 electoral votes, and put Trump over the top in the electoral vote count. Just over 100,000 votes across them made the difference. [By contrast, Clinton leads in California by more than 3,000,000 votes: an excess of votes cast in the wrong place.]
This structural peculiarity of the American political system is not especially popular. In 2013, the Gallup Poll and others found six in ten Americans, Republicans, Democrats and independents alike, supporting the abolition of the Electoral College and instead choosing Presidents by the national popular vote. [Of course, after this election, Republicans are likely to change their minds and think the Electoral College is quite a good thing, just as they did in 2000, when Democrat Al Gore won the popular vote, but lost the Presidency to George W. Bush.]
In an election this close, there are lots of explanations. Some have nothing to do with polls. Campaigns make decisions affecting small groups of voters who are hard to track in polls. Television advertising can matter (the Trump campaign poured money into Wisconsin, while the Clinton campaign took the state for granted and the candidate herself never visited). The Trump campaign also admitted it wanted to suppress turnout of key Clinton groups (college-educated women, blacks, young liberals) by reminding them of Bill Clinton’s past womanizing and earlier Hillary Clinton statements she later disavowed. Votes cast by young voters and black voters did decline this year, and overall Clinton received far fewer votes than Barack Obama in 2012.
But pre-election polls aren’t off the hook. National polls overestimated Clinton’s popular vote by about the same amount that they underestimated Barack Obama’s margin in 2012. Many state polls in critical states, especially in the Midwest, were off by more, and had Clinton clearly ahead in states that Trump carried.
An American Association for Public Opinion Research (AAPOR) panel, formed before the election, will review the election polls. Like the British Polling Council panel following the 2015 general election in the UK, its results won't be available for several months, but serious post-election investigations (beginning with the 1948 report that followed the election that gave us "Dewey Defeats Truman") nearly always suggest worthwhile improvements in methodology, which many pollsters adopt. Pollsters themselves will be conducting internal reviews to see whether they could have matched the results more closely. Any systematic error will be identified and – as happens all the time – learned from.
WHAT WE ALREADY KNOW
But we know some things now.
There was a late surge. This year, the exit polls show movement towards Trump nationally and in critical states in the final days before the election (CNN provides an excellent set of tabulations). Across the country, Clinton led by two points among those who made up their minds before the last week of the campaign, and lost to Trump by five among the 13% who made up their minds in the last week. And more than 10% of those who decided in the last week didn’t choose either Trump or Clinton. Similarly, about 10% of voters in the three important Rust Belt states decided in the final days, and Trump decisively led with them: by 11 points in Michigan, 16 points in Pennsylvania, and 27 points in Wisconsin.
Eleven days before the election, FBI Director James Comey told Congress he was reopening the investigation into Clinton's private email server, putting an issue that had long bedeviled Clinton back before the public after it may have receded from most voters' minds. One week later, he said there was nothing new. Any shift these events produced was largely missed: many state polls were completed days before the election, before their full impact could be measured.
There may have been shy Trump voters: Many polls saw little or no change in the last week, though the ABC News/Washington Post tracking poll showed movement first towards and then away from Trump; its final poll matched the polling average. Could the last-week movement of voters to Trump, as indicated in the exit polls, mean that some voters felt uncomfortable announcing a vote for Trump earlier? So far there is no direct evidence for it, and there are few differences in Trump support between telephone and online polls.
Did pollsters interview good samples?: Trump suppression efforts, noted earlier, may have turned some “likely voters” into no-shows. Other voters may not have been in the polls at all. This year, there was not just a gender gap, but also a race gap, a marriage gap, an age gap, a religious gap, a rural-urban gap, and an education gap, particularly amongst white voters. Those less educated white voters overwhelmingly supported Donald Trump, and if they were missing from the polls, it was Trump voters who were missing.
Exit polls have a known education and age response bias (perhaps not a surprise when those polls require respondents to fill out paper questionnaires), and it is easy to speculate that at least some less-educated voters could have been absent in pre-election polls of all types.
Years ago, we learned that young people, minorities and urban residents (in other words, people who move frequently) were most likely to have only mobile phones, not landlines. Polls with samples of mobile phone numbers were better at gauging support for Barack Obama. Mobile phones are now a routine part of telephone polling samples.
Single-digit response rates for telephone surveys mean more weighting and modeling, and that increases the possibility of error. Online polls have coverage issues, lack the scientific justification of probability sampling, and require significant modeling, but this year they performed as well as, or even better than, phone polls. (This is quite different from recent British examples – the 2015 general election and the referendum on whether or not the United Kingdom should exit the European Union.)
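To see why that weighting matters, here is a purely illustrative post-stratification sketch (all numbers hypothetical): if a sample over-represents college graduates relative to the electorate, reweighting by education visibly shifts the estimate.

```python
# Hypothetical sample that is 50% college-educated, versus an
# electorate assumed to be 40% college-educated.
sample = [
    {"educ": "college",    "share_of_sample": 0.50, "trump_support": 0.40},
    {"educ": "no_college", "share_of_sample": 0.50, "trump_support": 0.60},
]
population = {"college": 0.40, "no_college": 0.60}  # assumed true mix

# Unweighted estimate uses the sample's own education mix;
# the weighted estimate substitutes the electorate's mix.
unweighted = sum(r["share_of_sample"] * r["trump_support"] for r in sample)
weighted = sum(population[r["educ"]] * r["trump_support"] for r in sample)
print(round(unweighted, 2), round(weighted, 2))  # 0.5 0.52
```

Over-representing college graduates understates Trump support by two points here – exactly the kind of gap that matters in a close race, and the correction is only as good as the assumed population figures.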
The “Gold Standard” — probability telephone surveys — might be better called the “Silver Standard”, as we have seen it can be tarnished and needs to be frequently reviewed and polished. Achieving that “gold standard” requires significant time and energy to reach potential respondents, but the days and weeks that can take limit the news value of polls, and the effort would cost much more than news organizations today are willing – and able – to spend.
There probably is no replacement for the survey questionnaire, no silver bullet. Big data helps target groups, but, dependent as it is on data collection, even big data may not be able to measure the exact size of each group.
The problem was interpretation: This year's real failure was one of interpretation – or, more accurately, over-interpretation – an error committed by both pollsters and pundits, before and after the election.
Pollsters overpromise. They cite data showing how accurate they were in the past, when it may well have been that they were simply lucky. They don't manage expectations, and they ignore what they know – that polling (and all survey research) is subject to error. They give in to the temptation to report a 2-point, 3-point, or 4-point margin as a clear lead (and I am not blameless here).
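The arithmetic behind that temptation is straightforward. A rough sketch of the standard 95% margin of error for a simple random sample shows that a 2-point lead in a typical 1,000-person poll sits well inside sampling error – before even counting non-response and coverage problems:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical poll of 1,000 respondents showing a 48%/46% race:
moe = margin_of_error(0.48, 1000)
print(round(100 * moe, 1))  # ~3.1 points on each candidate's share
```

With roughly ±3 points on each candidate's share, a 48–46 "lead" is statistically indistinguishable from a tie, which is why reporting it as a clear lead misleads readers.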
And then reporters believe them – or decide on their own to treat polls as super-predictors. But a national poll says little about what will happen in Wisconsin. The election horserace is news, and that is not going to change. But reporting could be a lot better, and poll results could be expressed with less certainty. [There may have been some improvement over the years. In 1948, Life magazine described Thomas Dewey as the “next president” in its pre-election issue. But Newsweek’s pre-printed, pre-distributed and then-recalled commemorative issue featuring “Madam President” is now on sale on eBay.]
We have to do a better job of talking about polls and training journalists. Just this year, ESOMAR joined with AAPOR and WAPOR (the World Association for Public Opinion Research) and worked with the Poynter Institute to produce an internationally-focused online course for journalists; it will promote the course especially in France, the Netherlands and Germany, taking into account their upcoming elections.
Much about this election can be explained, but pollsters still have a lot to answer for. So do the rest of us, who forgot that polls are only estimates and can be wrong. We must make sure that those who conduct, interpret and comment on this most public face of research provide realistic estimates, and do not expect a Rolls-Royce for the price of a Ford.
ONLINE COURSE FOR JOURNALISTS: UNDERSTANDING AND INTERPRETING OPINION POLLS
AAPOR, ESOMAR and WAPOR have launched the first-ever international online course to help journalists improve media reporting about polls and survey results.
Aimed at journalists, media students, bloggers, voters and anyone who wants to know how and why polls are conducted, the course is hosted by Poynter, an online training source for journalists.
This course will help journalists understand and interpret opinion polls. It will enable them to assess poll quality and explain why polls covering the same election can produce different results and why the outcome of an election might deviate from the result ‘predicted’ by the polls.
Developed by an international expert team, and funded by ESOMAR, WAPOR and AAPOR, the course is free of charge. Go to:
For more information contact:
Professional.firstname.lastname@example.org or email@example.com
Kathy Frankovic is a polling consultant and former director of surveys at CBS News and a member of ESOMAR’s Professional Standards Committee
A New Privacy Shield will be in place to protect EU/US data flows starting 1 August 2016
By Kim Smouter
A MAJOR RELIEF?
Europe and the United States have announced that they have come to an agreement on the replacement mechanism for the EU/US Safe Harbour. The Safe Harbour scheme had been struck down by the European Court of Justice last year, forcing European and American authorities to scramble to set up a replacement mechanism allowing the free flow of data between the world’s two largest data markets.
In February, authorities announced the Privacy Shield, which sought to address the European Court of Justice’s opposition to indiscriminate mass surveillance of Europeans and the inequivalent level of redress afforded to them. But following negative feedback about the new scheme from European politicians and from European and national data protection authorities, it was uncertain the Privacy Shield would ever see the light of day.
Companies wishing to sign up to the new Privacy Shield will be invited to do so starting 1 August 2016; note that, at the moment, data transfers using the old Safe Harbour are illegal and subject to enforcement action. German data protection authorities have already begun issuing fines to companies still transferring data under the old scheme.
THE PRIVACY SHIELD SURVIVES SCRUTINY AND POLITICAL OPPOSITION
So, despite political opposition to the new Shield, representatives of EU Member States and the European Commission gave their final nod of approval to the proposed scheme. A new version of the text was prepared to address the negative reviews of the national data protection authorities and the European Data Protection Supervisor who will eventually have enforcement responsibility over the scheme.
The Privacy Shield is a slightly different animal from its predecessor, but for those involved in the previous scheme it should be seen as an evolution of the pre-existing requirements.
Nonetheless, there are a number of changes to highlight from the perspective of a company including:
STRICTER NOTIFICATION REQUIREMENTS
- The Privacy Shield requires additional information be provided to individuals in the Notice Principle, including a declaration of the organization’s participation in the Privacy Shield, a statement of the individual’s right to access personal data, and the identification of the relevant independent dispute resolution body;
STRICTER CONTRACTUAL REQUIREMENTS
- The Privacy Shield strengthens protection of personal data that is transferred from a Privacy Shield organization to a third party controller by requiring contracts that provide that personal data may only be processed for limited and specified purposes consistent with the consent provided by the individual, and that the recipient will provide the same level of protection as the Principles;
GREATER EMPHASIS ON DATA CHAIN RESPONSIBILITIES
- The Privacy Shield strengthens protection of personal data that is transferred from a Privacy Shield organization to a third party agent, requiring a Privacy Shield organization to:
- take reasonable and appropriate steps to ensure that the agent effectively processes the personal information transferred in a manner consistent with the organization’s obligations under the Principles;
- upon notice, take reasonable and appropriate steps to stop and remediate unauthorized processing; and provide a summary or a representative copy of the relevant privacy provisions of its contract with that agent to the Department upon request;
CLARIFICATION OF LIABILITIES
- The Privacy Shield organization (the data importer) is responsible for the processing of personal information it receives under the Privacy Shield and subsequently transfers to a third party acting as an agent on its behalf.
- The Privacy Shield organization remains liable under the Principles if its agent processes such personal information in a manner inconsistent with the Principles, unless the organization proves that it is not responsible for the event giving rise to the damage;
- The Privacy Shield also clarifies that Privacy Shield organizations must limit personal information to the information that is relevant for the purposes of processing;
ANNUAL CERTIFICATION REQUIREMENTS
- The Privacy Shield requires an organization to annually certify with the US Department of Commerce its commitment to apply the Principles to information it received while it participated in the Privacy Shield if it leaves the Privacy Shield and chooses to keep such data;
- It also requires that an independent recourse mechanism be provided at no cost to the individual;
STRONG EXPECTATIONS TO RESPOND PROMPTLY TO REQUESTS
- The Privacy Shield requires organizations and their selected independent recourse mechanisms to respond promptly to inquiries and requests by the Department for information relating to the Privacy Shield;
- The Privacy Shield also requires organizations to respond expeditiously to complaints regarding compliance with the Principles referred by EU Member State authorities through the Department;
- It further requires a Privacy Shield organization to make public any relevant Privacy Shield-related sections of any compliance or assessment report submitted to the FTC if it becomes subject to an FTC or court order based on non-compliance.
MORE FLEXIBLE RETENTION PERIODS FOR RESEARCH AND STATISTICAL ANALYSIS
- The Privacy Shield hasn’t forgotten about offering a differentiated regime for research, as organizations may retain personal information for the time and to the extent such processing reasonably serves the purposes of archiving in the public interest, journalism, literature and art, scientific or historical research, and statistical analysis.
THAT’S GREAT, BUT WHAT’S THE ADVICE FOR MARKET RESEARCHERS?
Clearly the adoption of the new Privacy Shield offers a much more “user-friendly” mechanism to re-enable data transfers between the EU and the US, just as the Safe Harbour scheme before it allowed more than 4,000 companies to transfer data easily between the two data markets. Market, opinion, and social researchers also benefited from that scheme: leading agencies used it, as did many suppliers to the industry.
The alternatives, like binding corporate rules and the standard contractual model clauses provided by the Commission, can be cumbersome or worse, and sometimes simply cannot be applied to the use case. So having this scheme can be a relief.
There is, nonetheless, a word of caution to be placed on rushing to adopt the Privacy Shield. As highlighted by the European Parliament’s rapporteur on the General Data Protection Regulation, Jan Albrecht, there are many who think the new Privacy Shield will not pass muster in front of the courts and Privacy Advocates will be rushing to challenge the new decision.
There is therefore a real risk that in the not too distant future, the Privacy Shield may be struck down like its predecessor by the Court of Justice. Companies should think carefully about whether they wish to invest the time and resources to certify under the new scheme in light of this uncertainty.
In light of these developments, our advice to members – consistent with the advice we have been providing since the Court of Justice decision – can be summarised as follows:
- Conducting an audit of any data transfers that may pass through the US is crucial to determining your exposure to the Court of Justice ruling that personal data transfers to the US under the Safe Harbour scheme are illegal.
- Updating your privacy policies to highlight the existence of these data transfers, if you haven’t already, is a crucial step. The aim should be to state as clearly as possible what data is transferred to the US, the conditions under which that data travels, and the risks involved. It’s important that this is understood to be an indication of goodwill; it shouldn’t be mistaken for a compliance measure by the organisation.
- Seeking alternatives to transfers to the US remains a useful step to consider as all transfer schemes currently in existence have proven subject to potential legal challenges.
- Where possible, partner with European-based services to execute your data processing tasks involving Europeans’ personal data as this will reduce exposure to legal problems stemming from inequivalent levels of protections that you may encounter resulting from the use of a non-EU partner. Anonymised data is not subject to restrictions and therefore it may be wise to process the data in Europe, and then send it to US entities as anonymised data sets.
- If this is not possible or practical, then one of the alternative mechanisms – binding corporate rules, standard model contractual clauses, or the Privacy Shield (starting 1 August 2016) – must be in place before personal data transfers from the EU to the US can take place. If you’ve already adopted one of the other alternative mechanisms, there is little reason to switch to the Privacy Shield.
- If you intend to use the Privacy Shield, we recommend that partners you use for data processing be subject to an annual audit of their Privacy Shield certification along with meeting the requirements referenced above.
- It may also be useful to consider adding a safeguard clause into your contracts which allows you to require your partner to work with you to find alternatives should the Privacy Shield be subject to a new legal challenge, and should an alternative not exist, allow the termination of the partnership without any additional fees.
WE’RE HERE TO HELP YOU!
ESOMAR members may want help determining whether the Privacy Shield is the right mechanism for them – the document itself can be quite daunting! That’s why ESOMAR’s Professional Standards team operates a free queries service that can assist members in their deliberations. Members can reach us at firstname.lastname@example.org, so if you have any questions don’t hesitate to get in touch.
Kim Leonard Smouter is Head of Public Affairs & Professional Standards at ESOMAR.