As we wind down for Christmas, we look back at our most-read articles of 2018. With topics ranging from NPS to projective techniques, culture and agile methods, 2018 has been a year of variety on Research World Connect!
By Kathy Frankovic
It’s fair to say that the American election polls in 2018 were quite successful. Newspaper headlines and stories after the midterm elections for the most part praised them. However, their tone was sometimes one of surprise and shock, reflecting the long-term impact of the criticism that followed the 2016 presidential election.
That year, the U.S. pre-election polls averaged a three-point lead for Hillary Clinton in the national popular vote, and she did win the popular vote by just over two percentage points. While that would elsewhere have been seen as a success, the American electoral system selects a winner through the Electoral College, where votes are allocated based on the number of Senators and Representatives each state has in Congress. In most states, the popular vote winner takes all that state’s electoral votes. By winning Michigan, Wisconsin and Pennsylvania by a combined total of 79,000 votes, Donald Trump won a majority in the Electoral College. This state by state counting received less attention in 2016 than it should have (after all, as recently as 2000 the candidate who won the national popular vote also lost in the Electoral College).
The national popular vote for the House of Representatives means little, as seats are allocated district by district. But “generic ballot” national polls are common, and this year indicated that Democrats would have a clear lead in votes cast nationally. They did, winning the national House vote count by eight percentage points. But as in presidential elections, capturing a majority of the vote overall doesn’t mean that you will win enough seats (just as it may or may not give you a victory in the Electoral College). In 2012, Democrats won 1.4 million more House votes nationally than Republicans did, but still wound up with 33 fewer seats.
The media reporting of the accuracy of pre-election polls after the election came with caveats. With so many races to poll, there were bound to be errors. New York Magazine noted in its headline that “The Polls Were Fine, If Not Perfect.” The Washington Post asked if it was “Another Bad Night for Political Polls?” and then answered its own question: “Not Really.” State polls still leaned towards Democrats, but only by 0.4 percentage points, much better than in 2016.
Midterm elections must be seen as a collection of many races – the 35 Senate elections and the 435 House races. So national polls are not enough. This year, however, there was an even greater focus on individual House races, particularly those that were likely to be close, or were viewed as having the potential to change sides. There were several creative attempts to deal with the large number of such races, using a combination of new and older methodologies. The old ways of doing things are definitely under challenge.
The New York Times’ Upshot paired with Siena College to conduct polls in dozens of competitive House districts. Instead of making telephone calls using random digit dialing, it sampled from voter lists provided by a vendor. It sampled within Congressional districts, making adjustments based on the availability of telephone numbers for subgroups and relying on outside information for data not available on voter lists, like education. It then created turnout estimates. This is difficult in the U.S., as voting is not compulsory and can be very low in non-presidential years. Turnout was expected to rise from the 36.7% of the vote-eligible population that voted in the 2014 midterm election, and it did: the rate jumped twelve percentage points, as nearly half the eligible population turned out in 2018, the highest midterm turnout in more than 50 years.
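The adjustment step described above – reweighting a sample so subgroups such as education match the electorate – can be sketched as a minimal post-stratification example. All of the shares and support figures below are invented for illustration; this is not The Times/Siena’s actual model:

```python
# Illustrative post-stratification weighting. Every number here is
# invented; the point is only to show the mechanics of the adjustment.

# Share of the target electorate in each education group (assumed).
population_share = {"college": 0.40, "no_college": 0.60}

# Share of completed interviews in each group (assumed: college graduates
# are easier to reach by phone, so they are over-represented).
sample_share = {"college": 0.55, "no_college": 0.45}

# Candidate support among respondents within each group (assumed).
support = {"college": 0.58, "no_college": 0.44}

# Weight each group so the weighted sample matches the electorate.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

unweighted = sum(sample_share[g] * support[g] for g in support)
weighted = sum(sample_share[g] * weights[g] * support[g] for g in support)

print(f"unweighted estimate: {unweighted:.3f}")  # 0.517 – over-states support
print(f"weighted estimate:   {weighted:.3f}")    # 0.496
```

Because college graduates are over-sampled and more supportive of the hypothetical candidate, the raw average over-states support; the weights pull the estimate back toward the electorate’s true composition.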
Since The Times and Siena College polled in what were expected to be competitive districts, the polls showed elections that were very close – within the sampling error – in nearly all of them. The Times decided to show results in real time as each interview was completed: while a poll was in the field, red and blue dots appeared at the respondent’s location about an hour after the interview finished. As is the case with most telephone polls, the vast majority of calls do not result in an interview, so watching the polling “live” could be a slow and lengthy process, not necessarily an exciting one. It was viewed as a way of making election polling more transparent to the public.
CBS News partnered with YouGov for its Battleground Tracker, using YouGov’s online panel with oversamples in contested districts. By also using information about voters throughout the country, CBS News and YouGov were able to make better estimates of the final House outcomes in the contested districts. The questionnaires were somewhat longer than those used by the Upshot, and the Battleground Tracker conducted its polling online rather than by telephone. Both of these approaches required a very large number of interviews.
The final Battleground estimate was 225 seats for Democrats and 210 for Republicans. With a large margin of error (plus or minus 13 seats on each number), the final outcome of what appears to be 235 Democratic seats (some races are still not officially settled) fell within that estimate’s range.
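An interval like “225 seats, plus or minus 13” can be produced by simulating many elections from per-district win probabilities and reading off the spread of the simulated seat totals. The sketch below uses an invented 435-district map and invented probabilities; it is not the actual CBS News/YouGov model:

```python
import random

# Toy seat-forecast simulation. The district probabilities are invented:
# 180 safe seats for each party plus 75 competitive districts.
random.seed(0)
win_prob = ([0.95] * 180            # safe Democratic districts (assumed)
            + [0.05] * 180          # safe Republican districts (assumed)
            + [random.uniform(0.3, 0.7) for _ in range(75)])  # toss-ups

def simulate_seats(probs):
    """One simulated election: count the districts the Democrat wins."""
    return sum(random.random() < p for p in probs)

# Run many simulated elections and sort the seat totals.
sims = sorted(simulate_seats(win_prob) for _ in range(10_000))

median = sims[len(sims) // 2]
lo, hi = sims[int(0.025 * len(sims))], sims[int(0.975 * len(sims))]
print(f"estimate: {median} seats (95% interval: {lo}-{hi})")
```

The width of the reported margin then reflects both sampling error in each district poll and the sheer number of genuinely uncertain races.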
This year, even the traditional exit poll had a challenger. The exit poll, invented in the 1970s, has changed as Americans have changed how they cast ballots. With a growing share of the vote cast before election day (through absentee and early voting), Edison Media Research, which has conducted the media exit poll for more than a decade, now supplements traditional exit polling at precincts with pre-election telephone polls and polls at physical early-voting locations. This year, the Associated Press partnered with the National Opinion Research Center (NORC) and Fox News to expand the reach of election day polling, creating AP VoteCast: 40,000 pre-election interviews using registration-based sampling (which The New York Times Upshot also used), 6,000 interviews with NORC’s probability-based AmeriSpeak online panel, and more than 90,000 interviews with non-probability online panelists. With approximately 60 different questions, AP VoteCast could not only tell who had won, but also provide issue and demographic information.
Overall, the election polls of 2018 did well, and some pollsters appear to have used the concerns of 2016 as a starting point for developing new methods of understanding election behavior.
A surprisingly mild mid-November morning in Dublin welcomed the start of ESOMAR Fusion as delegates from over 40 countries descended upon the Irish capital to understand how, as an industry, we can marry up the learnings from qualitative research and big data to create a better tomorrow.
For someone who is new to the world of market research and the intricacies of big data and qualitative research, the event was perfect for really getting under the skin of the topics, networking and remedying my general ignorance.
Before I get started on the content, one of the best things about FUSION from my perspective was the organisation of the event. Many conferences I have been to provide a 10-minute coffee break and then head straight back in. That wasn’t the case at FUSION: throughout all four days, delegates had ample time to network, chat and discuss the content they had just seen and heard. Also, any conference that provides a hot lunch every day is winning in my eyes.
For many, Big Data is still a topic that can bring them out in a cold sweat; as an industry, it feels like we are still getting to grips with these big data sets and how to get the best out of them. So I was interested to discover what the future of big data fused with qualitative research looks like.
FUSION, however, set out its stall early. All of the speakers over the first two days provided real-world analogies that really brought the data to life. One of the best examples of using big data to solve a real-world problem (a first-world one, admittedly) was the Demystifying Machine Learning session with Sjoerd Koornstra of The House of Insight and Wim Hamaekers of haystack International, on how to use big data to pick a drink flavour in a specific market. This talk caused a lot of debate amongst the audience, with many asking why not just use qual research. The response was simple – cost and time.
There were plenty of other examples of how big data is transforming our world, from the “dreaded Blockchain” topic, which Clint Taylor of RDM explained in an extremely accessible way, to Clever Hans, the horse that appeared to solve complex mathematical problems by reading human body language. Another highlight of the entire week was Jonathan Mall of Neuroflash, whose presentation demonstrated how, by using big data, brands can understand the sentiment and resonance of each word on their website to drive greater customer interaction.
After two days of APIs, coding and heavy tech, it was time to hear about qual. It was very interesting to see the distinctive change in styles of presentations and the way in which the qual researchers presented their information and papers.
The two days of qual papers called for a lot more audience participation, with many of the sessions requiring the audience to split into small workshop groups to solve specific problems. One of these was how a US cinema chain can win the battle against streaming films at home, and what research would need to be done to solve this challenge.
My group, which luckily for me consisted of many qual researchers, decided to take a different approach from many of the other groups in the audience. Many felt that fusing passive data such as social listening with qualitative research was the way forward (embracing the theme of the conference). We, however, took a different path, identifying how the US cinema chain could collaborate with wider partners to use existing data to entice customers back, and marry this up with qual to see which collaborations and partnerships consumers really wanted at their local cinema. All the points made for a really strong discussion in the networking break that followed the session.
One of the main highlights for me personally was the range and breadth of approaches that researchers can use to get qualitative insights. Shell, for example, decided to pay homage to James Corden’s Carpool Karaoke as a way of getting to understand its customers better. Meanwhile, alcohol brand Suze proved that not all hipsters truly are unique, and that many hold the same opinions when it comes to individuality – a result that really shocked those on the panel.
So, after four days of presentations, debates and a couple too many Guinnesses with a few of the delegates, it was time to head back to London. The biggest thing that stuck in my mind while sitting on the flight was just how important qual is to big data, and big data is to qual. The two are sides of the same coin. Researchers are starting to get to grips with fusing these two data sources, but there is still a way to go before the potential is fully realised.
A look at how TV drama has become more socially conscious and what this holds for the future.
Over the years, ESOMAR has become a strong supporter of young professionals and their ideas. To encourage industry involvement and sharing, the Young ESOMAR Society (or YES!) was launched and with it, the YES Award. At this year’s Congress in Berlin we were thrilled to have five brilliant finalists from around the globe, all as first-time speakers and most as first-time delegates – thanks again to Research Now SSI for sponsoring their attendance! They pitched their best and brightest ideas. They did an amazing job. And one decided to write about her experience. As YES Coordinator and organiser of the pitch competition, I am happy to introduce Emily Ozer, from System1 Research, on her adventure at ESOMAR Congress…
This year, I was given the wonderful opportunity to attend my first industry conference – the ESOMAR Congress 2018. Now that those three jam-packed, inspiring days have come to an end, it’s time to reflect and share what I consider the best things about attending ESOMAR Congress.
A NEW POINT OF VIEW
One of the things I loved most about this conference was how it opened my eyes to so many different perspectives. ESOMAR attendees are not only from all different corners of the research world – quantitative, qualitative, vendor side, client side – but from all different corners of the globe. Every day, you’re being taught about new innovative methods of gaining insights from consumers, while also learning about the unique contexts and climates in which other researchers work, and the different challenges each market faces. You’re hearing directly from clients about how their companies search for insights and use them to improve their business. As someone who has only ever been on the vendor side, it was fascinating to hear the ways our work is woven into the decision making of companies that affect the lives of consumers every day.
Another thing to love about ESOMAR! There’s something so exhilarating about attending a talk that captures your attention and makes you think, and actually having the opportunity to seek that person out later in the day, tell them how much you enjoyed it, and engage them in discussion. It’s just not something you get from TED talks and webinars online. I admit that, as a first-time attendee and someone relatively new to the industry, I was too nervous to approach many of the speakers I wanted to chat with, but I certainly won’t be letting nerves get in the way next time! It’s too big an opportunity to pass up when you have so many brilliant people all in one room at your disposal.
No reflection on ESOMAR is complete without a mention of the party! Who would have thought market researchers could be such a wild bunch! 🙂 But in all seriousness, the party is a great place to let your guard down and get to know the people behind the research in a more casual setting. No sales pitches, no exchanging of business cards, just genuine connection over a beer.
I left ESOMAR Congress with an overwhelming feeling of togetherness and closeness with the insights community. This feeling that, even though so many of us in attendance were competitors, we’re all working together towards this larger goal of understanding humanity a little bit better. That may be the best thing about ESOMAR. It’s about learning from others’ brilliance and letting your eyes be opened to new ways of thinking. It’s not about one-upping each other, about competing for clients’ attention, or even about winning new business. It’s simply about education and knowledge sharing.
This list only begins to scratch the surface of all the ways attending a conference like ESOMAR Congress 2018 can bring value and meaning to your career. I’m thankful for having had this opportunity to attend, and can only hope I’ll be back this time next year sharing my experiences from Congress 2019! See you in Edinburgh!
Thanks to the ESOMAR team for putting together a wonderful program and pulling off a spectacular Congress! Special thanks to Danika and the YES team for believing in my pitch and giving me a platform to share, and to System1 Research for all the support.
Sound like fun? Feel like being a part of the community? Or know someone who would? Be sure to check out the YES website and get yourself and/or colleagues 35 and under involved! Speaking, competing, programme committees, becoming published…it’s all possible with YES!