By Jan Willem Knibbe
There is no denying that data, and personal data even more, holds huge value. It is the power that drives the internet, and there is a whole ecosystem of companies trading in personal data. This has inevitably led to the question: who owns the data?
By Jan Willem Knibbe
Facial recognition is ubiquitous and new applications are emerging almost weekly. Just think about Apple’s Animojis, tagging people on Facebook, or Windows Hello in Windows 10 to log in to your computer without a password, not to mention applications used by security professionals to identify people in a crowd.
As we wind down for Christmas, we look back at our most read articles of 2018. With a wide variety of articles from NPS, to protective techniques, to culture, to agile methods, 2018 has been a year of variety on Research World Connect!
By Kathy Frankovic
It’s fair to say that the American election polls in 2018 were quite successful. Newspaper headlines and stories after the midterm elections for the most part praised them. However, their tone was sometimes one of surprise and shock, reflecting the long-term impact of the criticism that followed the 2016 presidential election.
That year, the U.S. pre-election polls averaged a three-point lead for Hillary Clinton in the national popular vote, and she did win the popular vote by just over two percentage points. While that would elsewhere have been seen as a success, the American electoral system selects a winner through the Electoral College, where votes are allocated based on the number of Senators and Representatives each state has in Congress. In most states, the popular vote winner takes all of that state’s electoral votes. By winning Michigan, Wisconsin and Pennsylvania by a combined total of 79,000 votes, Donald Trump won a majority in the Electoral College. This state-by-state counting received less attention in 2016 than it should have (after all, as recently as 2000 the candidate who won the national popular vote lost in the Electoral College).
The national popular vote for the House of Representatives means little, as seats are allocated district by district. But “generic ballot” national polls are common, and this year indicated that Democrats would have a clear lead in votes cast nationally. They did, winning the national House vote count by eight percentage points. But as in Presidential elections, capturing a majority of the vote overall doesn’t mean that you will win enough seats (just as it may or may not give you a victory in the Electoral College). In 2012, Democrats won 1.4 million more House votes nationally than Republicans did, but still wound up with 33 fewer seats.
The media reporting on the accuracy of pre-election polls after the election came with caveats. With so many races to poll, there were bound to be errors. New York Magazine noted in its headline that “The Polls Were Fine, If Not Perfect.” The Washington Post asked if it was “Another Bad Night for Political Polls?” And then answered the question: “Not Really.” State polls still leaned towards Democrats, but only by 0.4 points, a much smaller error than in 2016.
Midterm elections must be seen as a collection of many races – the 35 Senate elections and the 435 House races. So national polls are not enough. This year, however, there was an even greater focus on individual House races, particularly those that were likely to be close, or were viewed as having the potential to change sides. There were several creative attempts to deal with the large number of such races, using a combination of new and older methodologies. The old ways of doing things are definitely under challenge.
The New York Times’ Upshot paired with Siena College to conduct polls in dozens of competitive House districts. Instead of making telephone calls using random digit dialing, it sampled from voter lists provided by a vendor. It sampled within Congressional districts, making adjustments based on the availability of telephone numbers for subgroups and relying on outside information for data not available on voter lists, such as education. It then created turnout estimates. This is difficult in the U.S., as voting is not compulsory and turnout can be very low in non-presidential years. This year, the usual low turnout was expected to rise from the 36.7% of the voting-eligible population that voted in the 2014 midterm election. The turnout rate jumped twelve percentage points, as nearly half the eligible population turned out in 2018, the highest midterm rate in more than 50 years.
Since The Times and Siena College polled in what were expected to be competitive districts, the polls showed races that were very close – within the sampling error – in nearly all of them. The Times decided to show results in real time as polling proceeded: red and blue dots appeared at each interview’s location about an hour after the interview was completed. As is the case with most telephone polls, the vast majority of calls did not result in an interview, so watching the polling “live” could be a slow and lengthy process, not necessarily an exciting one. This was viewed as a way of making election polling more transparent to the public.
CBS News partnered with YouGov for its Battleground Tracker, using YouGov’s online panel, with oversamples in contested districts. By also using information about voters throughout the country to improve the estimates in the contested districts, CBS News and YouGov were able to make better estimates of the final House outcomes. The questionnaires were somewhat longer than those used by the Upshot, and the Battleground Tracker conducted its polling online, not through telephone calls. Both of these approaches required a very large number of interviews.
The final Battleground estimate was 225 seats for Democrats and 210 for Republicans. Given the large margin of error (plus or minus 13 seats on each number), that estimate encompassed the actual outcome of what appears to be 235 Democratic seats (some races are still not officially settled).
This year, even the traditional exit poll had a challenger. The exit poll, invented in the 1970s, has changed as Americans have changed how they cast ballots. With a growing share of the vote cast before election day (through absentee and early voting), Edison Media Research, which has conducted the media exit poll for more than a decade, now supplements traditional exit polling at precincts with pre-election telephone polls and polls at physical early voting locations. This year, the Associated Press partnered with the National Opinion Research Center (NORC) and Fox News to expand the reach of election day polling, creating AP VoteCast: 40,000 pre-election interviews using registration-based sampling (which The New York Times Upshot also used), 6,000 interviews with the NORC AmeriSpeak probability-based online panel, and more than 90,000 interviews with non-probability online panelists. With approximately 60 different questions, AP VoteCast not only told who had won, but also provided issue and demographic information.
Overall, the election polls of 2018 generally did well, but some pollsters appear to have decided to use the concerns of 2016 as a starting point to develop new methods of understanding election behavior.
A surprisingly mild mid-November morning in Dublin welcomed the start of ESOMAR Fusion as delegates from over 40 countries descended upon the Irish capital to understand how, as an industry, we can marry up the learnings from qualitative research and big data to create a better tomorrow.
For someone who is new to the world of market research and the intricacies of big data and qualitative research, the event was perfect for really getting under the skin of the topics, networking and remedying my general ignorance.
Before I get started on the content, one of the best things about FUSION from my perspective was the organisation of the event. Many conferences I have been to provide a 10-minute break for a coffee and then go straight back in. This wasn’t the case at FUSION: throughout all four days, delegates had ample time to network, chat and discuss the content they had just seen or heard. Also, any conference that provides a hot lunch every day is winning in my eyes.
For many, big data is still a topic that can bring them out in a cold sweat. As an industry it feels like we are still getting to grips with these big data sets, and how to get the best out of them. So, I was interested to discover what the future of big data fused with qualitative research looks like.
FUSION, however, set out its stall early. All of the speakers over the first two days provided real-world analogies that really brought the data to life. One of the best examples of using big data to solve a real-world problem (a first-world one, admittedly) that stands out from the first two days for me was the Demystifying Machine Learning session with Sjoerd Koornstra of The House of Insight and Wim Hamaekers of Haystack International, on how to use big data to pick a drink flavour in a specific market. This talk caused a lot of debate among the audience, with many asking why not just use qual research. The response was simple – cost and time.
There were plenty of other examples of how big data is transforming our world, from the “dreaded blockchain” topic, which Clint Taylor of RDM explained in an extremely accessible way, to Clever Hans, the horse who appeared to solve complex mathematical problems by reading human body language. Another highlight of the entire week was Jonathan Mall of Neuroflash, whose presentation demonstrated how, by using big data, brands can understand the sentiment and resonance of each word on their website to drive greater customer interaction.
After two days of APIs, coding and heavy tech, it was time to hear about qual. It was very interesting to see the distinct change in style: the way in which the qual researchers presented their information and papers.
The two days of qual papers called for a lot more audience participation, with many of the sessions requiring the audience to split into small workshop groups to solve specific problems. One of these was how a US cinema chain can win the battle against streaming films at home, and what research would need to be done to solve this challenge.
My group, which luckily for me consisted of many qual researchers, decided to take a different approach to many of the other groups in the audience. Many felt that fusing passive data such as social listening with qualitative research was the way forward (embracing the theme of the conference). We, however, took a different path, identifying how a US cinema chain could collaborate with wider partners to use existing data to entice customers back, and marry this up with qual to see which collaborations and partnerships consumers really wanted at their local cinema. All the points made for a really strong discussion in the networking break that followed the session.
One of my personal highlights was the range and breadth of approaches that researchers can use to get qualitative insights. Shell, for example, decided to pay homage to James Corden’s Carpool Karaoke as a way of getting to understand its customers better. Meanwhile, alcohol brand Suze proved that not all hipsters truly are unique, and that many have the same opinions when it comes to individuality – a result that really shocked those on the panel.
So, after four days of presentations, debates and a couple too many Guinnesses with a few of the delegates, it was time to head back to London. The biggest thing that struck me while sitting on the flight was how important qual is to big data, and big data is to qual. The two are sides of the same coin. Researchers are starting to get to grips with fusing these two data sources, but there is still a way to go before the potential is fully realised.