By Kyle Findlay
ESOMAR Big Data World took place in Brooklyn, New York, at the end of November 2017. It was a relatively intimate affair as far as ESOMAR events go, but it was filled with the right people.
The two days were grouped into sensible sessions, including talks on thinking about big data (including my own), innovative applications (generally tied to familiar insights questions, just answered with new data sources), large-dataset applications (lots of database merging, tagging and imputation going on here), mobile applications, programmatic and advertising applications, and more. It really did cover a large variety of “big data” areas, but things were always brought back to insights and marketing in a meaningful way. This wasn’t just geeking out on cool technology for technology’s sake (although there was some of that too).
Of particular relevance was the decidedly unsexy, but incredibly important, session on data privacy. The area is of vital importance given the imminent implementation of the EU’s General Data Protection Regulation (GDPR), which will affect companies around the world. Seldom have data privacy issues been so front and centre, and attendees really got to grapple with the implications thanks to some particularly knowledgeable guests in this area.
The biggest ‘theme’ that stood out for me was the fusion – or rather, natural convergence – of market research and big data. Rather than treating the two as replacements standing in stark opposition to each other, the conference ended up focusing on the ways in which traditional insight thinking is augmented and enhanced, rather than replaced, by more data and new data science techniques. In hindsight, one shouldn’t be surprised by this convergence of fields. “Big data” has always been a buzzword used to describe the gap between the kinds of data that our industry has traditionally leveraged and the new kinds that have become available in recent years. As market researchers, we stand uniquely poised to take advantage of new data sources. After all, data is our industry’s lifeblood. It’s just taken us a while to play catch-up, given our legacy approaches to, and reliance on, traditional data sources. If nothing else, this conference proved that we’ve clearly moved past this temporary roadblock and into the new data dispensation, even if the future is not evenly distributed across our industry just yet (to butcher sci-fi author William Gibson’s all-too-often-quoted aphorism).
Coming from the data science world, it was great and humbling to see presenters leveraging advanced techniques, but only in aid of the insights that have always been the bread and butter of our industry. It’s easy to be intimidated by startups and technology players bandying about the latest and greatest breakthroughs in artificial intelligence and machine learning, but it was much more relevant – and a good reminder of what’s important in our industry – to see presenters casually reference word embeddings and LSTM neural networks in subordination to the client questions they were used to answer. It reminds us that our industry always has been (and always will be) data- and methodology-driven, but only to the extent that data and methodology help us surface actionable and impactful insights for our clients. The days of being intimidated by new data sources, technologies and methodologies are over. They are being democratised to the point where they simply become more tools in a researcher’s toolbox. It behoves researchers to familiarise themselves with (and get excited about) the new tools at our disposal.
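For researchers who haven’t yet played with these tools, here is a minimal, purely illustrative sketch of the word-embedding idea: training a tiny Word2Vec model on invented survey-style verbatims with the gensim library and asking which words the model places close together. The corpus and parameters are made up for illustration; a real application would need far more text.

```python
# Minimal word-embedding sketch using gensim's Word2Vec.
# The tiny "verbatim" corpus below is invented purely for illustration.
from gensim.models import Word2Vec

verbatims = [
    ["love", "the", "new", "packaging"],
    ["hate", "the", "old", "packaging"],
    ["love", "the", "new", "flavour"],
    ["the", "new", "flavour", "is", "great"],
]

model = Word2Vec(verbatims, vector_size=16, window=2, min_count=1, epochs=200, seed=1)

# Words used in similar contexts end up with similar vectors.
print(model.wv.most_similar("packaging", topn=3))
```

The point is not the toy output but the mechanism: words acquire numeric representations from the company they keep, which is what lets downstream models reason about open-ended text at scale.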
The conference left me feeling surprisingly optimistic and energised about our industry. In recent years, market research conferences have been relatively dour affairs, with researchers worried about the machines and tech companies coming to take our jobs. What this conference demonstrated was that the uncertainty over the place of surveys (or what I prefer to call “intentional questioning”) has dissipated to some degree, as we now better understand the role they play in the new data dispensation. With this cognitive dissonance mitigated, researchers were able to get on with geeking out about the things we loved in the first place: data and insights. With big data, data science, artificial intelligence and similar topics covered during the conference, we spent much more time revelling in the possibilities (and the current reality, as demonstrated by many great presenters), which made for an energising event.
ESOMAR is our industry’s fairy godmother, and this shows in the care and attention to detail that goes into organising each event. By sifting through paper submissions, they always ensure that quality is high. Given their interest in protecting and empowering our industry, their conferences have sometimes been criticised for celebrating the status quo over the new and disruptive; however, no such criticism could be levelled at the organisation in recent times. They do a great job of pulling in new innovations while still keeping content firmly rooted in the insights paradigm. Thus, they ensure that our industry and its professionals continue to evolve within the market research tradition rather than in parallel to it.
Kyle Findlay is Director: Global Data Science Team, Kantar Innovation
By Preriit Souda
ESOMAR hosted its big data conference for the second time, this year in NYC between 27 and 29 November, attended by more than 100 delegates from different parts of the world. The conference showcased innovative uses of bigger and newer datasets, while some speakers showed innovative ways in which they were transforming the traditional research world. While I cannot talk about every presentation, I have picked out a few highlights; if this doesn’t do justice to everyone, the blame should lie entirely with the editor who restricted my word count for this blog. In short, don’t get angry at me if I missed something you liked!
The event started with an inspiring talk by Niels Schillewaert, who stressed the need for market researchers to believe in themselves and talked about the need to increase the use of new data sources for a better understanding of consumers. Kyle Findlay from Kantar followed Niels’ talk by stressing that it is no longer feasible for organizations to dance around the issue of technology adoption, as it is core to the facilitation of all modern data generation.
Mauricio Moura followed the initial talks with a very impactful presentation showing how a big-data-enabled surveillance system called PEZ was used to monitor, in near real time, local environmental factors favourable to the breeding of Aedes mosquitoes (the primary transmission vector of the Zika virus). I have often found that social-cause projects challenge us in extraordinary ways, but the joy of accomplishing results for the greater good is far more satisfying than the pain, and Mauricio’s presentation was a perfect example. His presentation also had an important takeaway for practitioners: big data often begins with your own data. Furthermore, Carlos Bort and Carlos Ochoa Gomez from Netquest presented a very nice paper on their approach to overcoming user-identification problems in passive online data collection; a must-read paper. Alex Ruiz from Viacom gave a 101 lecture on his experience of using LSTMs (long short-term memory networks, a type of recurrent neural net) to predict TV viewership. Jonatan Hedin and Stephen Kirk from Universal Avenue (a B2B commerce platform) talked about their use of machine learning to identify relevant leads in the B2B space; another insightful paper.
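For curious readers, here is a minimal, purely illustrative sketch of what next-step viewership prediction with an LSTM can look like. This is a generic toy on synthetic data using the Keras API; it is emphatically not Alex’s actual Viacom model, and every number in it is invented.

```python
# Minimal, illustrative LSTM for next-step prediction on a viewership-like
# series. A generic sketch, not the Viacom model described in the talk.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Synthetic "nightly ratings": weekly seasonality plus noise.
rng = np.random.default_rng(0)
t = np.arange(730)
series = 5 + 2 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 0.3, t.size)

# Turn the series into (window -> next value) training pairs.
window = 28
X = np.stack([series[i : i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., np.newaxis]  # LSTM expects (samples, timesteps, features)

model = Sequential([
    LSTM(32, input_shape=(window, 1)),
    Dense(1),  # predicted viewership for the next night
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=32, verbose=0)

# Forecast the night after the last observed window.
next_night = model.predict(series[-window:].reshape(1, window, 1), verbose=0)
print(next_night[0, 0])
```

The appeal of recurrent nets for this kind of problem is that the model learns the weekly rhythm from the data itself, rather than requiring the analyst to hand-craft seasonality features.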
With the rollout of GDPR (General Data Protection Regulation) lined up for the summer of 2018, privacy was another topic of concern. Finn Raben from ESOMAR briefed the delegates on how ESOMAR can help in the process of GDPR implementation, while David Almy from the Insights Association and guest speaker Jude Olinger talked about their experiences and perspectives on data privacy. In addition, a team from Bain presented a paper on linking surveys with internal data while maintaining privacy. While the paper offered a good workaround for comparatively simple, well-curated databases, I hope that in the coming months several organizations start developing data fusion techniques for bigger, more complex data sources. I feel that all firms based in the EU will have to rethink a lot of their data fusion techniques in the privacy-restrictive world that awaits in 2018/19, and ESOMAR’s effort to bring this conversation to the forefront was a welcome move.
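For the technically minded, one common pattern for this kind of privacy-preserving linkage (a generic sketch under my own assumptions, not the approach the Bain team presented) is for both parties to replace raw identifiers with keyed hashes under a shared secret and then join on the resulting pseudonyms, so that e-mail addresses never leave either system.

```python
# Illustrative sketch of linking survey records to an internal database
# without exchanging raw identifiers: both sides replace e-mail addresses
# with keyed hashes (HMAC-SHA256 under a shared secret) and join on the
# pseudonyms. A generic example, not the Bain team's method.
import hmac
import hashlib

SHARED_KEY = b"rotate-me-regularly"  # hypothetical secret held by both parties

def pseudonymise(email: str) -> str:
    """Deterministic keyed hash so equal identifiers match across datasets."""
    normalised = email.strip().lower().encode("utf-8")
    return hmac.new(SHARED_KEY, normalised, hashlib.sha256).hexdigest()

survey = {pseudonymise("ann@example.com"): {"nps": 9}}
crm = {pseudonymise("Ann@Example.com "): {"tenure_years": 4}}

# Join on the pseudonym; raw e-mail addresses stay inside each system.
linked = {k: {**survey[k], **crm[k]} for k in survey.keys() & crm.keys()}
print(linked)
```

Normalising before hashing matters: without it, trivially different spellings of the same address would fail to match, which is exactly the curation problem that makes messier databases so much harder to fuse.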
The morning of the 29th started with Factworks and Facebook showcasing how local businesses and consumers interact with each other and how local shopping has evolved. Being the blogger, I can shamelessly note that I was also a speaker: I spoke about some of the problems faced in the analysis of large, complex and varied social and digital data sources, and showed some of the solutions and tactics that helped us solve such problems more quickly. Dmitry Gaiduk from CoolTool stressed the need to automate and pushed for acceptance of new techniques. Just as every Thanksgiving lunch has a dessert (or at least I think so), our dessert came in the form of a presentation from Edward Malthouse and Judy Franks, in which they talked about understanding psychographics from TV viewing and using them to analyze political voting behaviors; another interesting paper to read before you tread back to the office in 2018.
While there is a lot that the insights industry needs to do to catch up in the areas of big data and AI (artificial intelligence), ESOMAR’s commitment to driving this change is very welcome. I also hope that at future events the organizers will try to include more startups, because a lot of interesting, mind-boggling innovation is taking place in that space. I also felt that, just as at the ESOMAR Congress in September, it would be a nice idea to get young professionals to showcase their ideas and experiments in new data usage. Lastly, based on my experience, I feel that technical organizations like the IEEE and ACM could also be invited to such events to present their views on data generation, new technologies and more, which could inspire delegates and open new frontiers for both sides.
And that’s it. I think I have used up a lot more ink than I intended to, so I will stop now. Hasta la vista.
Preriit Souda, Data Science Director, KANTAR Analytics, UK
By John Kearon
Can research go further, and help us to better people’s lives? We look at an advertising campaign run by the ESOMAR Foundation that used research insight to improve people’s lives. Armed with a generous gift of online banner advertising space from AOL’s own foundation, Oath for Good, here is what happened.
By John Pedersen
Norway is one of the smaller research markets in Europe, obviously connected to the fact that there are only 5 million Norwegians to survey. Despite this, Norway has a very active and large research community, organized in the Norwegian Market Research Association (NMF), dating back to 1970. Currently, NMF has 750 active members, with an even spread of agency staff and research buyers.
By Christoph Welter
This year at the Best of ESOMAR-Germany meetup, we measured the pulse of the industry on what seems to be one of the most hotly contested metrics these days: trust! In four talks that scrutinized the topic from different perspectives, we talked about surveys and all manner of interview techniques, about participants, about clients and about the wider public. So how is research doing, and how can we regain some of the trust that we seem to be losing?