It’s a cold, wet morning here in Amsterdam as we kick off the ESOMAR 3D Digital Dimensions 2012 conference. ESOMAR’s annual 3D event takes on the ever-growing juggernaut that is online research, taking in online panels, social media and mobile research. So take your seats, get your Twitter out and ready yourself for a day of data collection, social media and online communities.

In what’s quickly becoming a regular feature of ESOMAR events, we begin with the boost session to clear those heads. This morning is kicked off by a percussionist. These sessions are usually as much of a surprise to me as they are to our delegates, so I have no idea what instrument is being played, but I’m certainly enjoying the melodic Final Fantasy vibes he’s kicking out as the delegates file in for the first session. After a quick introduction to the instrument (it turns out to be a Hang drum) and a quick boundary-breaking clapping exercise to get the delegates up and their brains working, we’re off.

We start with a short opening from ESOMAR Vice President Mike Cooke and an introduction by Committee Chair Martin Oxley, who reminds us, if we need reminding, of the speed of change that technology brings and how adaptability is key to the future of market research.

Our opening keynote on this first day is Laura Chaibi, Director of Research, EMEA at Yahoo! in the UK, who today is talking about Media Research in the Eye of the Storm. Laura delivers a fittingly mile-a-minute presentation looking at the use of big and real-time data at Yahoo! Laura started with some pretty staggering figures to illustrate the scale of data available: over 50,000 emails are sent every second (only 20% of those are actual human-to-human communication; the rest is made up of spam, subscriptions and Facebook notifications). But Laura was here to talk about the role of the researcher in a company where the amount of data being collected in real time is huge. Within Yahoo!, researchers are consultants to other teams rather than driving product development. But one huge shift that she has seen at Yahoo!, and one that should be driving the big data revolution, is automation. If you have colleagues who spend their days looking at data that could be automated, you are already behind. Real-time automation is the key to big data.

So tech companies have the opportunity to collect a huge amount of data, but where does this leave the researcher? Well, according to Laura, researchers are still needed for insight: to tell the stories of what the consumer is doing and what they are saying. The data can’t tell you why, can’t tell you how the consumer feels. That’s where researchers need to position themselves. Laura went on to talk a bit about humanising search data. By fusing other sources of data into the search data, Yahoo! was able to give colour to consumer data, allowing for more accurate online touch points. Multi-platform data is another necessity when dealing with big data.

It was at this point that Laura came back to what seemed to be two of the key points of the presentation: researchers understanding their limits, acting as consultants rather than product developers, and the need for automation. How can researchers who work with a sample of 160 people compete with programs that draw on a pool of 1.6 million people and can measure thousands of ad variations in real time? Where researchers need to position themselves is as long-form storytellers. With the rise in branded content, researchers are still needed to turn data into insights and provide companies with the how and why of consumers.

SETTING THE SCENE
Up next we had Alice Louw and Jan Hofmeyr from TNS, with Reality Check in the Digital Age. Jan asked the question: what if web surveys are doomed to suffer the same cycle as fax machines? Innovative for a short time and then displaced by better technology. Jan was here to talk about mobile surveys and to address the now ubiquitous questions about survey length and the adaptation of the survey to the emerging platform. He started by looking at the issues with current surveys:

  • Stop asking questions that are invalid
  • Measure what’s relevant to a respondent – surely it’s possible for us to gather the dominant driver of purchase intention in 140 characters?
  • Fix obvious and stupid modelling errors – human beings are comparative animals, but we ignore this in surveys in almost everything we do.

Alice then presented some of the results of a recent study TNS had carried out to try and combat the issues that have seemingly plagued the online survey forever. Alice wanted to talk about fundamentals. Technology has the ability to take research to another level, but bad research is still bad research; the fundamentals are important. So, with reference to Jan’s points, Alice talked about the importance of framing questions well and adding share measures, whilst making questions more relevant to the consumer.

For many of us we’ve heard this all before. I’m not sure there’s been a market research conference in the last two years that’s dealt with mobile research and not had a presentation on the problems with the current survey methods and the need for change on the new platform. I wonder at what point the industry won’t need someone to stand on stage trying to ring in those changes.

FIRST DIMENSION: ONLINE PANELS AND DATA COLLECTION
After a very short break we were back in the conference room with the second thread of the morning, First Dimension: Online Panels and Data Collection. Up first was Steve Gittelman of SampleSolutions in the US with his paper Rules of Engagement. Like mobile survey length, online panel respondent engagement has been a pretty standard subject at conferences for some time, and we’ve all heard the theory on scaling down survey length and other apparent fixes. But an unengaged respondent, who speeds through a survey without thinking, can lead to poor data and to the researcher providing poor insights to clients. Steve was here to talk about identifying those poorly engaged respondents and filtering them out to ensure better data. He went on to call it an age-old question: how do we decide who stays and who goes? We need a system of laws to decide it.

Steve went on to present a recent study SampleSolutions carried out to try and combat unengaged respondents. Across 40 studies they created a series of engagement metrics; the metrics were nothing new, and all variables used were standard in the research industry. These included a rare-items test, speeding, trap questions, inconsistencies and straight-lining. By using these metrics and filtering out respondents, Steve felt the data they were providing their clients was more accurate. Though he warned that it’s important not to take out too many respondents.
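For the technically curious, the kind of flag-and-filter rule Steve described could be sketched roughly as follows. The metric names, thresholds and flag limit here are my own illustrative assumptions, not SampleSolutions’ actual rules:

```python
# Illustrative sketch of metric-based respondent filtering.
# Each respondent is flagged for speeding, failed trap questions,
# inconsistent answers and straight-lining; respondents with too
# many flags are dropped. All thresholds are hypothetical.

def engagement_flags(resp, median_seconds):
    """Count engagement warning flags for one respondent."""
    flags = 0
    if resp["seconds"] < 0.5 * median_seconds:  # speeding: far faster than typical
        flags += 1
    if not resp["passed_trap"]:                 # failed an attention-check question
        flags += 1
    if resp["inconsistent"]:                    # contradictory answers elsewhere
        flags += 1
    if len(set(resp["grid_answers"])) == 1:     # straight-lining a whole grid
        flags += 1
    return flags

def filter_respondents(respondents, max_flags=1):
    """Keep respondents with at most max_flags engagement flags."""
    times = sorted(r["seconds"] for r in respondents)
    median_seconds = times[len(times) // 2]
    return [r for r in respondents
            if engagement_flags(r, median_seconds) <= max_flags]
```

Note the design echoes Steve’s warning: rather than dropping anyone with a single flag, a threshold (`max_flags`) keeps the filter from taking out too many respondents.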

Steve went on to assure us that we can change. We need to reinvent ourselves; we can no longer just pass data over to a client. We need to get into the business of correcting data we know is bad, and we can do it simply. We need to make sure we feel totally confident in our data, and taking those first steps does not mean complicated modelling. We can all do it.

In the second presentation of the thread we had Piet Hein van Dam of local firm Wakoopa, who was presenting his methodological paper on online behaviour measurement, Better Answers to Basic Questions. Piet was here to demonstrate a new way to measure online behaviour. He started by looking at some of the existing methods and their issues. For example, Google Analytics’ unique visitor figures count unique combinations of browsers and devices, so an individual can be counted several times depending on how they access a site. Meanwhile, a lot of web behaviour data measured server-side is cookie based, counting clicks and reconstructing data. But cookies are not people; how do you measure the individual? Wakoopa has moved to user-side measurement, using apps downloaded directly to the user’s device. This provides the opportunity to filter out over-engaged respondents and household PCs (where single users are needed).

But how accurate is Wakoopa’s behavioural data? Well, not as accurate as they had hoped compared to industry-standard usage reports. Piet went on to say the raw data may be more accurate than the corrected, filtered data; even with user-side measurement there is still a large possible margin of error. What if we don’t filter out people but only the odd parts of behavioural data, such as sites that pay people to click on online ads (Wakoopa found that 50–75% of panelists visit at least one affiliate site)? The data that Wakoopa is collecting is imperfect, but it is a work in progress. Piet stressed the need for tech companies to forge deeper relationships with researchers to try and best measure online behaviour.

In the last session before lunch, Jorn Schulz of Telekom Innovation Laboratories talked to us about some ethnographic work carried out for Wikipedia in his paper Writing Wikipedia, in which he studied the community of Wikipedia writers and editors, or, as they’re known, Wikipedians. Referencing the previous presentations on big data, Jorn talked about the big data available from Wikipedia, but this data was not able to answer the questions Jorn needed to ask; he needed to take a qualitative view of these Wikipedians. He went on to say ethnography was a far more useful technique for finding out what motivates Wikipedians. In a previous study using standard psychological terms, fun was cited as the biggest reason for writing Wikipedia posts. But what does that mean? And, going deeper, how do you dive into a culture that is based primarily online?

Ethnography was the answer for Jorn, but could and should it be done online only? For a proper ethnographic study it’s not enough to just keep it online; excluding the real world means you will lose some context and relationship data. While he contacted people online, via email and Skype, Jorn went further and used traditional offline face-to-face visits, which gave him the chance to collect holistic data that expands the results and provides richer insight. Jorn was able to develop deeper profiles of Wikipedians, including those with altruistic views, those who used it commercially and those who saw it more as a social network, a way of meeting people.

Jorn concluded that exclusively online research gives only half the picture: if you want to move really close to your customers, you need both offline and online to get the full picture. It reveals inner connections and relationships, and it connects the real and virtual worlds, which are not mutually exclusive.
