Simon Chadwick

Simon Chadwick talks to Mario Callegaro, a survey research scientist at Google

Simon: Mario, tell me a little bit about your background and what brought you to Google and what your role is at Google.
Mario: I started my career, in my undergraduate years, in sociology at the University of Trento, Italy, and I was always interested in the survey area. I did my Master’s at the University of Nebraska-Lincoln, followed by a PhD in survey research there.

Once I finished, Knowledge Networks, which is now GfK, was looking for a survey methodologist, and I guess it was the right time for me and for them. After almost two years, I went to work for Google. I was already living in Mountain View, where Google is headquartered. I knew the company, but not the survey work they were doing.

Simon: Going through Knowledge Networks – what a great place to work. They had a great reputation for excellent research.
Mario: Absolutely, I learned so much from Knowledge Networks! It was a great place to work. Going to Google, what really surprised me is the scale and the volume of the surveys that are done, and the international scope. It’s typical to do English and possibly Spanish for surveys in the U.S. But when you work for an international company, some surveys are translated into 40 languages. That’s really challenging, and the numbers – the volume – are really high.

Simon: So it sounds as if market research and survey research actually play a fairly central role at Google.
Mario: They do. My role was to help establish a single global design for customer satisfaction at Google and to work with different teams to accomplish that. Goals around customer satisfaction, for example, are established at an executive level, and tracking studies provide constant monitoring of those goals.

Like many companies, we conduct market research to learn more about how consumers use and feel about our products. For example, we might conduct a survey to find out which product features users like better, or which features they are aware of.

In my specific case, my team is called Quantitative Marketing. Many of my colleagues have PhDs in statistics. Another colleague and I are survey scientists. We work with our statisticians to provide them with the best data, which can later be used for advanced analytics, for example, and for developing weights and bias-removal strategies.

Many other teams conduct surveys as well, including the usability team and HR. It is nice to work with teams that have different goals and to bring evidence-based survey knowledge to them. We also do market research surveys where you don’t survey Google users but instead look to understand potential users or clients for new products – more traditional market research, where we contract external vendors.

Simon: Those external vendors – do they tend to be just data-collection vendors, or do you use full-service research companies as well?
Mario: It depends. On my team, we tend to use only the data-collection side of a market research company – for example, their call center or online panel. We do everything from designing the questionnaire to the data analysis and reporting. We can bring information that we already have on respondents into the mix, so we don’t need to re-ask all the same questions – which is very powerful. It also shortens the questionnaire. We typically cannot share this type of data with vendors. Of course, the respondent has consented to share this information with us. We ask for explicit consent, and it allows us to link this information to understand how they use the product and to connect their behavior with their attitudes. But other teams at times use full-service companies, where the vendor also provides insights, analytics and reports.

Simon: Right. So a lot of your role, therefore, is synthesising all those different information points about the customers.
Mario: Yeah, that’s a good way to put it.

Simon: What is the definition of market research at Google? We’re hearing a lot, right across the markets and across the world, about a wider definition – one that might include things like social media, web analytics, big data and so on. Is that more how research is viewed at Google?
Mario: There are so many teams at Google who use research in different ways, so it’s different per team. In my team’s case, we can be both innovative and traditional. I spent my first two years at Google doing online surveys of our small advertisers and my last two years doing telephone surveys of our top advertisers. This is something that might seem surprising. For that specific target of respondents it was a great way to get a really high response rate, especially for C-level respondents, who are few by definition. They are difficult to catch in a web survey with an e-mail invitation. This is one of the problems we are facing in the industry – e-mail invitations seem to be less and less effective in eliciting response. The volume of e-mail received, especially in a business environment, is so high that your invitation might get lost during the day. For example, we know that, if people are going to respond, they answer within the first 12 hours; after that there is a sharp drop in response. We use best practices for telephone surveys, where we send a traditional envelope with a letter signed by our country lead – which actually surprises some of our clients.

Simon: Right. I think a lot of people would be surprised that Google does a lot on the phone and, indeed, uses snail mail to legitimise and invite its respondents. A lot of people would be actually rather pleased to hear that.
Mario: I think so. I mean, the first time I talked to somebody – they were having some issues with response rates – and I said, “Have you tried switching mode and doing telephone surveys?”, they looked at me like I was crazy. But two years later, we are doing it – and very successfully for a specific segment of clients. We need to remember that every methodology has strengths and limitations. We need to understand which is the best method for the target population. Let me give you an example: many respondents don’t answer open-ended questions online. That’s not the case in telephone surveys, where you get in-depth answers to open-ended questions.

We also use mixed-mode research, where we combine telephone and web for some segments of respondents.

Telephone surveys are a lot of work. You need to have a good vendor who is able to scale a survey to many countries at the same time. They should be able to manage the mailing, which is easier said than done. The mail piece determines the entire call pattern, so you need to make sure that you send the letter in advance, before you start calling, but you cannot wait too long or people forget about the letter. We also ask a sub-sample of respondents if they remember receiving the letter, which is quality control for both the vendor and us.

Simon: Listening to you talk about quality control and survey methods is very encouraging, because clearly there’s a great deal of thought and effort that goes into this. From your experience on the supplier’s side of the business, would you say that this is unusual, or do you feel that this is relatively normal?
Mario: All told, some vendors just compete on, say, cost, and in our case, luckily, we focus more on quality than cost. When you have large projects, cutting corners to reduce costs just diminishes the value of the survey, so I prefer to do one fewer survey and have overall higher quality.

That’s actually a big debate I have with many teams I consult with at Google – they are considering collecting data on a quarterly, or even monthly, basis, and my first question is, “Are the key metrics you are measuring going to change in a month or a quarter?” If not, maybe you don’t need to run a survey with that frequency. I also ask, “Are you going to do something to change this metric?” If you don’t do anything, if you don’t change anything in your process, you’re just measuring variance due to sampling and potentially non-response. In other words, we push teams to make the results actionable so we can measure some change in the next iteration.

Coming from a survey research background, I am always very sensitive to respondent burden. We need to respect the fact that respondents are investing time to answer our surveys, and we really need to use the data as best we can. Actually, sometimes it’s much better to re-analyze existing data than to re-do a survey. But that’s a bigger discussion, maybe outside the scope of this conversation.

Simon: But it’s a very relevant one. Many people in the industry have been battling with the idea for a long time. So you talked about surveys and you talked about synthesis of data from different sources. Do you use or make use of some of the newer modalities of research, for example online communities or ethnography or digital qualitative?
Mario: Personally, not much, just because of time. I wish I had a second life just for doing qualitative. We generally start our survey projects with qualitative and in-depth interviews, and that’s what we suggest to everybody. As you can imagine, at Google everything is very new and keeps changing, so you cannot just sit down and write a questionnaire. That’s not going to work. Consequently, we talk to a small group of people from the target sample to see whether they understand our language. Are we writing questions that make sense, or that are just too technical? We need to make really sure that our respondents understand the questions.

Then the pretest phase is really crucial. Given the nature of our products, pretesting and soft-launching a survey is key. I invest a lot of time talking with vendors and negotiating the pretest phase and the metrics to track in order to spot potential problems: to start, median interview length, then issues with particular questions – which in web surveys can be detected using paradata, and in a telephone survey by reading the interviewers’ reports and conducting a debriefing after the first day of calls. For example, when we launched a wide multilingual survey, I spent two days with my colleagues in the call center listening to interviews.

You can be smart and do quality control along with data collection without slowing down everything – and still get high-quality data.

See part 2 of this interview in the October 2013 issue of Research World

Simon Chadwick is Managing Partner at Cambiar and Editor-in-Chief for Research World and Mario Callegaro is Survey Research Scientist at Google in the UK