Pete Cape and Efrain Ribeiro 

The use of routers is widespread. Most online access panel companies utilise them, as do many research buyers of online samples. But what is a router?

It is useful to take a step back and understand what problem routers are designed to solve. We may fondly imagine that most of the time a respondent spends with us is spent answering surveys, but sadly most of it is spent either failing to qualify or being screened out as out-of-quota. This is not a great experience for someone who, right then, was ready, willing and able to take a survey. Nor is it a very efficient use of the relatively short time a panellist spends with a panel company. It is understandable, then, that one of the most often-stated reasons for leaving the respondent pool is the absence of surveys to do.

And yet panel companies, just like research companies, are running many surveys concurrently.

A solution, then, to the efficient allocation of respondents to surveys is first to find out which of the many available surveys the respondent is most likely to qualify for, and to direct them to it. This, in essence, is what all routers do: they take a willing respondent and allocate them to a survey they are likely to be able to complete.

From an efficiency and respondent-engagement perspective, the router makes an enormous amount of sense. Viewed from a sampling-rigour or methodological standpoint, the outlook is less rosy. If you consider all the people currently “in” the router at any given time as the sample frame, then the way in which they are being allocated to surveys is biased. If a person is doing survey X based on some behaviour or attitude, then they can’t also be doing surveys A, B or C (surveys for which they perhaps also qualified). These other surveys are biased by the absence of people who have the behaviour or attitude required by survey X.

Thus we have a trade-off between efficiency/engagement and methodology, and how it plays out depends on how the router is set up and operated. At one extreme, a router might randomly allocate people to surveys. This would bring no efficiency/engagement benefits. At the other extreme, the router might run in a fixed priority order. The further down the priority list a survey sits, the more biased its sample becomes, but this is the most efficient option. Other tweaks to the router design – how the screening questions are asked, how many surveys can be completed per sitting, etc. – all make a difference to the extent of the bias.
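The two extremes can be sketched in a few lines of code. This is an illustrative sketch only, not any vendor’s actual router; the survey records, the `qualifies` predicates and the `priority` field are all hypothetical:

```python
import random

def route_random(respondent, surveys):
    """Random allocation: pick any open survey the respondent qualifies for.
    No priority-order bias, but also no efficiency gain beyond screening."""
    eligible = [s for s in surveys if s["open"] and s["qualifies"](respondent)]
    return random.choice(eligible) if eligible else None

def route_by_priority(respondent, surveys):
    """Fixed priority order: always fill the highest-priority open survey first.
    Most efficient, but surveys lower on the list never see respondents whose
    traits were 'used up' by higher-priority surveys -- the bias discussed above."""
    for s in sorted(surveys, key=lambda s: s["priority"]):
        if s["open"] and s["qualifies"](respondent):
            return s
    return None

# Hypothetical example: survey X recruits frequent flyers; survey A takes anyone.
surveys = [
    {"name": "X", "priority": 1, "open": True,
     "qualifies": lambda r: r.get("frequent_flyer", False)},
    {"name": "A", "priority": 2, "open": True,
     "qualifies": lambda r: True},
]

flyer = {"frequent_flyer": True}
# Under fixed priority, every frequent flyer is routed to X,
# so survey A's sample ends up containing no frequent flyers.
print(route_by_priority(flyer, surveys)["name"])  # -> X
```

The point of the sketch is the last comment: under the priority router, survey A’s sample frame is systematically stripped of frequent flyers, while under random routing they merely become scarcer.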

As with all biases, we need to consider not only their size but also their effect: does the bias change the survey outcome, and how can it be mitigated? This vital question is the subject of much current industry research. Until we get those answers, the only thing we can do is ask our sample and fieldwork suppliers how their routers are set up and how they deal with, or measure, the bias they bring.

 

What to tell the client

Pete: When using routers what should you tell your client?

Efrain: As a supplier of online sample and one of the largest buyers of online interviews in the industry, I believe we should have greater transparency on the sample selection process, including whether a router is used and how that router is managed, both for our specific work and on an ongoing basis. We at LSR have made this type of information mandatory in agreements with our suppliers. This makes it easier to troubleshoot any issues with the data that may arise. We have learned from the past that it is important to have consistent online controls on our samples, and this requires us to know how our suppliers are selecting that sample.

P: Transparency is important, but then you might be giving information to people who have no means of assessing what it means for their project in terms of potential bias. I’m not saying it’s precisely like the list of possible side-effects you see on headache remedies – scary but unlikely – but it’s close. Until the industry comes to some agreement on whether or not the theoretical bias you get from routers is a reality – and under what circumstances – then we are all a bit in the dark.

E: Additional information and education is required, but at a base level a client ought to know if a router is being used.

P: I agree. As you say, it gives you a starting point for asking questions about data issues, should you get them. It’s not necessarily where I would start looking for answers, but that’s a different conversation. You know, so much of what we do in research involves a trade-off. With routers you’re trading off biases: you take on this potential coverage error, and you get rid of self-selection bias, which is a real problem. The potential positives are enormous.

E: Yes, we often think that automated processes are best, and this is probably a case where a well-trained, independent human hand on the wheel is almost certainly a good idea. There are two things at play: one is the trade-off between efficiency and methodology, and the other is individual project managers trying to push their projects forward, naturally at the expense of others.

P: Sounds like some basic router education is required, and lots of questions need to be asked, then.

Pete Cape is global knowledge director at SSI and Efrain Ribeiro is chief research officer at Lightspeed Research.

Download ESOMAR’s 28 Questions to Help Buyers of Online Samples, updated to include routers and other topics, and search online to find the 12 companies that have posted their answers to the questions on their sites.
