Imagine this: you are a quantitative researcher and have been writing questionnaires that include 5-point Likert scales for years now. One day, someone comes up to you and tells you that it has just been scientifically proven that participants don’t understand 5-point Likert scales. That the whole system behind it is flawed, and that you are no longer allowed to use it in your projects, because a study among thousands of people has indicated its ineffectiveness.
Your reaction will probably range between “Show me the so-called ‘scientific’ research, because I don’t buy it!”, “What kind of people were in the sample of that research?”, “But we have been doing this for years and it has always worked” and “It might not be flawless, but it’s the best we can do”. Not a single one of you would react with “Okay, let’s kill it and try something else”. And rightfully so, right? We can’t just throw it all overboard because some people in some study answered in some kind of way, right?
Wait. That sounds familiar. It reminds me of the reaction of some of our clients when we present them with research results… How often have you said the following phrases: “I’m sorry, dear brand manager, but all the concepts you have been working on for the past year have been rejected by consumers.” or “I’m sorry, dear advertising agency, but the creative work you came up with is just not understood by consumers.”? Chances are: pretty often. It’s research reality. Including the negative client reactions, and very often the frustration on our side that accompanies those reactions. (“Why don’t they understand? They shouldn’t even do research if they’re not going to follow our recommendations!”)
Last week, this exact scenario happened to me. For once, I was on the client side of things. The beauty of working in my current position as research innovator is that you can invent things, and apply ideas from elsewhere in research. And so, as I am working on bringing gamification into different research methods, including research communities (MROCs), one of the game mechanics I helped take shape is a leaderboard. Beautiful in its simplicity, the leaderboard tracks the number of good posts people make in an MROC and displays these in a ranking: participants with the most posts on top, participants with the fewest at the bottom. By simply showing people the ranking (without pushing them to be first), they will be incentivised to post more and better-quality content, thereby making our community more successful. Or at least, that was the initial reasoning.
Now some colleagues of mine started an MROC on the MROC method, asking participants for their feedback on everything we do with them in MROCs. Some of the outcomes: quite shocking. They love gamification, but they hate the leaderboard. It’s an unfair mechanism to some (“No matter how hard you try, there are always people better than you”). It’s not valuable to others (“I was on top of the board for 3 weeks, and I didn’t get an incentive?”). It goes against the community idea for still others (“It’s about doing things together here, not about being competitive!”).
My reaction when my colleagues debriefed me on the research results was not a pretty sight, I’m afraid. I wanted to know who was in the sample, to look through the data myself, and I knew for a fact that it had worked in some previous MROCs I ran myself!
Bottom line: last week, I realised how clients must feel when Mr. or Mrs. Market Research Consultant comes in and tells them what consumers want. It hurts to be wrong. It hurts to have to change your ideas. It hurts like hell to kill your darlings. So let’s have a little patience with our clients when we inflict that pain on them, shall we?