About Our Company
We are a successful player in the market research industry and pride ourselves on providing the best research data quality in the business. We are committed to discovering important and valuable insights that will improve existing products and create new products that consumers will love. Although we offer no training and almost no support or benefits, your skills and experience help to create the one-of-a-kind products and services that consumers love.
This is an exciting role that provides support to the marketing and marketing research teams. Self-motivated people with above-average reading comprehension and written skills will enjoy contributing to this team.
The key responsibilities of this position are many. You will:
- Read and respond to surveys on a wide variety of exciting topics such as insurance, car tires, carpeting, paper products, and socks
- Have fully formed and logical opinions on every product, service, issue, or person that you have, and have never, considered before
- Expertly understand marketing speak but not have any education, training, or experience in marketing, sales, business, advertising, or media
- Have an extraordinary ability to remain focused even when you don’t understand the question, answers are missing, and you’re on minute 49 of a short survey
- Skillfully interpret and respond to the intent of a question as opposed to what is actually written
- Choose the answers that best reflect your true feelings, even when none of the answers come close to reflecting your reality
- Avoid giving the same answer to every question in a grid even though the best answer is legitimately the same answer
- Recognize all personal, cultural, emotional, and response biases and ensure they never affect your opinions
- Pour your heart and soul into sharing your opinions without wanting to see the results of your efforts
- Appreciate the gift of time, often amounting to several hours, when you are screened out of 7 surveys in a row
To be successful in this role, you will have the following skills and attributes:
- No experience answering surveys such as restaurant or hotel review cards, bi-annual employee satisfaction surveys, and online reviews like Amazon or TripAdvisor
- Extensive experience using internet browsers such as Chrome, Firefox, Internet Explorer, Safari
- Must supply own computer with mouse and keyboard, tablet, or smartphone, as well as access to high-speed internet, electricity, and office space (at all times, at least one device must be less than two years old)
- Must be able to self-diagnose and solve all types of device and software issues such as frozen screens, screens that don’t scroll, buttons that won’t click, pages that won’t turn
- Superior typing skills using physical and virtual keyboards ranging in size from 2 inches to 15 inches
- Superior written communication skills to precisely describe feelings and opinions in words that can be reliably assessed by text analytics software
- Prior experience answering surveys would be advantageous
- Be a 27-year-old Black, Hispanic man with a graduate degree, 2 or more children, and an annual income of $200,000 or more
- Be able to ignore all family responsibilities such as making sure dinner isn’t burning, folding laundry before it becomes permanently wrinkled, taking Lego out of the baby’s mouth, or driving kids to the calculus tutor during the survey taking process
- Be able to perform at 100% mental capacity for at least 60 consecutive minutes
- Have no hobbies or interests that could be misconstrued as more important or interesting than this position
- Have hours of free time every day during which you couldn’t be volunteering at a hospital, working at a minimum wage job, teaching your grandchild to read, or playing Minecraft
About the Benefits
- Opportunity to be associated with a successful company
- No salary
- Token non-monetary gifts
- Fully flexible hours
- Work from home
- Training not provided
- Support provided as time and priorities permit
If this sounds like you, we’d love to hear from you!
By Annie Pettit, Chief Research Officer, Peanut Labs and Vice President of Research Standards, Research Now
By Jeremy Caplin
The relationship between advertisers and their marcomms suppliers (Creative, Media, PR) has traditionally been viewed as one of master and servant. The asymmetric nature of this situation can itself be a major cause of issues and dysfunction. The supplier will be reluctant to point out a client's shortcomings, even when those shortcomings have a material impact on the quality of service the supplier can provide.
The client, on the other hand, may not be aware of or open to addressing its weak points. In fact, the usual solution to issues with an agency is to simply fire it and hire a replacement. But if the client’s way of operating has not changed, isn’t it likely that the same issues will re-occur with the new agency?
So how can businesses get the maximum value from their external suppliers? And how can they avoid unnecessary supplier turnover if and when communication does break down or deep-rooted issues occur?
This is where not only research but the insights and actions drawn from the resulting data can help. Regular mutual evaluations by both sides, benchmarked against a large set of external data, can drive meaningful performance improvement. Data (below) from over 13,850 relationship evaluations shows that both Client and Agency performance can be consistently improved over a 12-year period.
Further analysis of these Client-Agency relationship evaluations demonstrates, with a staggering 99.9999% statistical confidence, the co-dependence of Client and Agency performance. Put another way, it provides quantitative proof of Ogilvy's now 50-year-old qualitative dictum that "clients get the advertising they deserve".
The data goes further to show that a Client performing at the highest level assesses its Creative Agency's output to be 37% better than that obtained by poorer-performing Clients. And for Media Planning there is a striking +27% differential. These are huge opportunities to maximise ROI from the billions in precious marcomms funds invested through and with agency partners.
So, not only does this highlight the power of, and need for, insightful, benchmarked and actionable research in this area; it also begs an obvious question: what about the research industry itself, in its role as supplier to major organisations? Would it not also benefit from opening itself up to the kind of scrutiny that other marcomms suppliers undergo with their clients? Would research agencies not serve their clients better if they understood the extent to which they are addressing their clients' needs and, equally importantly, had a structured and proven vehicle to feed back to those clients what they need from them in order to serve them at the highest level?
After all, better work ultimately leads to better client retention. It also drives supplier reputation and the satisfaction generated makes talent retention just that little bit easier.
Wouldn’t it be ironic if the most notably absent marcomms supplier from the process of using research as a tool to improve the quality and value of its service, is the research sector itself?
About the Author
Jeremy Caplin is the CEO of Aprais Ltd, with experience of both agency and client side management. Jeremy has held various positions at data and analytics house dunnhumby and senior marketing roles at Nestlé, Reckitt Benckiser, P&G and monster.com.
By Kevin Gray and Koen Pauwels
“In spite of our strong marketing support, sales of our brand are flagging. Why? What should we do?”
“If we launch this new product, what will it do to our bottom line? Will we just cannibalize our flagship brand?”
These are just two examples of questions marketers around the world ask themselves every day. Unfortunately, there are rarely simple answers and organizational politics and other factors, such as the state of the economy, also come into play, further complicating matters. While some marketing researchers seem to take for granted that marketing is now well-embedded in most companies and that the value of marketing research is universally accepted by marketers themselves, even in Western multinationals these assumptions are tenuous. In the words of one of our contacts, a marketer with extensive brand management experience at MNCs, “Marketing is regarded as fluff” even at many large corporations. The perception that the real work is done by production, sales and engineering is very common.
On the whole, managers, marketers included, seem unprepared to fully leverage either data or analytics in decision-making.1 Many decisions continue to be made based on gut instinct and internal politics, even when sophisticated analytics and Big Data are part of the decision-making process. Though not wishing to resurrect Taylorism2, we feel decisions can be made more scientifically and more effectively through the appropriate use of data and analytics and, more fundamentally, by thinking like a scientist.
Thinking like a scientist isn’t just matrix algebra and programming – these are important tools for some participants in the decision-making process, but they are means, not ends. Thinking like a scientist is a way of looking at the world that helps us tie disparate data and information together to make better decisions in a timely fashion. One need not have elaborate statistical skills in order to think scientifically – most scientists have actually had minimal academic coursework in statistics.3
The first steps are to examine our assumptions and, in a nutshell, to do our homework. Here are a few basic questions we would encourage decision makers to ask themselves:
- What decisions do we really have to make? Why do we think these are the decisions we must make?
- How much of what we “know” about our product category is actually mere guesswork? What do we really know about the competition? Is the competition really the competition? Perhaps our definitions of our category (and core consumers) are too narrow.
- Is it time to revisit our SWOT analyses (Strengths/Weaknesses/Opportunities/Threats)?
- What relevant data do we have and how reliable is it? What data can we obtain that might fill in important blanks?
- When do we really have to make our decision? A decision made too slowly is a bad decision, but a bad decision made hastily is not a good one either.
Thinking like a scientist can help us better judge whether a decision will have the desired consequences and can also bring to light choices that we had not considered.
Dashboards under assorted names are now a dime a dozen, but the utility of many of them is uncertain. KPIs are religiously tracked but many may have no empirical relationship with the bottom line. They are assumed to be connected with sales, market share and profitability, for example, but this assumption might never have been rigorously tested, and some KPIs may only be legacy items with no real business meaning. Chapter 8 of ‘It’s not the size of the data – it’s how you use it’ explains how to connect KPIs to your bottom line, and to drop most so-called KPIs because they are not leading indicators of hard performance.
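The claim that a KPI leads hard performance is testable. As a minimal sketch, on synthetic data (the series names, coefficients and one-month lag are illustrative assumptions, not figures from the book), one can check whether a tracked KPI at month t actually correlates with sales at month t+1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly series: one KPI that genuinely leads sales by a
# month, and one "legacy" KPI that is tracked but pure noise.
n = 48
driver = rng.normal(size=n)                 # e.g. brand consideration
sales = 100 + 5 * np.concatenate(([0.0], driver[:-1])) + rng.normal(scale=1, size=n)
legacy_kpi = rng.normal(size=n)             # unrelated to sales

def lagged_corr(kpi, sales, lag=1):
    """Correlation of the KPI at month t with sales at month t+lag."""
    return np.corrcoef(kpi[:-lag], sales[lag:])[0, 1]

print(f"leading KPI vs next-month sales: {lagged_corr(driver, sales):+.2f}")
print(f"legacy KPI  vs next-month sales: {lagged_corr(legacy_kpi, sales):+.2f}")
```

A KPI whose lagged correlation with sales is indistinguishable from zero is a candidate for dropping from the dashboard; a proper analysis would of course control for other drivers rather than rely on a single pairwise correlation.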
There are many traps that managers can easily fall prey to when trying to unravel the mysteries of the marketplace:
- Presumed causes may, in fact, be effects. For instance, we may observe a huge spike in our paid search clicks together with a spike in online sales and fully attribute the sales increase to the success of our paid search. However, these customers may have already decided to buy from us thanks to other incentives, and simply use search as a lazy way to get to our site. We usually do not have experimental evidence on which to base our decisions and even experiments are never 100% conclusive.
- There may be important variables we haven’t considered. According to London Business School professor Tim Ambler, there should be a KPI for every likely cause of success or failure. Moreover, it is important to cover the main stakeholders.
- We frequently cross tab or plot variables two at a time but this does not account for factors that might mediate or moderate their relationship, which may really be weaker or stronger than the tabulation or graph suggests. The classic – macabre – example is that psychiatrist visits increase people’s suicide risk – this relation holds up, but switches from positive to negative once the third variable (depression) is accounted for.
- In time-series data there are often lagged relationships among variables and one that might seem irrelevant may actually have a long-term impact on sales or some other key measure. This effect could be large or small, beneficial or harmful. For instance, increases in brand consideration and liking often lead to long-term brand benefits, even after the competition reacts.
- There may be a genuine relationship between a marketing input and sales but we may not spot it because the relationship between the two is non-linear or obscured by other variables that we have not measured or modelled.
- We may be confusing a fluke with a trend. The more we seek, the more we will find. We should look at the overall patterns in the data and not just focus on one or two variables.
- Last but not least, be wary of confirmation bias – it’s quite natural to search for, interpret or recall information in a way that confirms our beliefs!
Unless the variables that are truly relevant are statistically identified and tied together, dashboards and other decision support tools may be misleading or at best a waste of money. In extreme cases, we would be better off tracking random numbers generated by a spreadsheet; this certainly would be faster and incur little cost!4 (It is important to recognize, though, that statistical models are simplified representations of reality, not actual reality, and that math can never entirely replace the gut in management decisions.)
Humans are strongly inclined to think dichotomously (e.g., something is either good or bad) even though thinking in terms of conditional probabilities is usually a better reflection of the way the world works.5 We should also be frank and admit that data and analytics are often used to support decisions that have already been made, and that we especially love numbers that match our view of the world! Furthermore, it’s often quite easy to put forth a seemingly good “explanation” of why something has happened after the fact, but being able to actually predict future events is another matter altogether.
Data and analytics have been hyped to the point where many of us are getting sick and tired of hearing about them, and there is also a lot of disagreement about what they mean. In reality, we feel they are still greatly underutilized by managers. This is very unfortunate since we are now at a point in time in which many organizations now have more data and better analytic tools than ever to enhance decision making. However, we should stress that it’s the thought process that’s most important and, by following some of the guidelines we’ve outlined, managers can make better decisions even with limited data and mathematical tools. It’s truly the thought that counts.
1 In the past many marketing research agencies were mainly field and tab companies, often with an Operations department headed by a “Programmer/Statistician.” This person was in charge of fieldwork and data tabulations. Perhaps because of this, marketing researchers to this day often think of analytics as cross tabs or programming. It’s also conflated with ‘Big Data’.
3 See Statistics Done Wrong (Reinhart), for example.
4 Structural Equation Modelling and Time Series Analysis, while highly technical, offer very useful conceptual frameworks for thinking about these issues. The Halo Effect (Rosenzweig), The Improbability Principle (Hand), Risk Assessment and Decision Analysis (Fenton and Neil) and It’s Not The Size Of The Data – It’s How You Use It (Pauwels) are four books that also address these concerns.
5 Instead of “Will this work?”, for instance, “If we assume A, B and C, what is the likelihood of D?” may be the more useful question in many circumstances.
Kevin Gray is president of Cannon Gray, a marketing science and analytics consultancy. Koen Pauwels is professor of marketing at Ozyegin University, Istanbul.
By Laura Finnemore
I recently came across Allan Fromen’s article ‘When Will Market Research Get Serious About Sample Quality’ and found myself wholeheartedly empathising. As a quantitative executive who specialises in online methodologies, I have come across my fair share of suspicious-looking data when we’ve used panel sample – the ‘jokers’ who could compromise the results we deliver to our clients. In fact, in a recent study conducted by McCallum Layton via a UK-based panel provider, a shocking 352 completed interviews out of a total base of 2,000 had to be removed and replaced due to various quality control issues, which included:
- ‘speedsters’ – those people who complete the survey far too quickly for their answers to be truly considered
- ‘flatliners’ – those who repeatedly give the same answer
- nonsense verbatims – random letters, or responses that don’t answer the question
- contradictions in responses – respondent A says he has a son, but then later in the survey, the son magically disappears
- offensive language – I’m all for passionate responses, but when the respondent has simply filled the space with swear words, they have to go!
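Checks like these lend themselves to simple automation. A minimal sketch (the field names, 180-second threshold, and flag labels are illustrative assumptions, not McCallum Layton's actual rules) that flags speedsters, flatliners, and suspiciously duplicated verbatims:

```python
import re
from collections import defaultdict

def quality_flags(respondents, min_seconds=180):
    """Return {respondent_id: [flags]} for common quality-control checks."""
    flags = defaultdict(list)

    def norm(text):
        # Normalise whitespace and case so copy-paste duplicates match.
        return re.sub(r"\s+", " ", text.strip().lower())

    # Group normalised verbatims to catch copy-paste / bot duplicates.
    by_verbatim = defaultdict(list)
    for r in respondents:
        by_verbatim[norm(r["verbatim"])].append(r["id"])

    for r in respondents:
        if r["seconds"] < min_seconds:
            flags[r["id"]].append("speedster")
        if len(r["grid"]) > 1 and len(set(r["grid"])) == 1:
            flags[r["id"]].append("flatliner")
        if len(by_verbatim[norm(r["verbatim"])]) > 1:
            flags[r["id"]].append("duplicate verbatim")

    return dict(flags)

respondents = [
    {"id": "A", "seconds": 95,  "grid": [3, 3, 3, 3, 3], "verbatim": "Great product, love it"},
    {"id": "B", "seconds": 420, "grid": [1, 4, 2, 5, 3], "verbatim": "great  product, love it "},
    {"id": "C", "seconds": 600, "grid": [2, 4, 3, 1, 5], "verbatim": "The packaging feels dated"},
]
print(quality_flags(respondents))
```

Checks like the contradiction example (the disappearing son) need survey-specific logic, but the mechanical flags above can run on every project before analysis begins.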
Bearing this in mind, we really owe it to our respondents to provide them with engaging and stimulating surveys to make sure they don’t get bored. But when the average panellist is on 5-6 panels, and receiving many invites per week, it’s difficult to make our surveys truly stand out.
Most issues come from real life respondents, but one of the most worrying trends for me is the growing sophistication of automated programs, designed to ‘cheat’ our carefully constructed questionnaires. Whilst checking the data on a different survey, we found 30 completes that seemed to draw on a standard set of around 8 verbatim responses – the phrasing, punctuation, spacing and spelling mistakes were identical, and couldn’t have come from unrelated ‘real life’ respondents. More worryingly, these verbatims all referenced the topic of the questionnaire, so wouldn’t necessarily be detectable to the untrained eye. When we approached the panel company to report this, they said the IDs in question came from 30 completely different IP addresses, and they simply couldn’t have uncovered these fraudulent responses using their own initial checks. Once some retrospective digging was done, the perpetrators were found, but the panel provider wouldn’t have been aware if we hadn’t flagged it.
Interestingly, when the same survey was relaunched over a year later, we spotted the same bank of 8 verbatims being called upon again. Having just completed the 4th wave of the research, it’s still an issue and despite changing panel provider, we have to remain vigilant to this kind of activity.
So I think it falls to us – the researchers and analysts – to give detailed feedback to our panel partners to root out the people who are consistently providing us with unreliable data. Speaking to others in the industry, I’m not sure that the process of checking data quality is deemed to be as important as the analysis and reporting stages. If everyone contributes to this effort, we can help to drive sample quality to the top of the agenda. And if these fraudsters are proving elusive, we need to (at the very least) replace these interviews so our clients are always getting the best possible quality of data.
Laura Finnemore is a Senior Research Executive at McCallum Layton
By Tim Macer
Does market research have an innovation problem? And, if so, what can it do about it? These were questions I asked of four leading thinkers and practitioners in innovation: a major buyer of research, a provider of consumer insight away from conventional market research, a market researcher who advises companies on innovation and an ex-research buyer turned MR technology innovator. There is remarkable consistency in the views they express. The industry’s caution is not necessarily seen as a negative – but how market research fares as technology drives innovation is where views start to differ.
Andrew Geoghegan is global head of consumer planning at Diageo. As the world’s largest premium alcoholic beverages company – and the name behind major brands such as Smirnoff, Johnnie Walker and Guinness – Diageo is a significant research buyer. Though Geoghegan started out as a researcher in a big research company, his career has subsequently revolved around innovation in consumer products and, now, a senior role in consumer marketing.
Compared with other sectors, Geoghegan considers market research not to be known for innovation. He says: “I think it has been slow to incorporate new thinking from psychology and neuroscience, and it’s probably slower than other segments in embracing and really facing into changes in technology.”
He contrasts this with the consumer-focused sector in which he now works. “We are very aware of how technology is affecting the way our consumers interface with our brands and our media,” he says, which has led to fundamental changes in how the business operates. “I think the paradigms around research collection, design and analysis are pretty fixed. There is some innovation, but it tends to be in pockets, and it is iterative rather than transformational.”
He says the big research customers are “crying out” for innovation. “I would love for the research industry to partner with clients like us [to innovate], because there is a loss of faith in a lot of the methods out there. You hear people say research is about risk management rather than inspiring better thinking and better ideas.”
Asked if research clients are discouraging innovation with their focus on “quicker and cheaper”, Geoghegan responds: “The research industry needs to understand the world we are living in. Agencies would be naïve to think we are not looking for innovation in efficiency and quality, but I think it’s an ‘and/and’ conversation.”
With consumer brands, he says innovation is often achieved through “premiumisation” which creates new opportunities. It also forces increased commoditisation in the core products, which drives innovation around efficiency. He contends that the commoditisation is taking place in research without the accompanying premiumisation and that agencies are failing to realise that “money is always found” if a company presents something that can bring new insights as well as efficiency, such as “something that provides quality research in a responsible way – because responsibility is, as you would expect from an alcohol company, our mantra.”
Geoghegan exhorts research companies to practice what they preach, get closer to their customers and take a more holistic, less project-driven view. He thinks the focus on process can act as a barrier, and the only conversations he has with many research providers at his level are when things go wrong.
“It’s a small handful of agencies that seeks to have a longer term strategic relationship with me,” he reports. “Innovation is all about cultural change, and empowering people to see that change is a core part of what they are there to do – rather than seeing themselves as people who are part of a process.”
Stan Knoops is global head of insight at International Flavors & Fragrances Inc., a major provider of fragrances used in consumer products, and another user of research. His background is in food science and consumer behaviour, and his career within companies in the fmcg sector has largely been about bringing consumers into the product innovation cycle. As an occasional buyer of market research, he is equally critical of market research’s track record on innovation.
“For me, innovation is really important, and that is how we differentiate ourselves. I do not think there is a culture of innovation in the market research sector. It is, in my opinion, very traditional. There are some pockets of innovation, but in the last 20 years, maybe the biggest things that made an impact are communities and mobile technology.”