Asking good questions is an art. But getting answers you can trust is all science.
Science rests on getting the same answer when you ask the same question. If you ask the same question of three separately drawn but equivalent samples and get three different answers, what have you learned? You’ve learned you have a problem.
Unfortunately, lack of reliability is a major concern in the insight world these days. In the old days we had fraudsters, click farms and bored people, all vying to earn a couple of quarters per survey. That was bad. But now, with Artificial Intelligence (AI)-driven bots joining the sample-exchange party, it’s much, much worse—it’s a free-for-all. And data reliability is suffering drastically.
Fortunately, you can protect yourself by asking a few basic questions of any potential sample provider:
Where did the people in the sample come from?
What is the evidence of the reliability of data from this sample source?
What is being done to detect and eliminate AI-driven bots and other bad actors?
Where do they all come from?
The source of the people taking your survey is the first question because it is foundational. Get the wrong people and you’ll have a misleading answer.
Is the sample recruited from people who are part of a loyalty card scheme—and therefore biased in where and how they shop? Or is the sample coming from unknown people who didn’t qualify for other surveys and got bounced to yours via a sample exchange? Or are they even people at all? Did someone send bots to your survey so that they could scoop up the honoraria?
The more you know about a respondent and how they were recruited into a panel, the more certain you can be that they are a real person whose answers you can count on. Ask hard questions of the sample source and don’t settle for half answers. The quality of your work hangs in the balance.
At the Angus Reid Group (ARG) we run carefully targeted advertising to attract people to join the Angus Reid Forum. New recruits are profiled in detail and invited to take a series of engagement surveys—which we scrutinize closely—before they are accepted as full-fledged members of the community.
Show me the evidence
Ask potential suppliers for evidence of the reliability of their sample. Ask them to share data that shows that, if you keep asking the same question, you keep getting the same answer.
At the Angus Reid Group we run omnibus surveys on an ongoing basis. We rotate through a battery of simple questions over time so that we end up with repeat measures for each question. You can see from this graph that the answers are consistent—with any variation being within what you’d expect with nationally representative samples of 1500 adult Canadians. Those straight lines are boring, but they are boringly reliable.
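To make "within what you'd expect" concrete, here is a minimal sketch of the standard margin-of-error calculation for a simple random sample. It assumes the textbook formula with no weighting or design effect (real national samples are weighted, which widens the interval somewhat), so treat it as illustrative rather than ARG's actual method:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a proportion from a simple random sample.

    p = 0.5 is the worst case (widest interval); z = 1.96 is the
    95% confidence multiplier.
    """
    return z * math.sqrt(p * (1 - p) / n)

# For n = 1500 the worst-case margin is about +/- 2.5 percentage points,
# so repeat measures bouncing around within that band are expected noise.
print(round(margin_of_error(1500) * 100, 1))  # ~2.5
```

In other words, two readings of the same question that differ by a couple of points are consistent; a ten-point swing is not.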
Ask any potential supplier to provide similar evidence of reliability. If they don’t have it, ask yourself why not.
As the old proverb says: “Trust, but verify.”
Battling bots
Speeders and straight liners—people who answer the same way over and over—are easy enough to detect, as are people who answer open-ended questions with “asdfg” and “yuio.” But AI-driven bots have changed the game. Their patterns of answering need not be so simplistic, and that makes them hard to identify.
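The "easy enough to detect" part can be sketched in a few lines. The threshold below is hypothetical, chosen only to illustrate the idea; in practice a cutoff would be set per survey (for example, as a fraction of the median completion time):

```python
def flag_suspect(answers: list[int], seconds: float,
                 min_seconds: float = 120.0) -> list[str]:
    """Flag the two easy-to-catch behaviors: straight-lining and speeding.

    `min_seconds` is an illustrative cutoff, not an industry standard.
    """
    flags = []
    # Straight-liner: the same answer to every item in a longer battery.
    if len(answers) > 3 and len(set(answers)) == 1:
        flags.append("straight-liner")
    # Speeder: finished implausibly fast.
    if seconds < min_seconds:
        flags.append("speeder")
    return flags

print(flag_suspect([4, 4, 4, 4, 4], 45.0))  # ['straight-liner', 'speeder']
print(flag_suspect([1, 3, 2, 5, 4], 300.0))  # []
```

Rules this simple are exactly what AI-driven bots are built to evade, which is why they can only be the first layer of a defense.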
Ask potential suppliers what tools they are using to detect bots, and how they are changing as bots evolve too.
At ARG we’ve developed a multi-dimensional toolkit for deciding whose answers are to be trusted and whose are to be deleted:

Advanced pattern-detection algorithms that identify bot-like answering.
Knowledge traps that bots can’t identify.
Hidden questions that bots can “see” but humans can’t.
Time stamps to identify speeders.
AI-driven analysis of open ends, checking for duplication and web-based sources.
Tools to identify duplicate IP addresses.

And the set of approaches keeps evolving as the bots do.
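The "multi-dimensional look" idea—no single signal decides, the combination does—can be sketched as a simple scoring function. The signal names and weights here are hypothetical, invented for illustration only; a production system would tune them against known-good and known-bad respondents:

```python
from collections import Counter

def suspicion_score(resp: dict, ip_counts: Counter) -> int:
    """Combine independent fraud signals into one score.

    Weights are illustrative: a filled hidden field (humans never see it)
    is near-certain automation, a failed knowledge trap is strong evidence,
    and the weaker signals only matter in combination.
    """
    score = 0
    if resp["straight_lined"]:
        score += 1
    if resp["failed_trap"]:           # answered a knowledge-trap item wrong
        score += 2
    if resp["hidden_field_filled"]:   # field hidden from human respondents
        score += 3
    if ip_counts[resp["ip"]] > 1:     # same IP appears more than once
        score += 1
    return score

ips = Counter(["1.2.3.4", "1.2.3.4", "9.9.9.9"])
bot = {"straight_lined": True, "failed_trap": True,
       "hidden_field_filled": True, "ip": "1.2.3.4"}
print(suspicion_score(bot, ips))  # 7 -> delete
```

A respondent is then kept or deleted based on the combined score rather than any one tripwire, which is what makes the approach robust as individual signals are defeated.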
Research rests on reproducibility
The quality of your sample is what determines the usefulness of the research you do. With so much hanging in the balance, you can’t afford not to ask potential sample providers these three questions:
Where did the people in the sample come from?
What is the evidence of the reliability of data from this sample source?
What is being done to detect and eliminate AI-driven bots and other bad actors?
With good answers to these inquiries, you can be sure the answers to your questions will be reliable too. Trust, but verify.