When some people think of a "professional" panelist, they think of someone who is able to make a living by participating—exclusively—in research studies. I generally find that research incentives amount to a reasonable hourly rate for the research itself.
For example, a pretty typical 14-day, 3-part Diary mission on dscout will pay $50 and ask for an average of 10-15 minutes of participation every few days. We’ll call it a total of 90 minutes of participation over the 2 weeks—or an hourly rate of about $33. However, I’d have to participate in at least ten similar studies over just that 2-week period to make a thousand dollars a month. Just keeping track of them all would be a full-time job.
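To make that back-of-the-envelope math concrete, here’s a quick sketch. The $50 incentive, 90-minute time commitment, and $1,000/month target are the figures from the example above; everything else follows from arithmetic:

```python
# Back-of-the-envelope incentive math, using the figures from the example above.
incentive_per_study = 50  # dollars for one 14-day diary study
total_minutes = 90        # participation time over the 2 weeks
hourly_rate = incentive_per_study / (total_minutes / 60)

# To earn $1,000/month, a panelist needs $500 per 2-week window.
monthly_target = 1000
studies_per_window = (monthly_target / 2) / incentive_per_study

print(f"hourly rate: ${hourly_rate:.2f}")               # hourly rate: $33.33
print(f"concurrent studies needed: {studies_per_window:.0f}")  # concurrent studies needed: 10
```

A decent hourly rate in isolation, but ten overlapping diary studies at once is the workload that makes “professional” panelist a practical impossibility.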
Say we take a less literal approach to defining a “professional” panelist—say, a panelist who simply participates a lot. In this case, researchers may believe that, because these panelists participate so frequently, they are somehow tainted: their data is biased, their motives are impure, their responses are too polished to be authentic. Let’s dive a little deeper into these concerns.
“They’re giving the right answer, not their real answer.”
The primary concern I hear about professional panelists from my clients is that they “know” the “right” answers—or they “know how to get accepted into studies.” This is probably the most well-founded concern. Yes, in my experience, a lot of research questions are written in a way where a participant can identify the patterns, solve the puzzles, and “guess” the right answer—but that’s not a participant problem, that’s a research problem.
Almost anyone in the world can guess the “right” answer to a "yes-or-no" question if given the right context clues—whether or not they’ve encountered the question before. We’re made to try to understand where the other party is coming from and puzzle our way to common ground. Yes, your research participants are trying to figure out what you’re asking from them (and yes, they have baser motivations—like incentive payouts—too). A person isn't inherently bad, untrustworthy, or suspicious because they are hoping to be selected and want to answer a question more or less "correctly." And it’s scouts who excel at empathy, theory of mind, and problem-solving who are most likely to be good “guessers” here.
If you are someone who wants recruitment to be as natural as possible, and you’re still leveraging remote research tools to do it, there are some ways forward.
Specifically, write your recruitment questions so they can’t be solved. And where you can’t, validate. Here’s an example:
If you’re looking for folks whose teenagers use a certain brand of deodorant, it’s probably tempting to screen for folks with teenage children and then jump straight into asking what brand of deodorant they buy for their teen. Instead, it’s safest to start by ensuring that their teens wear deodorant at all. (Okay, hopefully you can assume that—but maybe not.) You’d also want to check your assumption that the parent buys, or even knows, the type of deodorant their teen uses.
Advantage of “professionals” #1: They enjoy research
On the other hand, a “professional panelist” is someone who participates in research projects frequently, which implies, for them, it is worthwhile. They enjoy it. What a blessing! What an absolute treasure, a gem—someone who is doing this because they want to.
You’re trying to solve your business’s most important problems, and this person is willing to help you, even though it is not their job. Sure, they’re getting paid—but as we mentioned above, it’s not enough, on its own, to support them. Personally, I am so grateful for research participants. After my workday, all I want to do is turn my brain off—work out, read a book, watch TV, etc.
They’re willing to answer brain-busting questions about their experience, ideate new products, record themselves shopping at a grocery store—these are incredible requests, and it takes incredible people to do them. Participants who are confused, frustrated, or bored are not likely to provide high-quality data; participants who are excited about providing their feedback are going to provide the best data. Building a coalition of committed participants you can tap again and again is actually a foundational approach to cohort or private panel building. Connections form, the experience improves, and empathy grows between brand and user.
Advantage of “professionals” #2: They understand research
In addition, when you’re working with platforms like dscout or using methodologies like focus groups, there’s a lot of value to a participant who has some experience with the format, the methodology, the style of the research.
dscout participants learn how to record high-quality videos, submit thorough entries, and—most importantly—take feedback from researchers like you on the quality of their data. Green research participants are more likely to be frustrated by researchers asking probing questions or providing other feedback; they’re also more likely to be nervous and to respond from a place of anxiety.
Anecdotally, I find that “green” participants are much more likely to “tell you what you want to hear.” They think that’s what research is, or they want to make the moderator happy, or they just aren’t sure what to say—so they compliment a product or provide tepid feedback.
More experienced participants have finally internalized that they’re there because their opinion is important. They’ve experienced probing questions before, and realize that researchers—for the most part—are really just curious about their experiences.
Advantage of “professionals” #3: They make your projects run smoother
Following a similar line, another concern I often hear with “professional panelists” is that they “know too much.” This time, the concern is their understanding of research logistics, competitive products/brands, etc. This, again, is actually a benefit. These participants know how to help you!
They can answer your questions actionably. You’re not going to need to ask five follow-ups just to get an answer to your first research question. And if you do need to ask a follow-up, they’re not going to get frustrated, or unexpectedly defensive, or tell you that you’re not paying them enough.
They know the beats of research—that they need to be screened into a study, that incentives can take some time to process, that researchers often aren’t the end client. All of these little background nuances that you wouldn’t think a participant should know about, but are actually really comforting for them to get to understand.
The concern folks often have with participants “knowing” all about competitive products, brand news, etc. can absolutely be valid. I’ll cover a few tactics for avoiding this situation, and screening smarter broadly, in the next section.
Recruiting "solutions" and considerations
I have experienced the deflation of proposing a “brand new idea!” to a research participant and hearing, “Oh yeah—I heard about that from [X company].” Of course, everyone’s nightmare is hearing something like, “I was talking about this idea for [Y company] at a focus group last week,” during a focus group conducted in front of [Z company] executives.
Although I still feel emphatically that experienced research participants tend to be higher-quality participants, I don’t advocate for allowing these types of situations to occur.
It is also totally fair to not want to hear from the same people every time you run research. I’m not making an argument for running your research with the same five individuals every time. That’s very different from not wanting to run research with people who participate in other research.
Luckily, there are easy ways to control for these scenarios—and to screen, in general, for the participants you want.
Screen behaviorally, screen specifically.
Write your recruiting requirements, and screening questions, behaviorally and with intention. Blindly disallowing broad swathes of “professional” participants won’t necessarily save you from the nightmare situation above. Instead of broadly disallowing folks who participate in research or have participated with some frequency, try asking questions like:
- “When was the last time you talked about the future of [product category]?”
- “Which of the following topics have you participated in research on within the past 3 months?”
Keeping your screener questions specific, careful, and behaviorally based gives you and participants a better experience.
Write questions that are harder to guess at
I haven’t said it outright yet, so here it is: hide your intentions. As we discussed above, it’s human nature for participants to answer with what they think you want to hear. Sometimes it’s for the incentives. Sometimes it’s because humans want to be “right,” relate, and feel part of a group. It’s crucial to write your questions so that every answer sounds right—even the open ends.
And wherever you can, validate. If your research/screening platform (like dscout) allows you to collect images and videos from participants—ask them to upload a picture of their [product that you’re recruiting based on] to prove that they have it.
Ask them to talk about their experiences as an underwater basket weaver in video format, so you can see their facial expressions and really feel them out. Failing that opportunity, even an open-ended text response gives you the opportunity to get a sense of how authentic they’re being.
Rethink how you label “professional” panelists
I propose that we stop calling these folks “professional panelists.” There’s too much baggage with this phrasing and it’s at best highly inaccurate. At dscout, we call these folks "super scouts." Ace participants! Even if I haven’t convinced you to be that empathic about their role in research, try calling them "experienced," instead of professional and see how that impacts the way you think about them.
Additionally, remember that humans are humans. Of course they want to tell you what they think you want to hear, and of course they want to make money, and of course they’re going to try to solve the “puzzle” that is your screener. This is human nature—go in knowing that.
Carefully consider how you talk about participants (and bias) in your share-out
I hear so much about biased samples, and about the bias that an experienced participant brings into a research study. But if we think critically about it, every research sample is biased. We are not actually capable of running unbiased research—every sample we’ll ever work with is necessarily a convenience sample.
Ethics demand it: you can’t run user research with participants who haven’t signed up to participate voluntarily. Because, of course, a truly random sample of your users would include folks who don’t genuinely want to participate—and are here with ulterior motives. It would include the folks you always purposely try to exclude, including the ones who use the product “wrong,” those whose relatives work in media, the ones who answer every question with “I won’t do what you tell me,” etc. If we’re honest with ourselves, we don’t truly want to conduct research with a completely random unbiased sample—we would not know how.
So whenever you report on user research, be open about the type of participants that supported it. If the research industry kept the “limitations” sections that academic research requires, I think we’d all be in a better—although perhaps more insecure—place.
Maybe your stakeholders already have concerns (some justified, some not) about qualitative research, so you might be reluctant to add new ones about “repeat participants” or “participant bias.” But the point of qualitative research isn’t to say, “X% of our users said this thing.” It’s to say, “Folks like this person think this way about this thing—use that to shape your thinking.” It’s to give stakeholders the context, the reasons, and the feelings.
Imagine the benefit that would come from having a frank conversation with your stakeholders about how all research is biased, but there are steps we can take to control the impact of that bias on our results.
I hope you're starting to reconsider the notion of "professional" panelists and how they are mostly a fantasy and—if we stop and consider it—a benefit for our studies more broadly.
Enlisting the help of folks who enjoy research can get us more interesting, more impactful qualitative data. And with some careful consideration given to screening, we can enjoy the benefits of their know-how without adding additional bias to our data.