5 Major Lessons from 1000+ Participant Screeners

Over the past 5 years, I’ve seen and worked on thousands of screeners. Here’s what I’ve learned.

Your research begins and ends with your participants.

Your participants begin (and sometimes end) at your screener questions.

So it follows: we should be putting as much thought and effort into our screeners as we do the rest of our study design.

And yet, as a member of dscout’s Customer Success team, I’ve seen countless screeners that could have been built with a bit more care.

Here are some mistakes I’ve seen crop up time and time again, and some tips for avoiding them:

1. Don’t telegraph what you’re looking for—but do validate responses.

We’ve all had instances where our research participants turn out to be less qualified than we’d expected. From time to time, we even get an outright liar.

You have the power to prevent this from happening.

Write your questions so that each answer choice sounds like it could be the “right” answer, and when it really matters, validate! For example, instead of asking:

Have you made an online purchase in the past 2 weeks?

- Yes
- No



Ask:

When did you last buy something online?

- Within the last 24 hours
- Within the past few days
- Within the past week
- Within the past 2 weeks
- Within the past month
- Within the past 2 months
- More than 2 months ago


Notice how I’ve not only obscured the “right” answer, but also added time ranges beyond my drop-dead cutoff of 2 weeks. A less-than-genuine participant, especially an experienced one, might intuit that “more than 2 months ago” will be an “incorrect” answer. But they’d likely assume that “within the past 2 months” is safe.

You can go even further to validate your participants’ responses. Ask them to tell you about their most recent purchase in an open-ended question. If you’re working with a platform like dscout, or collecting media entries, ask for a picture of their receipt (minus any identifying info) or a screenshot that shows what they bought.

2. Check your assumptions and ladder your questions.

Start with the big pieces, and then narrow them down. Don’t jump right into asking your participants about the last time they wore their socks inside out without first making sure that they wear socks.

For a less ridiculous example, many of my clients assume their participants shop online and jump straight to something like: “What online stores do you shop at regularly?” This leaves participants with no choice but to lie, or bend the truth, if they don’t regularly shop online.

(Also, what does “regularly” mean? On that note, what does “shop” mean? Operationalizing your terms is important, too!)

Question your assumptions. Here’s where I might start:

How often do you shop online? By “shop,” we mean browsing for items with the intent of making a purchase.

- Every day
- A few times a week
- Once a week
- A few times a month
- Once a month
- Every few months
- Less often than that



Or:

What’s your preferred method of shopping?

- Shopping in brick-and-mortar stores
- Shopping on the Internet
- It depends on the situation


Notice how I didn’t say, “shopping offline” and “shopping online.” This might be a personal bias, but I find that using “offline” makes “online” the default, and that subtly implies that online is the “right” answer. Again, try not to telegraph your intentions!

3. Keep it short and sweet.

Let’s start at the beginning: keep it short. This is where empathizing with your participants becomes essential.

Would you want to answer a 55-question screener, with 20 open-ended questions, for no compensation? I don’t think so. 

Do you want to spend the time dissecting your habits and attitudes in a screener? Let me answer that one for you, too: no, you don’t!

Asking a ton of questions in your screener may provide you with a ton of data. But, at best, it leaves a bad taste in potential participants’ mouths. As a researcher, I would eat data if I could, and knowing as much as I can from a screener appeals to me. But survey fatigue is a huge problem in both qual and quant research. And making your participants wonder, “Am I getting paid for this?” or complain, “This is taking too long,” is unfair to them.

At dscout, we recommend that researchers cap their screeners at 15 questions.

4. Ask yourself: are your demographic questions really essential?

Okay, this one will definitely be controversial, but ask yourself: do you really need to know all of your participants’ demographic information?

If someone were to ask you—with no other context or explanation—what your income was, what would you say to them? Demographics, especially the more sensitive questions like income, can make people defensive and uncomfortable: “Why do you need to know this information?” “What are you, the government?” “I’m not telling you that.”

More to the point, don’t forget that behind all the fleshy stuff, we’re all made of the same fabric; humans are humans. If you’re assuming that a certain demographic doesn’t use your product, maybe you should question why you’re assuming that.

5. Get the terminates/DQs/KOs/etc. out of the way ASAP.

Those questions where the “wrong” response tells you a participant isn’t the right fit for your research? I get it: you probably need them. But try to get them out of the way as quickly as you can. Remember that potential participants are spending their precious time on your study—how amazing!—for free at this point. And if they’re terminated, they have no way to recoup the opportunity cost of that time.

Participants generally realize, too, when they’ve been “terminated”—as much as we all try to hide it—and it often makes sense to them. It’s okay to not be the right fit for a study.

But it doesn’t make sense to pour your heart and soul into an open-ended response about how much you care about mowing your lawn, only to realize that the study you were hoping to get into doesn’t want people who mow their lawn. Now, that participant is going to be a little bit more cautious about how much effort and time they put into the next screener they’re in. You can empathize with the participant, or the next researcher who works with this participant. Regardless, try to think of the human either way.

At dscout, we recommend up to three knock-out (terminating) questions, and we recommend keeping them at the beginning of your screener.

To sum it up:

When you build a screener, empathy is your goal. Because in the end, understanding the feelings of our users is the goal of all user research. So why not establish that compassion from the beginning?

Emily Kuhn

Emily Kuhn is a member of the Customer Success team at dscout. She enjoys making research more accessible to non-researchers, spending time outside, and hanging out with any cats she can find.
