May 16, 2023
I get excited about 1x1 interviews. I love them. I could conduct 1x1 interviews for the rest of my life and never get tired. However, nothing is worse than settling into a 90-minute conversation only to find out the participant has absolutely no input for the topic you've planned to discuss.
I remember sitting in one session, knowing I had taken two steps backward with my stakeholders. I had spent months proving the value of user research and developing a generative research program.
Finally, we secured the budget, and I got started. In my excitement, I jumped straight into recruiting as quickly as possible and ended up with a few participants who simply weren't a good fit.
From that moment on, I promised myself that I would make sure my recruitment was on point and that I wouldn't rush through this process. The one thing I changed that made all the difference in my recruitment process was ensuring I had an excellent screener survey.
Screener surveys are an excellent tool for beginning research projects. You use short surveys to qualify participants and ensure you get the best fit for the information you need.
One big mistake I used to make was asking only for demographic information in my screener surveys. Coming from an academic background, where demographic information was a must in many projects, I never thought to focus on behavior or habits.
Unfortunately, when I asked only for demographic information (e.g., gender, age, location), I landed in the same scenario I described above. The participants might have fit a specific demographic, but they couldn't give me the information I needed.
Relying only on demographic information had a knock-on effect: not just less-than-ideal sessions, but superficial deliverables. Because I lacked in-depth knowledge of people's behaviors and had little consistency across participants, my personas were shallow and scattered.
Instead of those personas showing consistent needs, goals, and pain points across participants, I could only guess at similar groups and mostly bucketed them by demographics. With such superficial information, I couldn't empower my teams to make better decisions, so my personas gathered dust in a Google Drive folder.
Once this happened (a few times), I realized just how important it is to write a good screener survey.
As essential as screener surveys are, they can sometimes take time to write properly. It took me some practice to understand how to approach screener surveys optimally. Now, I have a process I love to go through every time I write a screener survey.
Before you even think about recruiting, list out all of the different criteria an exemplary participant would have. During this stage, think about the questions you want to ask them to achieve your research goals and the types of information you need them to give.
Whenever I am brainstorming different criteria, I always ask:
- What gaps in knowledge do you have that you need your participants to fill in?
- What behaviors do you need to understand more?
- What habits are you trying to target?
Once you have brainstormed the requirements, write at least one question for each criterion you identified as important. This way, you can correctly target the best participants for the information you need.
Ideally, your screener survey is about 5-10 questions. Once it gets longer and more complex, you can experience drop-offs.
Writing a screener survey question well can take some time, but it’s worth it when you get stellar participants.
My favorite types of screener questions are multiple-choice because I can get precise with the question and gather important information about the participant.
Here are some question types to avoid:
- Vague questions. You want your screener survey questions to be as precise and straightforward as possible, with no room for interpretation. Vague wording makes it difficult for people to respond properly, so define any ambiguous terms and cut fuzzy language.
- Future-based questions. People have a hard time talking reliably about future behavior ("Would you shop online?"), so focus on past behavior as much as possible ("Have you shopped online in the past six months?").
- Leading questions. These nudge people to answer in particular ways and can leave you with participants who aren't able to answer your research questions well.
A quick callout (mainly because it is a pet peeve of mine!): take a look at any numbers you use in your multiple-choice questions. So often, I see overlapping ranges, making it impossible for a participant to answer accurately. For example:
"How many people currently work at your company?"
Within this question, people could be forced to choose multiple answers. If 150 people work at my company, do I choose option one or two? Instead, reformat the numbers so it looks like this:
✔ "How many people currently work at your company?"
One of the most frustrating experiences as a participant is getting halfway through a screener and then being screened out. Ideally, you ask for the broadest qualifying information at the start of the screener and then narrow it down to more specific information.
Using a combination of open and closed questions helps ensure you get the best participants. For example, a 60/40 mix is safe, where 60% of questions are closed and 40% are open-ended.
Open-ended questions help you see how a participant would answer in their own words without priming them to respond in a specific way. These questions are also helpful if you aren't sure what information to include in a multiple-choice question.
Adding an open-ended question to your screener survey can give clues on how much insight a participant will provide in your study. One-word responses, illogical rambling, or cagey answers can all indicate that a participant may provide a low return on investment.
I have seen many screener surveys that don't give me a way to say a question doesn't apply to me, so I am forced to answer incorrectly.
To avoid this, make sure you include answer options like:
- "Not applicable"
- "None of the above"
- "I'm not sure"
Including an "other" option accompanied by an open-text field is also great practice. This allows participants to answer within their own words and experience.
The last but most crucial step! More than once, I was so excited about recruitment (or so rushed) that I sent out a screener survey without first getting feedback. As a result, there were several typos, and some questions were worded confusingly.
After that mistake, I always test my screener with colleagues to ensure it is as clear, simple, and straightforward as possible.
We own a plant shop in Brooklyn, New York, selling a variety of exotic plants. We have been posting our plants on social media, and people from around the United States keep contacting us to ask if we have an online shop.
We want to consider this as an option, but we aren't yet sure what people would want or expect. We want to conduct user research to understand what potential customers would want and expect from an online version of our shop.
While also looking at the goals, I would then ask myself:
- What gaps in knowledge do you have that you need your participants to fill in?
- What behaviors do you need to understand more?
- What habits are you trying to target?
With this information, I would create a screener survey to make sure I target the right people.
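To make this concrete, here is a minimal Python sketch of how such a screener might be encoded, with the screening logic applied in order. Every question, option, and qualifying answer below is my own illustration of the principles above (broadest qualifier first, non-overlapping buckets, a mix of closed and open questions, early screen-outs), not an actual screener from this scenario:

```python
# Purely illustrative: a possible plant-shop screener encoded as data.
SCREENER = [
    {"q": "How many plants have you purchased in the past 6 months?",
     "type": "closed",
     "options": ["0", "1-2", "3-5", "6 or more"],   # non-overlapping buckets
     "qualifies": {"1-2", "3-5", "6 or more"}},
    {"q": "Have you ever contacted a shop through social media about a product?",
     "type": "closed",
     "options": ["Yes", "No", "I'm not sure"],      # includes an "unsure" escape hatch
     "qualifies": {"Yes"}},
    {"q": "Describe the last time you shopped for plants. What was easy or hard?",
     "type": "open"},  # open-ended; reviewed by hand for depth and clarity
]

def screens_in(answers):
    """Return True only if every closed answer is a qualifying one."""
    for question, answer in zip(SCREENER, answers):
        if question["type"] == "closed" and answer not in question["qualifies"]:
            return False  # screened out as early as possible
    return True

print(screens_in(["3-5", "Yes", "Finding pet-safe plants was hard."]))  # True
print(screens_in(["0", "No", ""]))                                      # False
```

Ordering the broadest qualifier first means most unqualified respondents exit after one question instead of halfway through the survey.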
Preparing a recruiting screener can keep poor-fit participants out of your study and increase the ROI of your research sessions.
When in doubt, screen! It always pays off.
Ready to hold auditions and recruit engaged, expressive participants for your next project? Learn more about dscout Recruit.
Nikki Anderson-Stanier is the founder of User Research Academy and a qualitative researcher with 9 years in the field. She loves solving human problems and petting all the dogs.
To get even more UXR nuggets, check out her user research membership, follow her on LinkedIn, or subscribe to her Substack.