April 7, 2025
This article was originally published on February 13, 2024, and has since been updated.
I've never met a user researcher who enjoys the recruitment process (but let me know if you are out there!).
A million things can go wrong during recruitment. In my experience, it has let me down many times, resulting in unfinished studies that never made it past kick-off, and there was very little I could do about it.
The worst recruitment failures, though, were the ones I caused myself: times when recruitment was entirely in my control, and I still recruited the wrong participants, or participants who weren't helpful to the study.
A few years ago, I was recruiting participants for a study on how they planned their last leisure trip. During recruitment, I asked when they had last traveled but forgot to ask whether they were the ones who planned the travel. As a result, I wasted hours talking to participants whose friends or partners had planned the trip.
In another study, I was looking for people who had used a particular feature on our platform. I ended up recruiting power users whose feedback wasn't representative of our broader user base.
Yet another time (and not the last), I got burned by respondents chasing incentives, who answered my screener questions in a way that made them look like a good fit.
Whenever recruitment goes off-track, there is a massive risk to the rest of the study. In all the examples above (and similar situations I've been in), flawed recruitment muddled my results and made my findings unusable.
Since recruitment is such an integral part of the process, how do you ensure you get quality participants?
You need great participants for deep insights and findings. Without them, you risk feeding your teams misinformation and misdirection.
It can be challenging to find exactly the right participants. Even if you take every precaution, a perfect participant is never guaranteed. However, there are steps you can take to tighten the quality control of your recruitment.
The worst feeling is when you get a participant who hasn't had the experiences or doesn't have the information you need for your study.
As mentioned, when I conducted the study about planning travel, it was extremely disappointing to interview participants without that in-depth experience. Instead of asking all the step-by-step questions I had planned and getting the information the team needed, I had to scrap the interviews. Secondhand data is not data you want to use.
The next time I recruited, I made sure to be incredibly thoughtful about the experiences participants had to have, the features they had to use, and the types of behaviors they had to exhibit.
Instead of focusing on basic questions and demographics, I dug into these aspects of my ideal participant. During this stage, think about the questions you want to ask to achieve your research goals and the types of information you need participants to give you.
Whenever I’m brainstorming criteria, I ask myself what information I need from participants and what experiences or behaviors they must have had to provide it.
If I were to redo the travel study, some of the criteria I would have listed are:

- Has taken a leisure trip within the past six months
- Personally planned that trip (rather than having a friend or partner plan it)
As you can see, this is a lot more than asking someone if they’ve traveled in the past six months. Before even recruiting, think about the information you need (starting with a research plan is key) and the behaviors participants must have engaged in to give you that information.
My number one piece of advice is to take your time on this step and ensure the team has aligned on the expected information and criteria.
As you can see in my criteria, I specify someone who has both planned and taken travel in the last six months.
User research is based on participants telling us stories of their past experiences. Therefore, interviewing participants with recent experiences means that you’re getting rich data that can lead to profound insights.
I love to interview people who have had an experience in the past month. If the topic isn't relevant or it’s impossible to find people who have had a particular experience that recently, I’ll extend it to three and then six months.
It might not always be possible to get a recent experience. Some experiences naturally happen in longer intervals, but try to get people as close to the experience as possible.
I didn't use a screener survey for my recruitment for a while, which led to many problems. I lacked consistency in my participants and didn't have precise data to understand whether these participants were the best match. A screener survey gives you objectivity when screening participants for your study.
Once you’ve brainstormed the criteria you need, write at least one question per criterion you identified as important. With this, you can accurately target the participants who can best give you the information you need.
Ideally, your screener survey is about 5-10 questions. Once it gets longer and more complex, you may experience drop-offs.
Using the travel example criteria above, here are some screener questions you could use:

- When did you most recently take a leisure trip? (Within the past month / 1-3 months ago / 3-6 months ago / More than 6 months ago)
- Who was primarily responsible for planning that trip? (I planned it myself / I planned it with someone else / Someone else planned it)
By asking screener questions that align with your criteria, you’ll recruit participants who can give you this detailed information. And by avoiding yes/no questions in favor of more robust answer options, you get additional information to narrow down recruitment even further if you want to.
Combining open and closed questions helps ensure you get the best participants. For example, a 60/40 mix is safe, where 60% of questions are closed, and 40% are open-ended. Even better if you can get to 70% of questions closed and 30% open-ended.
Open-ended questions help you see how a participant would answer in their own words without priming them to respond in a specific way. These questions are also helpful if you aren't sure what information to include in a multiple-choice question.
Adding an open-ended question to your screener survey can give clues on how much insight a participant will provide in your study. One-word responses, illogical rambling, or cagey answers can all indicate that a participant may provide a low return on investment.
For example, in the travel study, I could have asked open-ended questions like:

- Briefly walk us through how you planned your most recent leisure trip, from the first step through booking.
- What tools or websites did you use while planning that trip, and why?
As you can see, my questions are open-ended but precise, giving the participant my expectations of what I need from them.
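If you collect screener responses digitally, the qualification logic described above can be sketched as a small script. This is only an illustrative sketch: the field names, answer options, and the minimum word count for open-ended answers are all assumptions, not features of any particular survey tool.

```python
# Illustrative screener-scoring sketch for the travel study.
# Field names ("last_trip", "trip_planner", "planning_story") and the
# word-count threshold are hypothetical.

# Closed answers that qualify a respondent:
QUALIFYING_RECENCY = {"within the past month", "1-3 months ago", "3-6 months ago"}
QUALIFYING_PLANNER = {"i planned it myself", "i planned it with someone else"}

MIN_OPEN_ANSWER_WORDS = 15  # one-word or cagey answers are a red flag


def screen(respondent: dict) -> bool:
    """Return True if a respondent passes both closed and open checks."""
    recency_ok = respondent.get("last_trip", "").strip().lower() in QUALIFYING_RECENCY
    planner_ok = respondent.get("trip_planner", "").strip().lower() in QUALIFYING_PLANNER
    # Use the open-ended answer as a rough proxy for how much detail
    # the participant is likely to give in the actual interview.
    story_ok = len(respondent.get("planning_story", "").split()) >= MIN_OPEN_ANSWER_WORDS
    return recency_ok and planner_ok and story_ok


respondents = [
    {"last_trip": "1-3 months ago",
     "trip_planner": "I planned it myself",
     "planning_story": ("I compared flights on two sites, built a day-by-day "
                        "itinerary in a spreadsheet, and booked hotels after "
                        "reading reviews for each neighborhood we considered.")},
    {"last_trip": "More than 6 months ago",
     "trip_planner": "Someone else planned it",
     "planning_story": "My partner did everything."},
]

qualified = [r for r in respondents if screen(r)]
print(len(qualified))  # only the first respondent qualifies
```

The word-count check is a crude proxy for the "one-word responses" red flag mentioned above; in practice, you would still read the open-ended answers yourself before inviting anyone.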
For a long time, I didn't over-recruit or have participant backups, until I started getting consistently burned by being "just a few participants short."
I didn't know how to over-recruit for a long time because the concept didn't make sense to me. How would I tell participants they were backups? Would I give them their incentive?
Then I realized I had to make a choice and create backup guidelines that were as clear as possible. I recruited two or three backup participants for each study. In the email, I told them they were on a waitlist and might have an interview if someone canceled.
If they had the interview, they would receive the full incentive amount. If no participants canceled, I gave them a fraction of the incentive for being on the waitlist. When applicable, I also tried to sign them up for another study.
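These waitlist guidelines boil down to a tiny payout rule. In the sketch below, the dollar amount and the 25% waitlist fraction are hypothetical placeholders; the only thing taken from the guidelines themselves is that interviewed backups get the full incentive and unused backups get a fraction of it.

```python
# Illustrative sketch of the backup-participant payout rule.
# FULL_INCENTIVE and WAITLIST_FRACTION are hypothetical values.

FULL_INCENTIVE = 75.00    # example incentive amount, in dollars
WAITLIST_FRACTION = 0.25  # example fraction paid to unused backups


def incentive_owed(interviewed: bool) -> float:
    """Backups who end up interviewing get the full incentive;
    unused backups still get a fraction for staying on the waitlist."""
    return FULL_INCENTIVE if interviewed else FULL_INCENTIVE * WAITLIST_FRACTION


print(incentive_owed(True))   # 75.0
print(incentive_owed(False))  # 18.75
```

Writing the rule down this explicitly, even just in an email template, avoids awkward conversations about who gets paid what.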
In the past, I’ve been overly excited about recruitment (or rushing) and sent out a screener survey without first getting feedback. As a result, there were several typos, and I worded some questions confusingly.
After that mistake, I always tested my screener with colleagues to ensure it was as clear, simple, and straightforward as possible.
Recruitment is a crucial part of the research process and one you have to be very intentional and thoughtful about. By speaking to the right people, you’ll capture the best information possible to enable your teams to make better decisions.
Nikki Anderson-Stanier is the founder of User Research Academy and a qualitative researcher with 9 years in the field. She loves solving human problems and petting all the dogs.
To get even more UXR nuggets, follow her on LinkedIn or subscribe to her Substack.