May 12, 2022
Getting solid insights from a research project is the dream. There is no better feeling than writing up compelling insights, presenting them to your team, and watching them go out into the world to improve an experience.
But sometimes as I’m writing my insights, doubt begins to creep in. How do I know that this is, in fact, a true insight? What if this is a false positive or a false negative?
Below, I’ve gathered strategies to quell these anxieties and fears and increase the likelihood that your research insights are on point.
Many teams believe research insights will give them the absolute answer to the problem they are trying to solve. However, I'm constantly reminding people that we cannot rely on user research for a definitive answer. This is especially true when it comes to qualitative data.
If we treat our insights as the absolute truth about a population, we won't use them wisely. Always remember that we are dealing with humans. Humans don't fit into the boxes we want to put them in. People come with many confounding variables, and we need to keep that in mind.
User research is a guiding tool to help us think creatively with people (customers, non-customers, users, etc.) in mind. It is a data point to help teams make better decisions moving forward, and ensure they heavily consider unmet needs and pain points during the decision-making process.
Remind your teams that you are here to bridge the gap between users and the organization and help make decisions, not give a final yes/no answer.
Always remember that we are dealing with humans. Humans don't fit into the boxes we want to put them in.
Nikki Anderson-Stanier
Founder, User Research Academy
Interviewer biases are the biases we bring to a study ourselves as user researchers. Although they are unconscious, they are easier to mitigate because we are more in control of them. Still, we have to accept that we all carry these biases and encounter them frequently. Here are the most common ones that can skew your results.
Definition: Confirmation bias is one of the most common and fickle biases a user researcher will encounter. We favor data, quotes, or insights that confirm our existing hypotheses or beliefs and tend to ignore whatever challenges them.
Biased user research example: "How often would you use the 'transcribe video to text' option in Netflix?"
Rewrite: "Tell me about the last time you used a 'transcribe video to text' service." -> "In what situation did you use it?" -> "What was that experience like?"
How to avoid this:
Definition: This is the sunk cost fallacy. The more we invest in something emotionally, the harder it is to abandon it. We are much more likely to continue an endeavor, keep consuming, or pursue an option if we've already invested time, money, or energy in it.
Biased user research example: "Did you stay with your current video streaming service instead of switching to Netflix because of the price increase?"
Rewrite: "What are some concerns about switching to a different video streaming service?"
How to avoid this:
Definition: The clustering illusion is our tendency to believe that the inevitable "streaks" or "clusters" in data are non-random, because we underestimate the amount of variability likely to appear in small random samples. In essence, we see patterns where they don't exist.
Biased user research example: We set out to understand why people unsubscribe from Netflix, believing it is because of a recent price increase. After five interviews (out of a planned 15), four participants have mentioned the price. We decide this is sufficient evidence to rework the pricing strategy.
Rewrite: Finish the interviews before making decisions on why people are unsubscribing. Then, look at all of the evidence. For example, although many people mentioned price, there may be other potential—and more important—reasons for unsubscribing.
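If you want intuition for how easily such a streak can appear by chance, a quick simulation helps. The sketch below is a minimal, hypothetical example: it assumes (purely for illustration) that the true share of churners who would mention price is 50%, then estimates how often four or more of the first five interviews would mention it anyway.

```python
import random

random.seed(42)    # reproducible runs
TRIALS = 100_000
TRUE_RATE = 0.5    # assumed share of churners who'd mention price (illustrative)

streaky = 0
for _ in range(TRIALS):
    # Simulate just the first five of the planned 15 interviews.
    mentions = sum(random.random() < TRUE_RATE for _ in range(5))
    if mentions >= 4:
        streaky += 1

print(f"Chance of >=4 price mentions in the first 5 interviews: {streaky / TRIALS:.1%}")
# Prints roughly 18-19% (the exact binomial answer is 6/32 = 18.75%).
```

In other words, under that assumption nearly one in five studies would open with exactly that "streak" even though price drives only half of all cancellations. Finishing the planned sample is what separates a real pattern from noise.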
How to avoid this:
If you want to get insights about how parents-to-be plan the arrival of their child, it would be best to speak to those who will be parents soon, right? Or, if you want to understand the pet adoption process, you want to talk to those who have recently adopted a pet.
It feels obvious, but there are times when we are in such a rush to talk to users that we don't think through the exact criteria we need to get the best insights. Or we take a shortcut and do internal research with employees, something we should only do in very specific cases.
However, if we do research with the wrong participants, we end up with incorrect information.
For example, I worked at a hospitality company and researched how housekeepers get tasks from our digital platform. The only participants I could get a hold of were the heads of housekeeping, but I figured this would be fine. Spoiler: It wasn't.
These participants didn't use our platform and rarely performed housekeeping tasks themselves. So I was forced to ask them to hypothesize about their colleagues, which left me with wobbly, secondhand information.
When we don't talk to the right people, our insights are significantly less likely to be valid.
Whenever we deal with qualitative research, we have small sample sizes. Sometimes, a small sample size can call into question whether an insight is "important enough." Here are two ways I mitigate the fears behind small sample sizes:
Mixed methods research combines qualitative and quantitative data to get a holistic picture of your customers. Quantitative research helps us understand the "what," while qualitative research helps us understand the "why." Often, we look at one side or the other, but honestly, we need both. There are three main ways you can combine qualitative and quantitative data.
Combining qualitative and quantitative research methods allows you to feel more confident that you highlight the most valuable and critical insights.
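As a concrete illustration of pairing the "what" with the "why," here is a minimal sketch; the funnel steps, drop-off rates, and interview themes are all invented for the example, not drawn from a real study.

```python
# Quant: where users abandon a (hypothetical) signup funnel.
drop_off_rates = {
    "choose_plan": 0.08,
    "enter_payment": 0.31,
    "confirm_order": 0.05,
}

# Qual: themes coded from interviews about each step (also hypothetical).
interview_themes = {
    "enter_payment": ["price shock", "hidden fees", "trust concerns"],
    "confirm_order": ["unclear delivery date"],
}

# Surface the steps where a large quantitative drop-off ("what")
# coincides with qualitative themes that might explain it ("why").
for step, rate in sorted(drop_off_rates.items(), key=lambda kv: -kv[1]):
    themes = interview_themes.get(step, [])
    reasons = ", ".join(themes) if themes else "no themes coded yet"
    print(f"{step}: {rate:.0%} drop-off. Possible reasons: {reasons}")
```

The quantitative numbers point you to the biggest leak; the qualitative themes give you candidate explanations worth validating.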
Triangulating data means pulling data from different sources. One method of triangulation is the mixed methods approach above, but there are other ways to triangulate data, and I use several of them.
Pulling in other data lets you see whether and how your insight has come up in different contexts, and it adds to your supporting evidence.
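To make that tallying concrete, here is a small sketch of checking one theme across independent sources; the source names and tags are made up for illustration.

```python
from collections import Counter

# Hypothetical tags coded from three independent data sources.
sources = {
    "interviews":      ["price", "content library", "price", "ads"],
    "support_tickets": ["billing error", "price", "login issues"],
    "app_reviews":     ["price", "ads", "price"],
}

theme = "price"
counts = {name: Counter(tags)[theme] for name, tags in sources.items()}
corroborating = [name for name, n in counts.items() if n > 0]

print(f"'{theme}' appears in {len(corroborating)} of {len(sources)} sources: {corroborating}")
# An insight echoed by several independent sources carries more weight
# than one that appears in interviews alone.
```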
There may be a time when you stumble upon what you feel is gold. However, only one person mentioned that golden nugget of information. Should you report it? Check out this article for more advice on determining the validity and importance of one-off insights.
In an ideal world, I always say, you should take double the length of the interview to synthesize it. That means spending two hours synthesizing a one-hour interview.
We don't always have time for the ideal, but we do have to synthesize. Skipping or rushing this step can leave you with superficial, questionable, and unreliable insights.
If you don't have time for full-blown synthesis at the end of a project, consider a small debrief after each session.
Overall, there is no surefire way to know that our insights are perfect. Instead, we can take the above strategies into each project and greatly increase the likelihood that we are reporting the most vital and valuable insights to our team.
Nikki Anderson-Stanier is the founder of User Research Academy and a qualitative researcher with 9 years in the field. She loves solving human problems and petting all the dogs.
To get even more UXR nuggets, check out her user research membership, follow her on LinkedIn, or subscribe to her Substack.