When Should I Report a One-Off Insight?
Though one participant’s opinion isn’t statistically significant, there are times when a piece of feedback may point to a larger issue.
A few years ago, I conducted a usability test with ten participants. We reached saturation with our audience, repeatedly hearing similar pain points and insights. In fact, we had reached saturation at around the seventh participant, but I had a small niggle causing me to continue with the remaining three participants.
One participant had mentioned something that piqued my interest. While the participant talked me through their app usage, they brought up a problem they consistently faced.
Unfortunately, at the time, no other participant had mentioned the same pain point, and, despite my hopes, the remaining three didn't mention it either.
That user's pain point kept me up at night and weighed heavily on me as I created the report. I debated whether or not I should include the insight in the deck. After years of defending qualitative research and preaching about sample size, it felt strange, even unprofessional, to include this one-person insight.
In the end, I didn't include the one-off insight, fearing I would be discredited and, worse, met with a sea of rolling eyes. At that point in my career, I wasn't confident enough to stand up for the one-off.
However, that one-off started popping up across different studies, and, in retrospect, I wished I had made the point a few months earlier and saved us some time and effort. Of course, it doesn't always happen this way, but when it does, that 20/20 hindsight gets to me!
Should we report on one-off insights?
Despite loathing yes-or-no questions as a practicing user researcher, I wish there were simpler answers to theoretical questions in the field.
I'd love this article to be short and sweet like, “Yes, go forth and confidently report on those one-offs! Put them in your presentations, and don't take no for an answer!”
But, as with most things in our industry, the answer is "it depends." There are a few things to consider when you are deciding whether or not to bring your one-off insights into the light:
- The severity of the problem
- The impact of the insight
- Why the problem occurred
Let's dive into each of these areas.
The severity of the problem
This point is the most crucial when determining the importance of one-off insights. How severe is the problem you observed? Whenever I am conducting usability testing or even generative research, I give a score to each issue I encounter:
- Severe: The user failed the task, could not use a critical part of the product, gave up, or lost data.
- High: The user struggled with the task and was barely able to complete it; the issue significantly limits their ability to complete tasks and use the product.
- Medium: The user can use the product and complete tasks, but it takes extra cognitive load.
- Low: The user can work around the issue, and it doesn't stop them from completing a task. This could also be a cosmetic issue.
When it comes to one-off insights, I report them if the issue falls within the severe or high categories. The severity of a problem does not correlate with its frequency.
A research session may uncover an infrequent but severe issue that could significantly impact users' experiences. My first course of action when determining whether or not to include an insight is understanding the severity level.
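To make that rubric concrete, here's a minimal sketch in Python (the names and the example are mine, not a real tool) of the decision rule I'm describing: severity, not frequency, decides whether a one-off gets reported.

```python
from enum import IntEnum

class Severity(IntEnum):
    LOW = 1      # cosmetic, or easily worked around
    MEDIUM = 2   # task completed, but with extra cognitive load
    HIGH = 3     # task barely completed; use of the product is limited
    SEVERE = 4   # task failed, critical feature unusable, or data lost

def should_report_one_off(severity: Severity) -> bool:
    """Report a one-off insight when the issue is High or Severe,
    no matter how many participants ran into it."""
    return severity >= Severity.HIGH

# Hypothetical example: one participant lost data during a critical task.
print(should_report_one_off(Severity.SEVERE))  # True
```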
The impact of the insight
The next part of the puzzle is understanding the potential impact of the insight, whether it be positive or negative. Since we are usually working with small sample sizes and qualitative data, we aren't looking for statistical significance.
I used to run statistical tests to try to determine the exact impact. Spoiler: this didn't work. There are tables you can use to estimate how many people you'd need to interview to see a problem crop up more than once, but the required sample size was usually unattainable for our studies.
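As a rough illustration of why those tables push the sample size so high, here's a back-of-the-envelope sketch (the 10% incidence rate is an assumption for the example, not data from any study): treating each session as an independent draw, basic binomial math gives the chance of seeing a problem at least twice.

```python
def p_seen_at_least_twice(incidence: float, n: int) -> float:
    """Probability of observing a problem at least twice across n sessions,
    assuming each participant independently hits it at rate `incidence`."""
    p_zero = (1 - incidence) ** n                         # never observed
    p_once = n * incidence * (1 - incidence) ** (n - 1)   # observed exactly once
    return 1 - p_zero - p_once

# Assume a problem that affects roughly 10% of users.
for n in (5, 10, 30, 60):
    print(f"{n} sessions: {p_seen_at_least_twice(0.10, n):.0%}")
# Roughly 8% at 5 sessions, 26% at 10, 82% at 30, and 99% at 60:
# far more sessions than most qualitative studies can afford.
```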
Instead, because we work with small sample sizes in qualitative research, that one person could very well represent a broader, significant audience. When we couple this with the severity level, the issue becomes more impactful. If we have a severe problem that could affect a broader population, the insight becomes more critical to fix.
Since we won't know precisely how many people it could impact, there are two things I brainstorm:
- Who else could this impact? How might this problem impact segments, personas, or roles?
- Is the insight about one participant disagreeing with the rest, or are they encountering a real problem in their flow?
Why the problem occurred
The final consideration is why the insight or issue occurred. Qualitative research is all about getting to the root cause, and that "why" is essential for one-off insights. With usability testing, we can sometimes focus on what happened, but why it happened can turn a one-off insight into a "must-solve" issue.
I'll give a real-life example. I spoke to a group of users about a credit-card scanning feature in an app. Instead of inputting their credit card information, they just had to "scan" their card via their camera app, and the information would pre-fill.
Great, right? Everybody completed the task, but one person hesitated as they scanned the test card. I asked them about this, and they said, "Well, I don't like to take photos or anything of my credit card." When I dug deeper into why, the person revealed they wouldn't trust a feature like this.
The rest of the team wanted to dismiss this, reasoning that the participant clearly didn't understand we weren't taking a photo but simply capturing the card information, and that all the other participants had understood and completed the task.
The team went ahead with the project, but the feature completely flopped, and it caused some users to lose a sense of trust in our company.
What to do with one-off insights?
If you decide to report them, I can't lie; you might get pushback from your team. Many data-driven colleagues will look at your sample size of one and believe it is not enough. But that is where the considerations above come into play.
For a while, I used the following analogy for small sample sizes: "If we see a person pushing a door instead of pulling, how many other people do we need to watch do this until we fix the problem?"
While this analogy is fine, it doesn't capture urgency or severity. Colleagues have said: well, what if one person wants a revolving door? Or a sliding one? Or no doors at all? What if that is the only person who pushes, and then we mess everything up for everyone who pulls?
So, I took the analogy a step further to make clear that reporting the one-off is about severity: "Imagine one car falls off a bridge (assuming the driver is unimpaired). How many other cars do we need to watch fall off the bridge until we fix the problem?"
That analogy helped. If you decide not to report on one-off insights, there are a few other things you can do:
Keep track in a spreadsheet
When I discovered the importance of one-off insights, I created a spreadsheet and added every one-off from my research projects, including some of the lower, more cosmetic issues. Whenever I heard a one-off during a session, it went into the sheet.
Over time, I could see which one-off insights became more frequent issues to solve. These issues then moved off the spreadsheet into a relevant report.
This spreadsheet method was a great in-between for me. I was able to keep the one-offs in mind but not report on every single one.
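If a spreadsheet ever feels too manual, the same bookkeeping fits in a few lines of code. Here's a minimal sketch (the studies, insights, and column layout are invented for illustration) that flags one-offs once they resurface:

```python
from collections import Counter

# Each row mirrors a spreadsheet entry: (study, insight, severity).
one_offs = [
    ("Checkout usability test, Jan", "hesitant to scan credit card", "high"),
    ("Onboarding test, Mar", "confused by the currency selector", "low"),
    ("Checkout usability test, May", "hesitant to scan credit card", "high"),
]

# Count how often each one-off insight has come up across studies.
counts = Counter(insight for _, insight, _ in one_offs)

for insight, count in counts.items():
    if count > 1:
        print(f"Recurring ({count}x): {insight} -> move into a report")
```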
Do more research
Another great option when encountering a one-off insight is to use it as a jumping-off point for another project. If I could have redone the credit card scanning project, I would have:
- Fixed some of the bugs and usability issues users encountered
- Run a quick follow-up project based on that participant's hesitations
This project wouldn't have to be a large qualitative study. I could imagine a quick survey asking about attitudes or sentiment toward the feature, or an unmoderated usability test run over the weekend to see whether we found more hesitancy across a larger population.
If the insight is severe, with a potential for a large impact, maybe you don't need additional research. However, if you feel stuck and uncertain, go out and research more!
The judgment is up to you and the team
As you might have noticed, there isn't a one-size-fits-all answer for these types of insights. It will be up to you and your team to weigh the variables mentioned above and understand the importance of this one piece of data.
Use the severity, the impact, and the why to help you understand the importance of the insight. And build up your confidence in reporting your results.
There are many times I wish I had been more confident and stood up for these insights! They could lead to important discoveries or fixes that save your team time, effort, and money. So don't just discard the one-offs; use the data you have to make a more informed decision!
Nikki Anderson-Stanier is the founder of User Research Academy and a qualitative researcher with 9 years in the field. She loves solving human problems and petting all the dogs.
To get even more UXR nuggets, check out her user research membership, follow her on LinkedIn, or subscribe to her Substack.