July 2, 2020
As researchers, we have all heard someone pipe up and say, "But you only spoke with seven people!" Sigh. Eye roll. Clenched teeth.
It is not your stakeholders' fault that this question comes up. Companies are asked to be "data-driven," and for many, that translates to backing up claims with large samples. Statistics and hard numbers have been part of company decision-making far longer than qualitative research and the patterns found in conversations.
So what happens when you are on the receiving end of this question? I've gone blue in the face trying to explain: "We only need to talk to a certain number of people before the findings become repetitive." However, it doesn't seem to stick.
Sometimes, the most effective thing to do is to strengthen the perceived validity of our qualitative data with "the numbers." Now, I will be careful not to say "quantifying qualitative data." Qualitative data should not (and cannot, really) be quantified; quant data is a separate entity. However, we can share the trends we found in qualitative research and pull in quantitative data to help "back them up."
I understand why people are hesitant to make vast generalizations about a product, company, or feature based on seven people. How can we take action to make stakeholders feel better about the “danger”?
Most of the real validation happens after the research. I have used the following methods to help stakeholders and teams feel more confident in making decisions based on user research findings.
A great technique for establishing validity is to actively seek alternative explanations of the research results. If you are researching in a vacuum, you only have your own perspective. By bringing others into the analysis, you get a more diverse range of interpretations. Bonus points if you can include people who have been at the company longer than you and people who just joined.
When multiple perspectives are going into the analysis, you are less likely to interpret or assume incorrectly. You can come to a consensus on what each finding means. If you can exclude other possible scenarios or interpretations, you can strengthen the validity of the insights.
There are two surveys I commonly use after a qualitative research project to get numbers behind my insights:
The unmet needs survey is my favorite survey to use for validating qualitative data. This survey comes from "Outcome-Driven Innovation" and "Jobs to be Done,” but you can use it to back up findings from any methodology.
To create the survey, you take the "user needs" you gathered from your qualitative research and craft each need into a statement. Then, in your survey, you ask: 1) "How important is meeting this need to you?" and 2) "How satisfied are you with your current ability to meet this need?"
Each statement should have a direction of change, a success metric, and an object.
As an example, let's say you heard two needs in particular:
Now you create statements based on these needs. Remember: direction of change + success metric + object
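If it helps to make the scoring concrete, here is a minimal sketch of how the two survey questions can be rolled up into a ranking. It assumes a 1-10 rating scale and uses a simplified version of the "opportunity score" idea from Outcome-Driven Innovation (importance plus any gap between importance and satisfaction); the need statements and ratings are made up for illustration.

```python
# A hypothetical example of summarizing unmet-needs survey responses.
# Assumes each respondent rated importance and satisfaction on a 1-10 scale.
# The "opportunity score" (importance plus any unmet gap) is a simplified
# take on the ranking used in Outcome-Driven Innovation; the need statements
# and ratings below are invented for illustration.
from statistics import mean

responses = {
    "Minimize the time it takes to plan a trip budget": [
        {"importance": 9, "satisfaction": 4},
        {"importance": 8, "satisfaction": 5},
        {"importance": 10, "satisfaction": 3},
    ],
    "Increase confidence that the budget covers all costs": [
        {"importance": 7, "satisfaction": 6},
        {"importance": 8, "satisfaction": 7},
        {"importance": 6, "satisfaction": 6},
    ],
}

def opportunity_score(ratings):
    """Average importance plus the average unmet gap (floored at zero)."""
    importance = mean(r["importance"] for r in ratings)
    satisfaction = mean(r["satisfaction"] for r in ratings)
    return importance + max(importance - satisfaction, 0)

# Rank the need statements so the biggest gaps surface first.
ranked = sorted(responses, key=lambda need: opportunity_score(responses[need]), reverse=True)
for need in ranked:
    print(f"{opportunity_score(responses[need]):.1f}  {need}")
```

The needs that score highest (important but poorly satisfied) are the ones worth putting in front of your team first.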
I am careful with feature request surveys but must admit I do utilize them in certain circumstances. People generally don't know what they want, so I never take exact feature requests from a user interview.
Instead, I use this when the team has a problem to solve and has several feature ideas. After idea generation, I will use a survey to ask users which ideas make the most sense. This survey helps the team narrow in on the top two solutions to then test with users.
For example, you hear many users are struggling with properly budgeting a vacation. Your team comes up with several ideas, such as:
Instead of working on prototypes for all these ideas, or deciding internally, you can create a survey to ask users to pick which is most important to them.
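A rough sketch of what tallying that survey might look like; the feature ideas and picks below are hypothetical.

```python
# Hypothetical example: counting which idea respondents picked as most important,
# so the team can narrow to the top two before building any prototypes.
from collections import Counter

# Each entry is one respondent's single pick from the list of ideas.
picks = [
    "Vacation budget calculator",
    "Price alerts",
    "Vacation budget calculator",
    "Spending breakdown by category",
    "Price alerts",
    "Vacation budget calculator",
]

for idea, count in Counter(picks).most_common(2):
    print(f"{idea}: {count} of {len(picks)} respondents")
```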
These two measures, the Single Ease Question (SEQ) and the System Usability Scale (SUS), are a great way to see where the user experience is breaking down. For instance, if we heard during qualitative research that the checkout experience is painful, we could place one of these surveys in the flow. This survey could help gauge, on a broader scale, whether most other users feel the same.
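SUS scoring in particular is mechanical enough to script. The sketch below applies the standard SUS scoring rule (odd-numbered items contribute the score minus one, even-numbered items contribute five minus the score, and the total is multiplied by 2.5); the example answers are invented.

```python
# Compute a System Usability Scale (SUS) score from one respondent's answers.
# SUS has ten statements rated 1-5. Odd items contribute (score - 1), even
# items contribute (5 - score), and the sum is multiplied by 2.5 for a 0-100 scale.

def sus_score(answers):
    if len(answers) != 10:
        raise ValueError("SUS requires answers to all ten items")
    total = 0
    for item_number, score in enumerate(answers, start=1):
        total += (score - 1) if item_number % 2 == 1 else (5 - score)
    return total * 2.5

# Invented responses for one participant.
print(sus_score([4, 2, 5, 1, 4, 2, 4, 1, 5, 2]))  # 85.0
```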
Using whatever form of analytics your organization gathers on user behavior is a low-effort, high-impact way to validate qualitative findings. This is especially true for usability testing. If you find in your usability test results that users are struggling to complete a form, you can use analytics to prove your case.
Open the company's analytics platform and look into the number of errors on that form, click-through rate to the next page, or bounce rate. In my experience, usability tests usually indicate a more significant problem that you can find in analytics.
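As a simple illustration, once you export the raw numbers, backing up the finding is just a couple of rates. The metric names and figures below are hypothetical and would come from whatever analytics platform your company already uses.

```python
# Hypothetical export of page-level metrics for the form flagged in usability testing.
form_metrics = {
    "form_views": 12480,
    "validation_errors": 3615,
    "submissions": 5890,
}

error_rate = form_metrics["validation_errors"] / form_metrics["form_views"]
completion_rate = form_metrics["submissions"] / form_metrics["form_views"]

print(f"Validation errors on {error_rate:.0%} of form views")
print(f"Only {completion_rate:.0%} of form views end in a submission")
```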
In the research world and outside of product/tech, this is called respondent validation. This technique involves testing initial results with participants to see if they still ring true. Beta testing is an excellent way to initially test findings and ideas with a wider (but small) audience.
Your research findings become more reliable as responses stay consistent across a larger number of participants. Use beta tests to see whether participants are using the product or feature, how they are using it, and what bugs, improvements, or innovations come up.
If beta testers aren't using certain features, chances are your actual user base won't be either. Beta testing can help you see how a product or feature would react in the wild, and it is a great way to test with a small but valid sample size.
Nikki Anderson-Stanier is the founder of User Research Academy and a qualitative researcher with 9 years in the field. She loves solving human problems and petting all the dogs.
To get even more UXR nuggets, check out her user research membership, follow her on LinkedIn, or subscribe to her Substack.