Triangulate User Research Data for Better Outcomes
Sometimes you don't have the luxury to work with large sample sizes. Using triangulation in a strategic way can help make up for that.
If you’re like the majority of user researchers, you’ve probably heard the maddening phrase:
"You only spoke to seven people?!"
Or maybe you’ve used one of my tactics and stopped using numbers, replacing them with percentages. But still, the question comes up:
Stakeholder: "How many users did you talk to?"
Me: *Hesitating* "Seven..."
Stakeholder: "Only seven?!"
Same outcome, different pathway.
It would be wonderful to speak to more users. Unfortunately, sometimes it's impossible given timelines.
I once told a stakeholder how long it would take to reach a "statistically significant" sample size by speaking to 25 users—and he spit out his coffee. It was about four months longer than the timeline budgeted for the project.
So, how might we balance small sample sizes and encourage stakeholders to take our research seriously?
One excellent method is triangulation.
What is triangulation?
I re-discovered triangulation a few years ago. I had used this concept a lot during my academic research, but it fell by the wayside when I got into user research.
Triangulation means you use multiple data sources to strengthen your insights and findings. These data sources can be primary (information you collect directly) or secondary research (information you don’t collect directly).
For example, a primary research source would be speaking to users in a 1x1 interview. A secondary research source would be looking at app reviews or product analytics.
You can do triangulation in two main ways:
- Use a combination of primary and secondary data
- Use two (or more) primary data sources
Technically, you can use multiple secondary data sources to triangulate, but it’s always better to have a primary data source. This is because primary data is much more reliable and can be better controlled.
For instance, if I speak directly to people and ask them questions I need the answers to, I’m more likely to get reliable and relevant data than if I comb through reviews looking for answers to my questions.
However, both primary and secondary data can help you fortify your qualitative insights when combined. The goal behind triangulation is to get a holistic picture by pulling from multiple sources.
When you take this mixed-methods approach, you can go beyond your five users to better understand the what and the why.
Ways to triangulate qualitative data
The most common way I have triangulated data in the past is by using primary and secondary data together. I use this method because it is faster than combining two primary sources.
For instance, if I’m running a usability test, I’ll look at product analytics and a tool like FullStory to see if the insights I've found from the test match up (or not!) with the other data sources.
This approach goes faster than conducting a usability test and following up with surveys and interviews. However, I love to combine two primary sources if I have the time.
How do you know what to combine?
I typically think about a mixed-methods approach when triangulating data. This means combining qualitative and quantitative data sources.
For example, if I run a qualitative study like 1x1 interviews, I will complement that with quantitative data—either from a survey (two primary sources) or reviewing usage data (a primary and secondary source).
I attempt to focus on mixed methods because it helps with the common problem of small sample sizes. If I run a usability test with only seven people, but I can back up my insights with usage analytics from hundreds (or thousands) of people, colleagues are much less likely to harp on my small sample size.
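As a rough illustration of that sanity check, here is a minimal sketch with entirely made-up numbers and a hypothetical checkout task: it compares the failure rate from a seven-person usability test against the drop-off rate in product analytics to see whether the two sources point the same way.

```python
# Hypothetical numbers: a 7-person usability test vs. analytics from 4,000 sessions.
usability_test = {"participants": 7, "failed_task": 5}      # 5 of 7 struggled at checkout
analytics = {"sessions": 4000, "abandoned_checkout": 2600}  # funnel drop-off

test_rate = usability_test["failed_task"] / usability_test["participants"]
analytics_rate = analytics["abandoned_checkout"] / analytics["sessions"]

# If both sources point the same way, the small-n insight is corroborated.
agree = abs(test_rate - analytics_rate) < 0.15  # the tolerance is a judgment call
print(f"test: {test_rate:.0%}, analytics: {analytics_rate:.0%}, corroborated: {agree}")
# prints: test: 71%, analytics: 65%, corroborated: True
```

The point isn't the arithmetic; it's that a 71% failure rate from seven people is far more persuasive when a 65% drop-off from thousands of sessions tells the same story.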
Beyond that awkward exchange, triangulation using a mixed-methods approach does wonders for us as researchers. When combining data sources, you can see and find more about your users. You can also check your biases, assumptions, and insights against a benchmark.
A case study in triangulation
I ran a study focused on 1x1 interviews to understand choosing end-of-life care. Through these tearful sessions, I learned there were many back-and-forths before people came to the website to understand their options.
I had assumed that once people reached the website, they were quick to choose their next steps. It wasn't until I looked further into FullStory and product analytics that I uncovered how long people actually spent there.
But there were a few other stops on the website. People reviewed the options with family members or even shared the link via text or email with others. I also learned that the average time between the first visit to the website and choosing an option was quite long, largely due to procrastination.
Using these different sources gave me an even deeper understanding of the users beyond what they told me during the interviews.
When is triangulation not useful?
As great as triangulation is, it has some disadvantages. They include:
Triangulation may be time-consuming. Instead of just conducting a usability test or interviews, you take on extra work collecting and analyzing additional sources.
This extra work can be worth it, but you have to think of your timeline before delving into triangulation. One time I had the fantastic idea of triangulating a usability study with customer support tickets.
I thought it would be quick and easy. Instead, it took me more time to comb through the tickets and find relevant data than it did to run the usability test.
Another common problem with triangulation is confirmation bias. Let’s say you get your qualitative study findings and want to triangulate this with usage data or support tickets.
Suddenly, you find yourself looking specifically for data that support your insights—and subconsciously ignoring what disproves your findings. Tunnel vision.
It happens to all of us, as aware as we try to be. When triangulating, begin the process before your results are concrete, or enlist a partner to help check your biases.
Confusing or disproven results
One of the worst parts of triangulation is when the results are confusing or disprove what you found. Now what?
As sucky as this might feel at the time, you did a good thing in catching it early. Confusing or false results can lead to really poor decision-making. If you find your triangulating data sources clashing, go back and do more research.
For example, when I find my triangulation pointing in two opposite directions, I re-evaluate the qualitative side. I look back on the questions I asked and the conclusions I drew and often go back to the drawing board.
I ask myself:
- Were we asking the right questions?
- Were our goals aligned with the conversations?
- Can we get this information from the method we used?
- What other methods can we use instead?
- Where is the data getting confused?
Common ways to triangulate data
Since I’m talking primarily about method triangulation, here are some common ways I have triangulated data in the past:
- Desk research and 1x1 interviews*
- 1x1 interviews and product analytics or FullStory
- Diary study and 1x1 interviews
- 1x1 interviews and a survey
- Usability tests and customer support tickets
- Usability tests and product analytics
- Usability tests and app/product reviews (or complaints)
- Heuristic evaluation and benchmarking
- Stakeholder interviews and 1x1 interviews
- Usability test and tree test (or card sort)
- Diary study and a survey
- Satisfaction or usability metrics and 1x1 interviews and usability tests
*1x1 interviews can refer to multiple approaches such as generative research, mental model interviews, jobs to be done (JTBD), etc.
Other data sources to look at:
- Customer support calls (or listening in)
- Competitive analysis
- UX scorecards
While the above examples primarily look at two methods, you can triangulate with more than two sources.
For instance, if customer support tells us users struggle with a particular feature, we can look through usage data and conduct a usability test. Additionally, if 1x1 interviews reveal unmet needs, we can conduct stakeholder interviews and a survey to discover if we've heard this before.
Or, if our product analytics show a decrease in satisfaction, we can look through app reviews and follow up with a hybrid interview and usability test to assess the problem.
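When juggling more than two sources, it can help to keep the bookkeeping explicit. Here's a minimal sketch (the finding, source names, and verdicts are all invented for illustration) that tallies which sources support or contradict a finding and flags mixed evidence as a cue to do more research:

```python
# Hypothetical tally of how three sources weigh in on one finding.
finding = "Users struggle to locate the export feature"
evidence = {
    "support tickets": "supports",
    "usage analytics": "supports",
    "usability test": "contradicts",
}

supports = sum(v == "supports" for v in evidence.values())
contradicts = sum(v == "contradicts" for v in evidence.values())

if supports and contradicts:
    verdict = "mixed -- go back and do more research"
elif supports:
    verdict = "corroborated"
else:
    verdict = "unsupported"
print(f"{finding}: {verdict} ({supports} for, {contradicts} against)")
```

Even a scrappy table like this in a spreadsheet makes it harder to subconsciously ignore the source that disagrees with you.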
There are multiple ways to approach data triangulation. The best part is that you can be creative! When I dove back into triangulation (and when my timeline permitted), I learned more about our users.
On top of that, I felt much more confident about my reports and recommendations to teams. I heard fewer questions about my sample size and found teams taking more action on my insights.
Applying triangulation in the right circumstances can improve your results and the satisfaction around your findings.
Nikki Anderson-Stanier is the founder of User Research Academy and a qualitative researcher with 9 years in the field. She loves solving human problems and petting all the dogs.
To get even more UXR nuggets, follow her on LinkedIn, join her bi-weekly newsletter, or read more of her work on Medium.