The Ultimate Guide to Mixed Methods User Research

Adding methods to your study also adds richness to your findings, expediency to your workflow, and opportunities to collaborate.

Words by Ben Wiedmaier, Visuals by Emi Tolibas

Think about the most rewarding projects you’ve run as a researcher—or the projects you hope you’ll run, and are confident will be rewarding.

You probably weren’t searching for minor website bugs, or asking about a niche preference on your mobile app.

Likely, your favorite projects involved dealing with some pretty thorny problems.

That’s because more often than not, rewarding research puts us thoroughly in the weeds. And mixed methods research is great for “in-the-weeds” research.

Mixing your methods helps you make more sense of complex questions (though it can also be used beneficially for even “simpler” inquiries). And while it gets a bad rap for being time-consuming or complicated, it’s actually a rather resource-light approach to getting accurate results—and crafting impactful deliverables from them.

In this guide, we’ll discuss the ins and outs of effective mixed methods research. If you’ve never mixed your methods confidently before, you should feel more than capable of doing so at the article's end. If you frequently deploy mixed methods research, this guide should help you fine-tune your process, brush up on best practices, and introduce a few new “mixes” to your regularly scheduled research.

If you'd prefer to take in this material through a "micro-course," we talked tactics with some mixed methods experts on a recent webinar. You can stream it on demand here.


Defining mixed methods

Also called “hybrid methods” or “multi-method,” mixed methods research involves blending approaches, question types, or tools for uncovering insights.

In practice, this involves tackling a question through more than one methodology, with the hopes that it’ll provide you a fuller understanding. Many mixed methods studies are designed with both a qualitative and quantitative component. A qualitative approach can provide more context or answer a “why” behind a trend unearthed in a quantitative study. A quantitative approach can help validate a behavior or trend unearthed from qualitative research.

It might help to ask yourself this: Would we benefit from another kind of data? Would that additional stream offer us a better view of the question or problem at hand? If the answer is “yes” to either, you might be best served by a mixed methods approach.

Advantages of mixed methods research

Just as larger samples give you more representative power, diverse methods can grant you a more powerful data stream.

Maybe survey data is better grounds for decision-making than no data at all. But if you’re able to pore over data from a survey, and an interview, and a media journal, you’re going to be a lot more confident that you’re basing your choices on valid insights.

Undoubtedly, there are still some questions and problems that can be addressed with a single method—but complex or ambiguous questions (often, more strategic ones) are usually best served by a multitude of data.

If your question is strictly evaluative or a narrow usability question (i.e., Should this button go here? Which design do users like better?), mixing or hybridizing methods isn't likely to generate new insights or answers.

But when you're tackling a question like "Should we build this?" or "What do people think of this?", mixed methods can be a boon for innovation and strategy decisions. Importantly, the approach is flexible and isn't too prescriptive (we'll get into some best practices later), so "doing" mixed methods work doesn't necessarily mean adding weeks and weeks to your timeline.

Misconceptions about mixed methods research

Despite the myriad advantages of mixed methods research, it’s widely underutilized. Often, misconceptions about the difficulty or resource requirements are to blame. So let’s dispel some myths now.

✘ It’s too complicated

Really, mixed methods research breaks complicated questions into manageable, bite-sized pieces. This also makes it great for collaboration. Most methods don't require advanced degrees.

Yes, conducting a fruitful interview or observing a customer in the field takes some prep, but nothing an on-staff researcher or insights person can't help with. One of the positive externalities of leveraging mixed methods approaches is the built-in collaboration opportunity: you may need the product, engineering, AND research teams to come together to make it happen.

Usability lab studies inform a survey, which is then used to craft a tight interview guide. That process might involve multiple teams, which would align—or at least raise visibility among—folks on the problem at hand. Very often, mixed methods gives folks a chance to try something new, work with or loop in new stakeholders, or both.

✘ It’s too resource intensive

All too often, people think “more methods” means “more time” or “more money.” In reality, mixed methods research usually offers you a way of getting more done with less. Don’t have time to conduct and analyze as many interviews as you’d like? Conduct a few, then follow up with a survey to better validate your findings (more on analysis and triangulation later).

✘ It’s difficult to explain to stakeholders

We get it. Sometimes, you’re tasked with “sticking to what works.” We wrote a whole piece on tactics for getting stakeholders excited about new approaches—but the TL;DR is: mixed methods research generally leads to the exact sorts of deliverables stakeholders are looking for. Combining qualitative, empathy-driving, storytelling data with quantitative, “authoritative,” “impressive-sample” data makes for a truly impactful share-out. So lead with the end result. Say, “Here are the insights you’ll get out of this,” and then walk it backwards to the best way of getting there.

Plus, if you can make the case that this sort of project design makes your work more collaborative and more time- and budget-efficient, you’ll generally win them over.


For a briefer breakdown of mixed methods research, read: Simplified Mixed Methods Roadmap: How to Marry Quant + Qual for More Insightful Results


Choosing your methods

First, and most importantly, there is no "right" way to mix methods. It's all about your aims, stakeholder needs, and timeline.

But it's useful to begin with the standard outputs certain methods produce, as those are what you'll be working with come time for iteration, analysis, and synthesis. Here’s a very broad look for reference (which obviously does not encompass an entire UXR toolkit).

✔ Surveys

Surveys typically produce quantitative data on frequencies, attitudes, and perceptions, via questions like semantic differentials, multi-selects, and Likert-type scales.

✔ Generative interviews

Generative interviews are the richness-capturing mechanisms. Think chunks of text, phrases, and winding perspectives (depending on the kind of interview, of course).

✔ Observational work

Observational work will offer environmental, situational, and researcher-perspective data, which can be useful when one needs to see the whole field, so to speak.

✔ Usability studies

Usability studies produce a different kind of quantitative data: usually time-on-task metrics, heat maps, and even eye-tracking data. It's collected passively rather than actively offered by the participant.

✔ Co-creation

Co-creation work can produce traditional open-ended data from interviews or focus groups, or participant-generated data like drawings, models, or other media.

✔ Diary studies

Diary work can be longitudinal, rich in description, and full of media like photos or videos if you're leveraging a research platform. This offers a first-person POV and access to moments you couldn't otherwise observe.


If you're ever at a loss for which method would best serve your research question, this guide to choosing the right methodology is a good starting point. 


How to design a mixed methods study

Each of the methods above hovers at a specific “altitude” in terms of involvement, scope, sample size, and longevity. Some can fly at multiple levels. Keeping in mind which methods fly at which levels offers a handy way to know when (and why) to mix.

✔ For going macro (high level) to micro (low level)

A standard mix or recipe can be a survey (see what's happening broadly, source trends, key terms, and opportunity spaces) and a diary, observation, or interview (dive deep into what the survey uncovered for clarity on the themes).

The deeper dive should clarify some things and surface new questions. So if time allows, zoom back out with a second survey on a different user group (or the same one, if the study design calls for it) with newer, sharper questions. The cycle can repeat until analysis and synthesis start to make sense, or until the deadline arrives.

This approach works well for product development, especially when the need space is undefined and evolving. Surveys offer some potential north stars, and interview, diary, and observational work allows for the selection of one (or two) needs once the nuance is better woven in.

An example

You’re working on behalf of a free healthcare clinic. You want to know who is using its services, but more importantly, you want to get at non-users. How do you capture who's not using your product or service, or not coming through your door?

You could start with a large survey of every patient the clinic has seen to get some baseline data quickly. You might learn there’s some stigma attached to seeking services, or that reaching the clinic wasn’t convenient for folks facing transportation barriers.

Then, to really make sure you’re understanding what networks the clinic isn’t reaching—what barriers exist to using their service and how that aligns with their current patients’ perceptions—you can follow up with interviews.

Reach out to the survey respondents who were particularly eloquent and ask, "What were you thinking about when you were taking this survey? What did you think about stigma? What was its role for you in seeking out free healthcare services? What should we really be asking to better reach your communities?"

✔ Micro to macro

This sequence is useful when incidence rates are low, the topic is sensitive, or a captive (but small) audience is the starting point. Here, nuance and context from interviews or diary studies create themes (e.g., How do non-customers understand privacy?) that are then pressure-tested with larger, more diverse samples via surveys or back-end "big data" collections (generally from analytics software, a customer success team, or data science team).

This is an effective way to make surveys and other higher-altitude methods more productive, as they're informed by on-the-ground nuance that's often overlooked—and impossible to capture—with flat, closed-ended questions.

Innovation strategy or agile design thinking is a strong use case for this sequence, as designs or concepts are iterated at the micro level, then pressure-tested at the macro level later.

✔ Iterative or reactive

When teams have more time, more budget, or evolving stakeholder needs, methods may need to mix in both directions continuously, based on the feedback each one returns. Importantly, mixing methods should be a single-stream process: conducting multiple kinds of testing simultaneously robs a team of the learnings of any one. Methods should inform, sharpen, and improve one another. They shouldn’t be "done" to cover bases while asking the same set of questions.

There are times, however, when the linear—go up or go down—sequence just doesn't fit. In these instances, it's entirely appropriate to go stepwise: that is, launch one collection, explore the findings, and launch another collection (in any direction) that would improve the insight potential.

One interview guide begets two follow-up interview guides, which inform a usability study and, finally, a survey. Each set of "answers" produced by any one method is another opportunity for refinement. The direction taken should be dictated by the needs of the project, so reference the tool outputs above. There's nothing wrong or "bad" about being flexible and reactive in deploying mixes, so long as each is leveraged intentionally. Don't just conduct interviews to have done them.

Sometimes you might want to get recurrent data streams, providing continuous insights on a consistently relevant question. We wrote a guide to iterative/rolling research you might want to explore here.

Hacking mixed methods

If you really lack the time or resources to mix your methods, but want to give your qual research a little quantitative “oomph,” there are some low-lift sources of data you can layer on for more depth.

For example, you can:

✔ Conduct a benchmarking study

These combine quantitative and qualitative approaches. In a benchmarking study, you collect metrics such as time on task, task completion rates, and task usability—while also saving time to speak with the user about their overall experience.
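
To make that concrete, here's a minimal sketch (in Python, using made-up session data) of how those benchmark metrics might be summarized: median time on task and completion rate per task.

```python
# A minimal sketch of summarizing benchmarking metrics.
# The session data below is hypothetical: one tuple per participant-task,
# with the task duration in seconds and whether the task was completed.
import statistics

sessions = [
    ("p1", "checkout", 42.0, True),
    ("p2", "checkout", 87.5, True),
    ("p3", "checkout", 130.0, False),
    ("p1", "search", 15.2, True),
    ("p2", "search", 22.8, True),
    ("p3", "search", 19.9, True),
]

for task in sorted({t for _, t, _, _ in sessions}):
    rows = [(secs, done) for _, t, secs, done in sessions if t == task]
    times = [secs for secs, _ in rows]
    completion = sum(done for _, done in rows) / len(rows)
    print(f"{task}: median time on task {statistics.median(times):.1f}s, "
          f"completion rate {completion:.0%}")
```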

✔ Conduct a heuristic evaluation

Count the number of times a product violates a set list of usability heuristics (like Jakob Nielsen’s Ten Usability Heuristics), and rate each violation's severity. When you hear users are struggling with a certain area of a product, you can dive in and see how the product compares to these best practices.
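
As a rough illustration, a simple tally is often all the quant you need here. The sketch below uses hypothetical findings, scored on Nielsen's familiar 0–4 severity scale:

```python
# A sketch of tallying heuristic-evaluation findings: each entry is a
# (heuristic, severity) pair logged by an evaluator, with severity on
# Nielsen's common 0-4 scale. The findings themselves are hypothetical.
from collections import defaultdict

findings = [
    ("Visibility of system status", 3),
    ("Visibility of system status", 2),
    ("Error prevention", 4),
    ("Consistency and standards", 1),
    ("Error prevention", 3),
]

by_heuristic = defaultdict(list)
for heuristic, severity in findings:
    by_heuristic[heuristic].append(severity)

# Rank heuristics by violation count, breaking ties by average severity
ranked = sorted(by_heuristic.items(),
                key=lambda kv: (-len(kv[1]), -sum(kv[1]) / len(kv[1])))
for heuristic, sevs in ranked:
    print(f"{heuristic}: {len(sevs)} violation(s), "
          f"avg severity {sum(sevs) / len(sevs):.1f}")
```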

✔ Use A/B Testing

When you're unsure how a change will affect metrics (such as conversion rate or usage), run an A/B test to measure the impact of the change.
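
If you're checking conversion rates by hand, a standard two-proportion z-test is one way to judge whether a difference is more than noise. This is a hedged sketch with hypothetical traffic numbers, not a substitute for your experimentation platform's stats engine:

```python
# Two-proportion z-test for an A/B test on conversion rate.
# The visitor and conversion counts below are hypothetical.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)       # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b - p_a, z, p_value

lift, z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=156, n_b=2450)
print(f"lift: {lift:+.2%}, z = {z:.2f}, p = {p:.3f}")
```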

✔ Look at Google Analytics (or your company’s data analytics software of choice)

After you spot a trend in your qualitative data, go to Google Analytics (or any other analytics platform) to see if you notice any usage patterns that validate or disprove the insights.
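
As one hypothetical illustration (the export schema and numbers below are invented, not any particular platform's), even a quick pass over exported page data can show whether usage echoes what you heard in interviews:

```python
# A sketch of eyeballing exported analytics data against a qual finding,
# e.g. "users say they get lost on the settings pages." The CSV schema
# and figures are hypothetical stand-ins for a real export.
import csv
import io

export = io.StringIO("""page,pageviews,exits
/settings,1200,780
/settings/profile,400,90
/checkout,2600,520
""")

for row in csv.DictReader(export):
    exit_rate = int(row["exits"]) / int(row["pageviews"])
    flag = "  <-- echoes the qual finding?" if exit_rate > 0.5 else ""
    print(f"{row['page']}: {exit_rate:.0%} exit rate{flag}")
```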


For more on "quick-quant" methods you can use to back up your findings, head here.


An example case: Tech and Us

The problem

We wanted to study big technology companies and how people feel about them broadly, but weren’t exactly sure where to dig in.

A mixed methods approach

1. Run a generative, unmoderated study

We asked a smaller group (n=100) of participants a broad series of generative questions about their perceptions towards big tech—sussing out their predictions for tech and greater society, tech and government, tech and social media, and tech usage amongst children.

Participants most frequently voiced their opinions on tech companies’ trustworthiness, regulation for tech companies, and our growing societal dependence on tech. However, their usage patterns and general perceptions of tech were overwhelmingly positive.

This gave us the direction we needed to explore further and validate what we heard with a broader sample.

We ran a focused study with four unmoderated sections, covering tech and society, tech and social media, tech and children, and tech and the participants personally. For each section we asked:

  • A series of multiple choice “How much do you agree or disagree with…” questions like:
    • How much do you agree or disagree that the following prediction will come true in 2019? “In 2019, tech companies will self-regulate and reduce the amount of data they collect.”
    • How much do you agree or disagree that the following prediction will come true in 2019? “In 2019, it will be easier to identify 'fake news.'”
  • Open-ended questions like:
    • Which do you think is most likely to come true, and why do you think this is most likely? How would you feel if it did come true?
  • A video response question:
    • Now, we want you to come up with your OWN prediction. How do you think digital technology will affect SOCIETY in 2019? In a 60-second video, tell us your prediction, why you think it’s likely to come true, and how you feel about it. Start the video with “In 2019…”

2. Build out quantitative survey to explore the findings

We wrote a quantitative survey mirroring the generative study and shared it with a broader audience (n=1,000) to validate what we saw. We saw an even greater trend toward positivity—with a sizable majority of Americans feeling positive about big tech and social media.

At this point, we had more questions—now about our sample composition.

3. Re-run a quantitative study to validate the sample

Because we ran the study remotely, we realized we’d surveyed internet-savvy households, and we assumed they would be more excited about tech. So we went back with a probability-based sample that included non-internet households, and looked at how rural, non-internet users impacted our findings. In the end we had a balanced sample with demographics split between rural, urban, and suburban populations, all political affiliations, and all major age, ethnicity, and income brackets. Our results barely changed.

4. Conduct more generative research for added perspectives

We understood “what” people thought about big tech from our probability-based samples—we wanted more context as to why. So we picked a smaller group from the same survey pool (n=100) again and asked them to elaborate on their responses. Some participants poked holes in the assumptions we made from the quant data.

To learn why participants were so optimistic, we asked:

  • A series of open-ended questions asking them to elaborate on their responses, like:
    • Why do you think it’s generally good for our country? In a few sentences, give us some examples of what makes digital tech mostly a good thing.
    • Why do you think it’s generally good for you personally? In a few sentences, give us some examples of what makes digital tech mostly a good thing for YOU personally.
  • A video response question:
    • Despite these bumps in the road, why are you still optimistic about digital technology? What good do you think these large companies do for our country? Tell us in a 60-second selfie video.


How to analyze mixed methods research

There are a few ways to dive into the data that are particularly well-suited for multiple-source insights.

Some tips based on when you plan to start analyzing

If all of your data are already collected, consider starting with quantitative analysis. Use it to generate top-line, broader paths down which you can proceed if and when you begin digging into the more open-ended, qualitative analysis. Knowing something about the nature of the phenomena you're investigating can speed up or sharpen your open-ended theme building.

If you’re working on an agile or iterative team, there’s more flexibility to leverage insights from one method to inform the creation and launching of your next method. For example, ten interviews might turn up five themes about product usage that you'd like to explore at a higher, scaled-up level with a quantitative survey questionnaire.

If you’re on a tight timeline, getting the data in is your first objective. To ease the analysis and synthesis processes, start observing your data as it rolls in: bookmark moments for later review, and lightly tag standout submissions or standout moments in interviews. This both starts to “clean” the data as it comes in and gets you familiar with it. That way, when it’s time to make sense of it as a collection, you’ve already got a head start.

Some advice based on how you plan to analyze

Broadly speaking, there are two approaches to mixed-methods research analysis: top-down or bottom-up.

✔ Top-down analysis

The quantitative data from surveys, reports, and back-end analyses are used to inform, guardrail, or sharpen the open-ended analysis, offering signposts or flags to look for when beginning the qualitative work. For example, if survey data show that account creation is a frequently reported pain point, you might start with the interviews where that part of the process came up, to surface specific customer quotes or examples of that friction.
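
A minimal sketch of that first pass, assuming hypothetical interview notes and keywords, might be as simple as:

```python
# A sketch of a top-down pass: use a pain point surfaced by survey data
# (here, account creation) to pull matching moments from interview notes.
# The notes and keywords are hypothetical placeholders.
notes = [
    ("p1", "I gave up halfway through making an account, too many fields."),
    ("p2", "Checkout was smooth once I was signed in."),
    ("p3", "The sign up form kept rejecting my password."),
]

keywords = ("account", "sign up", "signup", "register")

for participant, quote in notes:
    if any(k in quote.lower() for k in keywords):
        print(f"{participant}: {quote}")
```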

✔ Bottom-up analysis

As you’d expect, bottom-up analysis takes the opposite approach. Here, you start with the open-ended data, generate themes, and employ grounded theory to surface meaningful concepts to gut-check with wider audiences, be those customers or the general public. For example, if you find five discrete personas in a large set of interviews, a bottom-up approach would have you check the frequency of those personas or segments in the existing or would-be customer base, using survey questions designed to categorize. The survey is used to confirm or clarify the analysis of the open-ended data.
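
Here's a small sketch of that frequency check, with invented persona names and survey responses standing in for the real thing:

```python
# A sketch of the bottom-up gut-check: personas derived from interviews
# are counted via a categorizing survey question to see how common each
# one is. Persona labels and responses are hypothetical.
from collections import Counter

# Each respondent picked the description that best matched them
responses = ["Power Planner", "Casual Browser", "Power Planner",
             "Deal Hunter", "Casual Browser", "Casual Browser"]

counts = Counter(responses)
total = sum(counts.values())
for persona, n in counts.most_common():
    print(f"{persona}: {n}/{total} ({n / total:.0%})")
```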


Want more advice on analysis? Try: Foolproof Qualitative Analysis Tactics—For Whether You Have a Month or an Afternoon


Mixed deliverables: Effectively sharing mixed methods research

What good is a mixed or hybrid approach if the readout, report, deck, or whatever the deliverable may be isn't treated with the same approach? Diverse data streams make for more influential and persuasive share-outs. Videos representing themes paired with quant frequencies? Large-sample inferential stats readouts with punchy quotes from interviews?

By strategically weaving together the streams of data collected by mixed methods, the deliverables should offer more of the story.

A few inventive tactics that lend themselves to compelling mixed methods research share-outs:

✔ Research “museums”

Print photos and charts, play videos, and include prompts structured as mini “exhibits.”

✔ Get visual

Qual data lends itself well to comics, storyboards, and, if you're working with video, "movie nights." Back up those visual snapshots with numerical insights on their prevalence for bonus impact.

✔ Share as the research rolls in

If you’re conducting interviews, invite observers to sessions and include them in post-interview debriefs. If you’re including a survey, share the submissions as soon as you have results (rather than waiting until the end of the project). Pass on choice video clips as they come in via Slack, then share “bullet point” updates regularly as the study progresses. Keeping folks in the loop introduces them to the rigor and impact of your findings on the whole.


Check out our article on effective research share outs: How to Present Your Research So That Stakeholders Take Notice and Take Action


Summing it up

Hybridizing your methods takes the best of each tool (e.g., the survey's scale, the interview's richness, ethnography's positionality) and combines them to create a fuller, truer, more actionable picture. Mixing methods also helps bolster outcomes by accounting for the weaknesses in any one method or approach.

With the growth in remote and digital research platforms, it's less daunting to combine approaches. Even if it's a "smaller" sample of several hundred being added to a deeper in-home study, the mixing of methods allows deliverables to speak to wider audiences of stakeholders, increasing the likelihood that recommendations are executed.

Finally, with demand for user insights ever rising, mixing methods can serve as a site for weaving in new teammates and collaborators whose skills might lend themselves to different approaches: for example, designers working on cultural probes, or marketing and sales taking up initial survey design. A team-based approach to research increases the visibility of human-centeredness and can engender more empathy for customers and users throughout the organization.

Ben has a doctorate in communication studies from Arizona State University, studying “nonverbal courtship signals”, a.k.a. flirting. No, he doesn’t have dating advice for you.
