
Foolproof Qualitative Analysis Tactics—For Whether You Have a Month or an Afternoon

Use these tactics to analyze for findings that "fit"—even when your timeline doesn't suit the study.

Words by Kyli Herzberg, Lindsey Brinkworth, Karen Eisenhauer, and Ben Wiedmaier; Visuals by Allison Corr

Well the good news is, you have a lot of data.

The bad news is, you have a lot of data.

The longer, richer, and more nuanced your study, the more time you’ll want to spend immersed in your results. You may have recordings, videos, and photos to sift through. You may have a mountain of open-ended responses to make sense of. And you may have a tiny window of time to wrap your head around it all.

We picked our in-house researchers' brains for tactics they use to do analysis well—on ideal, and less than ideal, timelines.

Here’s their best advice for drawing your “best-fit” conclusions, whether you have a month, or just an afternoon, to do so.


Why having (more) time matters

When we’re asked to do good, strategic research in the context of shifting, agile product decisions—we might have to make some compromises.

But when you can make the case for more analysis time—you should.

A few reasons why extra hours (and eyes!) on your data can be critical to your study’s success:

✔ More honesty with and in the data

True grounded theory ("bottom-up") approaches to analysis require deep familiarity with the data—which more time affords. Those outliers you may not have time to explore could be the key to your next market-shaping innovation. Taking the time to actually look at them can be game-changing.

✔ More gut- and cross-checks

Is what you're seeing really there? With more time, you can move beyond your hunches and pressure-test—asking colleagues and stakeholders to check the reliability of what you're observing. While we all want to avoid having “too many cooks,” another set of eyes (and brains) can't hurt with what is a subjective, perceptual activity.

✔ More space to fail

When you're strapped for time, the pressure to get your analysis and synthesis "right" the first time is high. Having more time gives you the space to play with frameworks, tag lists, and themes. Like gut-checking, more time means you can test your hunches against real data, not just put them in a deck to see if they look nice.

✔ More nuanced understanding

To "understand" the whole user, you need time. Your participants think about more—a lot more—than just your brand, experience, or product. Effective qualitative research considers the whole human. This is especially true when faced with a multitude of different modes of data: open-end, scale, and rich media like videos.

But when your analysis window is tight, you're likely to fly past that meaty "who-they-are" data and scour for the flashy "clear-business-impact" data. When you do good research but leave those less-shiny stones unturned, who knows what critical insights you might bypass?

In the end, the difference between "just enough analysis" and "plenty of time" analysis is like the difference between a tailored jacket and a department store jacket.

When you grab a jacket off the rack, it might fit a little awkwardly. You might feel a little uncomfortable, but ultimately, you can wear it just fine. When you get a jacket custom made—it fits perfectly. Everything falls into place exactly as you'd expect it to.

When you have the time to really sit with your data—the answers you uncover for your research questions really feel right. When you're on a time crunch, the fit may feel a little forced, but you should still get enough insight to move your team forward.


    Analysis when you have ample time for analysis

    The building blocks of good qual analysis are the building blocks of good qual research: immersion, collaboration, and story. Unbounded qualitative analysis gives you a chance to “go deep” on person, narrative, and empathy.

Still, at a 10,000-foot view, any impactful, meaningful, change-producing research project needs to answer questions. Those questions should be grounded in the experience and perceptions of the human "on the other end": the user.

    The basic steps for making it happen:

    1. Immersion
    2. Tagging
    3. Framework Development
    4. Crosstab Checks
    5. Narrative Output & Deliverable


Immersion (1-3 weeks)

Most analysis is hyper goal-oriented. You're seeking to confirm a suspicion or hypothesis for your stakeholders. You don't always have the time to dig into the data and move it around—and instead are left only scratching the surface.

    With more time comes more true immersion. This part of the process becomes a leisurely (but not aimless) stroll. How you immerse will largely depend on your data composition.

Some folks stay digital: dropping video, audio, or images into a highlight reel, and letting the moments wash over them. Sometimes printing the data into a physical artifact—before reading, grouping, moving, and diagramming it—can help inspire some initial hunches. Other times, you might get the best creative yield from dropping snippets, themes, and ideas into a doc or a spreadsheet.

    Either way, themes, patterns, questions, and notes about participants invariably bubble up when one has that much exposure to the data. This just isn't possible when you only have an afternoon to soak up dozens (or hundreds) of discrete moments.

To get a strategic jumpstart, begin to filter your data. If your research tool allows for it, sort your responses by participant, question, demographic, or even date (if the study is longitudinal), and again carefully examine each moment or datum, one by one.


    Tagging/coding (1-2 weeks)

    During immersion, you'll naturally begin spotting trends, patterns, and maybe even the edges of a story that helps answer your questions.

    Tagging (or coding) is where you begin to note what's happening in/with the data, and how. With more time at your disposal, tagging should take two forms: descriptive and thematic.

Descriptive tagging is about indexing moments and noting the presence of certain variables: company names, location(s), people involved, etc. A tip here: You could also create closed-ended questions, which you can later filter your data by—essentially asking the participants to code the data themselves.

Descriptive tagging is important, especially for slicing, dicing, and some exporting options such as crosstabs (which we'll cover later). When you have a tight time window, these might be the only form of tags you use.

You know your questions, have some identifiers in mind that you're hunting for, and you tag accordingly: How many moments mention the word "pain" or "frustrated"? What's going on when users are engaging in X behavior?
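If your open-ended responses land in a spreadsheet or CSV export, this kind of keyword-driven descriptive tagging can be sketched in a few lines of pandas. This is just a minimal illustration, not a prescribed workflow—the column names and sample data below are hypothetical.

```python
# A minimal sketch of descriptive tagging by keyword, assuming moments have been
# exported with a free-text "response" column (column names are hypothetical).
import pandas as pd

moments = pd.DataFrame({
    "participant": ["P1", "P2", "P2", "P3"],
    "response": [
        "Checkout was a real pain on mobile.",
        "Loved the new search filters.",
        "I got frustrated when the coupon would not apply.",
        "Smooth experience overall.",
    ],
})

# Flag moments that mention friction language, then count and inspect them.
friction = moments["response"].str.contains(r"\bpain\b|frustrat", case=False, regex=True)
moments["friction_tag"] = friction
print(f"{friction.sum()} of {len(moments)} moments mention pain or frustration")
print(moments.loc[friction, ["participant", "response"]])
```

A keyword flag like this only gets you the "where"—the "why" still comes from reading the moments it surfaces.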

More dynamic, nuanced, and ultimately needle-moving are the insights derived from thematic tagging. Here, you're taking the combination of certain prompts—or even a holistic view of an entire moment—to answer more complex questions:

    "What does 'family' mean to someone in this moment?" or "What emotion(s) are taking place here that might be causing that affect?"

    It’s these tags—in combination with your descriptive ones, too—that create the framework(s) for understanding, unpacking, and communicating the insights found within your data. And it is these thematic tags that take the real time. Having a generous tagging window means opportunities to refine and recalibrate tags and codes.


    Theme/framework development (~2 weeks)

    Framework development co-occurs with tagging and coding, and it’s where any human-centered researcher, designer, or thinker earns their stripes.

When we develop themes and frameworks, we're essentially taking a bundle of tags or codes and making a meaningful narrative or story out of a set of insights. This stage of the process is usually what we mean when we talk about "synthesis"—as compared to analysis, which is the creation and application of tags or codes (as well as notes).

    This step, much like tagging alongside it, requires time to get right. More time means more back-and-forth between teammates, stakeholders, and even yourself (as you gut-check your previous notes or thoughts).

    Here, analysts stress the importance of "pressure-testing.” By that they mean taking a semi-formed framework and throwing it at the data. What happens when the code or tag list is applied to all data? Are the tags mutually exclusive (very little to no overlap) and exhaustive (everything that can be coded, is)? If not, it's back to the drawing board for more discussion and refinement.
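If your tagged moments live in a table, some of that pressure-testing can be roughed out quickly—for example, checking how many moments are still untagged (exhaustiveness) and how often moments carry multiple tags (overlap). A small sketch, with hypothetical tag names and data:

```python
# A rough sketch of "pressure-testing" a tag list, assuming each moment has
# already been tagged (tag names and data below are hypothetical).
import pandas as pd

tagged = pd.DataFrame({
    "moment_id": [1, 2, 3, 4, 5],
    "tags": [["price_anxiety"], ["trust", "price_anxiety"], [], ["delight"], ["trust"]],
})

tagged["tag_count"] = tagged["tags"].apply(len)

# Exhaustive? Every moment should carry at least one tag.
untagged = tagged[tagged["tag_count"] == 0]
print(f"{len(untagged)} untagged moments (candidates for a new or broader tag)")

# Mutually exclusive (roughly)? Frequent multi-tagging hints at overlapping definitions.
overlapping = tagged[tagged["tag_count"] > 1]
print(f"{len(overlapping)} moments carry multiple tags — review those tag definitions")
```

Counts like these don't replace the discussion; they just point to where the tag list needs refinement.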

This back-and-forth is where failure presents the opportunity to better situate and narrate the insights your tags are surfacing. With each iteration, the story should gain clarity and consistency, and produce more impactful (i.e., usable) insights.

    This is the reason qual has ascended in importance within innovative companies: rich data can—with enough time—produce rich, strategy-shifting results and insights. Ideally, the framework connects the insights wrought from the tags and codes and speaks to the impact more broadly.


Crosstabs (1 week)

Crosstabs bring frequencies and quantitative data back into the fold. How does a tag group or framework play out across a certain demographic, segment, or profile? Does it make sense given what you've seen in the data? Are there any surprises?
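Many research tools export crosstabs for you, but if you have tagged moments with a segment attached, a quick one can be built by hand. A minimal sketch, with hypothetical themes and segments:

```python
# A minimal sketch of a qual crosstab with pandas, assuming each tagged moment
# has a participant segment attached (themes and segments are hypothetical).
import pandas as pd

moments = pd.DataFrame({
    "segment": ["new user", "new user", "power user", "power user", "new user"],
    "theme":   ["price_anxiety", "trust", "trust", "delight", "price_anxiety"],
})

# How does each theme play out across segments?
print(pd.crosstab(moments["theme"], moments["segment"], margins=True))
```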

    Crosstabs serve as another check on the reliability and validity of the framework you're constructing. Importantly, surprises at this stage often produce the most meaningful insights worthy of "crown-jewel" deck slides—the ones your clients or stakeholders cling to, print out, take photos of, and share widely.

    Moreover, scrutinizing crosstabs may send you back to the data for more tagging and theme refinement—a worthy backtrack to take when you have the time.


    Narrative structuring (1-2 weeks)

    In what format will you tell this story?

    When we have time to analyze our data, we have time to frame it effectively. Sometimes that manifests itself in a particularly creative deliverable—like a comic, storyboard, "usability movie night," or even Choose Your Own Adventure story.

More often than not, though, we'll at least be delivering a presentation and a report. Some rapid-fire rules of thumb for making these deliverables resonate:

✔ Strive for one insight per slide of your deck or page of your report

    Literal and figurative whitespace is your friend.

    ✔ Support your insights with diverse data

    For example, a layer of validating quant, a video reel, particularly resonant quotes, etc.

    ✔ Analyze your audience

    A group of executives will be moved and influenced by something quite different than a more technical, back-end-focused audience. Even the most robust, nuanced story can fall flat if not packaged appropriately for your audience.

    ✔ Entertain

Regardless of who you're speaking to, most sensing humans are moved by emotive pictures or video reels. Anything entertaining, like a short animation, usually has attention-grabbing, and attention-keeping, power.

    ✔ Answer the question

Your format should always hint at or reflect the driving or key questions, even if the insight resulted from an outlier or unexpected finding.

    Weaving the circuitous trail back to the motivating question will keep your audience on-board and showcase the value of the time it took you and your team to create the deliverable in the first place.


      Analysis in an afternoon

      So say you don’t have weeks to immerse. Maybe you have a few days. Maybe you only have a few hours. Your teams need to make a call fast—and you’re scrambling to get them data that’ll help them make that call effectively.

      At this point, some companies throw their hands up, and turn back to the quant data.

      But qual is still valuable—and qual analysis is still possible.

Here are a few tactics you can use to tighten your insight-turnaround window—without sacrificing your insight integrity.

✔ Use closed-ended questions for participant "self-tagging"

      Although it's a foundational part of qualitative data analysis, tagging or coding may not fit with your delivery deadline.

If your study is longitudinal, or contains a layer of data beyond a one-off interview, programming closed-ended single- or multiple-select questions into your study can be a time-saving workaround.

      In this case, rather than having to tag your data after the fact, you’ll be able to quickly filter for those moments of most interest to your stakeholders and their questions.

      Ask yourself before you start: what can be standardized in a closed-ended question? Parts of a mobile app? Locations when the moment occurs? Other factors or individuals that impact their experience?

A one-two punch you can use involves pairing a short open-ended question with a closed-ended one: "Who else is involved in this moment?" followed by "In a sentence or two, how was this person involved?"

This allows your participants to more quickly capture the moment and provide context around their responses—answering the "But why?!"

      ✔ Start with the best-case-deliverable and build backwards

What does your final deliverable need to look like? What kinds of data visualizations will most persuade and compel your stakeholders? Word clouds? Themes with video reels? Quotes? Jot down your top three and reverse-engineer your project design to capture these kinds of data specifically.

For example, a question like "Summarize this moment in three adjectives" offers great word cloud fodder; "In a 30-second video, show us this step" creates a reel-ready set of videos; and "From 1 (not at all) to 10 (very much), how intense is this emotion?" gets you easy-to-summarize quant.
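Once responses like those come back, the summaries almost build themselves. A quick sketch of what that might look like, assuming a simple export with the hypothetical columns below:

```python
# A quick sketch of turning those prompts into deliverable-ready summaries,
# assuming responses are exported with the (hypothetical) columns below.
from collections import Counter
import pandas as pd

responses = pd.DataFrame({
    "adjectives": ["fast, simple, fun", "confusing, slow, fun", "simple, fun, clear"],
    "intensity": [7, 3, 8],  # 1 = not at all, 10 = very much
})

# Word-cloud fodder: frequency of each adjective across participants.
words = Counter(
    w.strip().lower()
    for cell in responses["adjectives"]
    for w in cell.split(",")
)
print(words.most_common(5))

# Easy-to-summarize quant: average emotional intensity.
print(f"Mean intensity: {responses['intensity'].mean():.1f} / 10")
```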

Whatever the project, ask yourself what deliverables you want to create and then ensure your project design(s) meet those needs. It saves you time and hassle on the back end.

      Additionally, make sure you’re being realistic with how much data you'll need and how long you'll want your research to field. Do you need four separate research activities, or will two capture the moments that are most essential?

      It's always hard to know how participants will respond to prompts and activities, but run a quick (max a few hours) trial of your project with colleagues, having them submit answers to your questions. How much do you get for each entry?

      Now imagine that multiplied by your sample and days/weeks in-field.

      Qual can take time, and often benefits from longitudinal approaches, but some projects can be just as impactful in half the window, once you work through the data you’ll get back.

      ✔ Don’t think of analysis as “the last step”

If you're on a tight timeline, getting the data in is your first objective. To ease the analysis and synthesis processes, start observing your data as it rolls in.

      Bookmark moments for later review, lightly tag standout videos or submissions, send comments and questions to participants, or even coach participants on their answers (i.e., This was great, keep up the good work!).

This both starts to "clean" the data as it's coming in and gets you familiar with it. That way, when it's time to make sense of it as a collection, you've already got a head start.

If you have participants working through a sequence of activities/tasks—preparing to shop, visiting a store, and making a purchase, for example—home in on a single participant's journey.

      Choose one you feel will be consistent with your results, and use them to get a pre-read for what other participants may show and say. What are the pain points, what's delighting this person, and what are they missing? You're building a list of things to be thinking about as more data roll in, giving you an edge and making your analysis sharper.

Alternatively, you may have success exporting bulk data into a single spreadsheet—with tasks, activities, or parts copied and pasted into different tabs—and start pulling quotes, averages, and findings from a single source.
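If you'd rather not copy and paste by hand, a tab-per-activity workbook can be assembled in a few lines. A minimal sketch, assuming your exports already load as one table per task (the file and column names here are hypothetical):

```python
# One way to pull everything into a single spreadsheet with a tab per activity,
# assuming each task's export loads as its own DataFrame (names are hypothetical).
# Writing .xlsx this way requires the openpyxl package.
import pandas as pd

prep = pd.DataFrame({"participant": ["P1", "P2"],
                     "quote": ["Made a list first", "Checked prices online"]})
visit = pd.DataFrame({"participant": ["P1", "P2"],
                      "quote": ["Couldn't find the aisle", "Staff helped quickly"]})
purchase = pd.DataFrame({"participant": ["P1", "P2"],
                         "quote": ["Coupon failed at checkout", "Paid with the app"]})

with pd.ExcelWriter("study_export.xlsx") as writer:
    prep.to_excel(writer, sheet_name="prep", index=False)
    visit.to_excel(writer, sheet_name="store_visit", index=False)
    purchase.to_excel(writer, sheet_name="purchase", index=False)
```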

      ✔ Take it from the top-down

Analysis can usually take two forms. The first involves starting with hypotheses and research questions and searching for data to support, refute, or otherwise answer them. The other requires embedding and immersing yourself in the data—going "bottom-up" and allowing your findings to drive the conclusions you make.

      When time is of the essence, hunting for key data points that can get you the critical answers is not only pragmatic, but time-saving.

      In practice, this means writing out your big questions, identifying the prompts or moments that should/shouldn’t help you answer these, and then filtering, digging, and looking for those moments. This might take the form of filtering on a certain closed-ended question (remember, these are life-savers!).

      For example, if you’re looking for friction or pain points, you may filter based on the question, "Is this a hit, miss, or wish moment you're showing?" and just look at the "misses."

This is another point where your prompts and design more broadly should match your question and hypothesis needs. If you've programmed your study to match your needed outputs, analysis feels more intuitive, because your data start answering the questions you have and point toward conclusions on your hypotheses.


      Summing it up

      Qualitative research's merit lies in the nuance, depth, and perspective it offers. That merit, importantly, is only as useful as the approach to analysis taken. Whether you have hours or weeks, digging into the data, making sense of it with themes, and linking those themes with a framework is your mission.

      With these tactics and strategies, it should be a little less daunting and a lot more enjoyable: analysis is the fun part of qualitative research. Doing it right provides you with the biggest opportunity to advocate for, tell the story of, and amplify the voice of your users.
