Most analysis is hyper goal-oriented. You're seeking to confirm a suspicion or hypothesis for your stakeholders. You don't always have the time to dig into the data and move it around—and instead are left scratching at the surface.
With more time comes more true immersion. This part of the process becomes a leisurely (but not aimless) stroll. How you immerse will largely depend on your data composition.
Some folks stay digital: dropping video, audio, or images into a highlight reel, and letting the moments wash over them. Sometimes printing the data into a physical artefact—before reading, grouping, moving and diagramming it—can help inspire some initial hunches. Other times, you might get the best creative yield from dropping snippets, themes, and ideas into a doc or a spreadsheet.
Whichever route you take, themes, patterns, questions, and notes about participants invariably bubble up when one has that much exposure to the data. This just isn't possible when you only have an afternoon to soak up dozens (or hundreds) of discrete moments.
To get a strategic jumpstart, begin to filter your data. If your research tool allows for it, sort your responses by participant, question, demographic, or even date (if the study is longitudinal), and again carefully examine each moment or datum, one by one.
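As a rough sketch of that sorting-and-filtering pass, imagine your tool's export as a list of response records. The field names and sample responses below are assumptions for illustration, not the schema of any particular research platform:

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical export of study responses; field names are invented.
responses = [
    {"participant": "P3", "question": "Q1", "date": "2024-03-02", "text": "Checkout felt slow."},
    {"participant": "P1", "question": "Q2", "date": "2024-03-01", "text": "Loved the store layout."},
    {"participant": "P1", "question": "Q1", "date": "2024-03-03", "text": "Coupon would not scan."},
]

# Sort by participant, then date, so each person's journey reads in order.
ordered = sorted(responses, key=itemgetter("participant", "date"))

# Group for one-by-one review, participant by participant.
by_participant = {
    person: list(moments)
    for person, moments in groupby(ordered, key=itemgetter("participant"))
}
```

The same pattern works for sorting by question or demographic: swap the key you sort and group on.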
During immersion, you'll naturally begin spotting trends, patterns, and maybe even the edges of a story that helps answer your questions.
Tagging (or coding) is where you begin to note what's happening in/with the data, and how. With more time at your disposal, tagging should take two forms: descriptive and thematic.
Descriptive tagging is about indexing moments and noting the presence of certain variables: company names, location(s), people involved, etc. A tip here: You could also create closed-ended questions, which you can later filter your data by—essentially asking the participants to code the data themselves.
Descriptive tagging is important, especially for slicing, dicing, and some exporting options such as crosstabs (which we'll cover later). When you have a tight time window, these might be the only form of tags used. You know your questions, have some identifiers in mind that you're hunting for, and you tag accordingly: How many moments mention the word "pain" or "frustrated"? What's going on when users are engaging in X behavior?
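A keyword-driven descriptive pass like the one above can be sketched in a few lines. The tag list and moments here are illustrative, not a prescribed taxonomy:

```python
# Hypothetical keyword-to-tag mapping for a descriptive tagging pass.
KEYWORD_TAGS = {
    "pain": ["pain", "hurt"],
    "frustration": ["frustrated", "annoying"],
}

moments = [
    "The checkout flow left me so frustrated.",
    "No pain points here, honestly.",
    "Setup was annoying but quick.",
]

def tag_moment(text):
    """Return the descriptive tags whose keywords appear in the text."""
    lowered = text.lower()
    return sorted(
        tag for tag, words in KEYWORD_TAGS.items()
        if any(word in lowered for word in words)
    )

tagged = {moment: tag_moment(moment) for moment in moments}
```

Real tagging needs human judgment ("no pain points" is a counter-example a keyword match will mis-tag), which is exactly why this pass is a first filter, not a finished analysis.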
More dynamic, nuanced, and ultimately needle-moving are the insights derived from thematic tagging. Here, you're taking the combination of certain prompts, or even a holistic view of an entire moment, to answer more complex questions: "What does 'family' mean to someone in this moment?" or "What emotion(s) are taking place here that might be causing that affect?" It's these tags—in combination with your descriptive ones—that create the framework(s) for understanding, unpacking, and communicating the insights found within your data. And it is these thematic tags that take the real time. Having a generous tagging window means opportunities to refine and recalibrate tags and codes.
Framework development co-occurs with tagging and coding, and it’s where any human-centered researcher, designer, or thinker earns their stripes.
When we develop themes and frameworks, we're essentially taking a bundle of tags or codes and making a meaningful narrative or story out of a set of insights. This stage of the process is usually what we talk about when we discuss "synthesis." Analysis, by comparison, is the creation and application of tags or codes (as well as notes).
This step, much like tagging alongside it, requires time to get right. More time means more back-and-forth between teammates, stakeholders, and even yourself (as you gut-check your previous notes or thoughts).
Here, analysts stress the importance of "pressure-testing." By that they mean taking a semi-formed framework and throwing it at the data. What happens when the code or tag list is applied to all data? Are the tags mutually exclusive (very little to no overlap) and exhaustive (everything that can be coded, is)? If not, it's back to the drawing board for more discussion and refinement.
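A rough pressure test of a draft code list can even be mechanized: apply the codes to every moment, then flag gaps (uncoded moments) and pile-ups (heavy overlap). The data, codes, and overlap tolerance below are all invented for illustration:

```python
# Moments mapped to the draft codes applied to each; all hypothetical.
coded = {
    "m1": {"pricing", "trust"},   # overlap: two codes on one moment
    "m2": {"pricing"},
    "m3": set(),                  # gap: the list is not exhaustive
    "m4": {"trust"},
}

# Gaps: moments the draft code list fails to cover at all.
uncoded = [m for m, codes in coded.items() if not codes]

# Pile-ups: moments where codes overlap (more than one applies).
overlapping = [m for m, codes in coded.items() if len(codes) > 1]

exhaustive = not uncoded
# The 25% tolerance is a judgment call, not a standard threshold.
mostly_exclusive = len(overlapping) / len(coded) <= 0.25
```

A failing check sends you back to the drawing board: split an overloaded code, or add one for the uncovered moments.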
This back-and-forth is where failure presents the opportunity to better situate and narrate the insights your tags are surfacing. With each iteration, the story should gain clarity and consistency, and produce more impactful (i.e., usable) insights. This is the reason qual has ascended in importance within innovative companies: rich data can—with enough time—produce rich, strategy-shifting results and insights. Ideally, the framework connects the insights wrought from the tags and codes and speaks to the impact more broadly.
Crosstabs bring frequencies and quantitative data back into the fold. How does the tag group or framework play out across a certain demographic, segment, or profile? Does it make sense given what you've seen in the data? Are there any surprises? Crosstabs serve as another check on the reliability and validity of the framework you're constructing. Importantly, surprises at this stage often produce the most meaningful insights worthy of "crown-jewel" deck slides—the ones your clients or stakeholders cling to, print out, take photos of, and share widely.
Moreover, scrutinizing crosstabs may send you back to the data for more tagging and theme refinement—a worthy backtrack to take when you have the time.
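Stripped to its essence, a crosstab is just a count of tag-by-segment pairs. A minimal sketch, with invented segments and tags:

```python
from collections import Counter

# Each tagged moment carries the segment of the participant it came from.
# Segments and tags here are hypothetical.
moments = [
    {"segment": "new customer", "tag": "pricing confusion"},
    {"segment": "new customer", "tag": "pricing confusion"},
    {"segment": "returning",    "tag": "loyalty perks"},
    {"segment": "returning",    "tag": "pricing confusion"},
]

# Count each (segment, tag) pair: the cells of the crosstab.
crosstab = Counter((m["segment"], m["tag"]) for m in moments)
```

A surprise would be a cell far larger or smaller than the framework predicts, e.g. "pricing confusion" showing up among returning customers too, which sends you back to the data.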
In what format will you tell this story?
When we have time to analyze our data, we have time to frame it effectively. Sometimes that manifests itself in a particularly creative deliverable—like a comic, storyboard, "usability movie night," or even Choose Your Own Adventure story.
More often than not, though, we'll at least be delivering a presentation and a report. Some rapid-fire rules of thumb for making these deliverables resonate:
- Strive for one insight per slide of your deck or page of your report. Literal and figurative whitespace is your friend.
- Support your insights with diverse data. For example, a layer of validating quant, a video reel, particularly resonant quotes, etc.
- Analyze your audience. A group of executives will be moved and influenced by something quite different than a more technical, back-end-focused audience. Even the most robust, nuanced story can fall flat if not packaged appropriately for your audience.
- Entertain. Regardless of who you're speaking to, most sensing humans are moved by emotive pictures or video reels. Anything entertaining, like a short animation, usually has attention-grabbing and attention-keeping power.
- Answer the question. Your format should always hint at or reflect the driving or key questions, even if the insight resulted from an outlier or unexpected finding. Weaving the circuitous trail back to the motivating question will keep your audience on board and showcase the value of the time it took you and your team to create the deliverable in the first place.
So say you don’t have weeks to immerse. Maybe you have a few days. Maybe you only have a few hours. Your teams need to make a call fast—and you’re scrambling to get them data that’ll help them make that call effectively.
At this point, some companies throw their hands up, and turn back to the quant data.
But qual is still valuable—and qual analysis is still possible.
Here are a few tactics you can use to tighten your insight-turnaround window—without sacrificing your insight integrity.
Use closed-ended questions for participant "self-tagging"
Although it's a foundational part of qualitative data analysis, tagging or coding may not fit with your delivery deadline.
If your study is longitudinal, or contains a layer of data above a one-off interview, programming closed-ended, single, or multiple-select questions into your study can be a time-saving workaround. In this case, rather than having to tag your data after the fact, you’ll be able to quickly filter for those moments of most interest to your stakeholders and their questions.
Ask yourself before you start: what can be standardized in a closed-ended question? Parts of a mobile app? Locations when the moment occurs? Other factors or individuals that impact their experience?
A one-two-punch you can use involves pairing a short open-ended question with a closed-ended one: "Who else is involved in this moment?" followed by "In a sentence or two, how was this person involved?". It allows your participants to more quickly capture the moment and provide context—answering the "But why!?"—around their responses.
Start with the best-case-deliverable and build backwards
What does your final deliverable need to look like? What kinds of data visualizations will most persuade and compel your stakeholders? Word clouds? Themes with video reels? Quotes? Jot down your top three and reverse engineer your project design to capture these kinds of data specifically.
For example, a question like "Summarize this moment in three adjectives" offers great word cloud fodder; "In a 30-second video, show us this step" creates a reel-ready set of videos; and "On a scale of 1 (not at all) to 10 (very much), how intense is this emotion?" gets you easy-to-summarize quant.
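Those deliverable-first prompts feed straight into summaries with almost no processing. A sketch with invented responses, showing the adjective answers rolling up into word-cloud counts and the 1-10 ratings collapsing to an average:

```python
from collections import Counter
from statistics import mean

# Hypothetical responses to "Summarize this moment in three adjectives"
# and "How intense is this emotion, 1-10?"
adjectives = [
    ["fast", "simple", "cold"],
    ["simple", "bright", "fast"],
    ["fast", "confusing", "loud"],
]
intensity_ratings = [7, 9, 4]

# Word-cloud fodder: frequency of each adjective across all participants.
word_cloud_counts = Counter(word for answer in adjectives for word in answer)

# Easy-to-summarize quant: the average emotional intensity.
avg_intensity = mean(intensity_ratings)
```

The most frequent words size the word cloud; the average (or a distribution) becomes the validating quant layer on an insight slide.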
Whatever the project, ask yourself what deliverables you want to create and then ensure your project design(s) meet those needs. It saves time and hassle on the back end.
Additionally, make sure you’re being realistic with how much data you'll need and how long you'll want your research to field. Do you need four separate research activities, or will two capture the moments that are most essential?
It's always hard to know how participants will respond to prompts and activities, but run a quick (max a few hours) trial of your project with colleagues, having them submit answers to your questions. How much do you get for each entry?
Now imagine that multiplied by your sample and days/weeks in-field.
Qual can take time, and often benefits from longitudinal approaches, but some projects can be just as impactful in half the window, once you work through the data you’ll get back.
Don’t think of analysis as “the last step”
If you're on a tight timeline, then getting the data in is your first primary objective. To ease the analysis and synthesis processes, start observing your data as it rolls in. Bookmark moments for later review, lightly tag standout videos or submissions, send comments and questions to participants, or even coach participants on their answers (i.e., This was great, keep up the good work!).
This both starts to "clean" the data as it's coming in and gets you familiar with it. That way, when it's time to make sense of it as a collection, you've already got a head start.
If you have participants working through a sequence of activities/tasks—preparing to shop, visiting a store, and making a purchase, for example—home in on a single participant's journey.
Choose a participant you expect to be representative of your sample, and use their journey to get a pre-read on what other participants may show and say. What are the pain points, what's delighting this person, and what are they missing? You're building a list of things to be thinking about as more data roll in, giving you an edge and making your analysis sharper.
Alternatively, you may have success exporting bulk data into a single spreadsheet—with tasks, activities, or parts copied/pasted into different tabs—and start pulling quotes, averages, and findings from a single source.
Take it from the top-down
Analysis usually takes one of two forms. The first involves starting with hypotheses and research questions and searching for data to support, refute, or otherwise answer them. The other requires embedding and immersing yourself in the data—going "bottom-up" and allowing your findings to drive the conclusions made.
When time is of the essence, hunting for key data points that can get you the critical answers is not only pragmatic, but time-saving.
In practice, this means writing out your big questions, identifying the prompts or moments that should/shouldn’t help you answer these, and then filtering, digging, and looking for those moments. This might take the form of filtering on a certain closed-ended question (remember, these are life-savers!).
For example, if you’re looking for friction or pain points, you may filter based on the question, "Is this a hit, miss, or wish moment you're showing?" and just look at the "misses."
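That top-down filter is a one-liner once the closed-ended answer is attached to each moment. The "hit / miss / wish" field and sample moments below are assumptions for illustration:

```python
# Hypothetical moments with a closed-ended "hit / miss / wish" answer
# programmed into the study.
moments = [
    {"id": 1, "type": "hit",  "text": "Found the item instantly."},
    {"id": 2, "type": "miss", "text": "Search returned nothing useful."},
    {"id": 3, "type": "wish", "text": "Wish I could save a cart for later."},
    {"id": 4, "type": "miss", "text": "Coupon code failed at checkout."},
]

# Hunting for friction: keep only the "misses."
misses = [m for m in moments if m["type"] == "miss"]
```

Because participants already coded the moment, the friction list is ready the instant fielding closes, with no tagging pass required.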
This is another point where your prompts and design more broadly should match your question and hypothesis needs. If you've programmed to match your needed outputs, analysis feels more intuitive, because your data just start answering the questions you have and lead toward conclusions on hypotheses.
Summing it up:
Qualitative research's merit lies in the nuance, depth, and perspective it offers. That merit, importantly, is only as useful as the approach to analysis taken. Whether you have hours or weeks, digging into the data, making sense of it with themes, and linking those themes with a framework is your mission.
With these tactics and strategies, it should be a little less daunting and a lot more enjoyable: analysis is the fun part of qualitative research. Doing it right provides you with the biggest opportunity to advocate for, tell the story of, and amplify the voice of your users.