
3 Focus Group Alternatives To Garner Rich, Qual Insights

Are focus groups really the gold standard to gather qual insights? Here are three alternatives for your next project. 

Words by Benjamin Wiedmaier, Visuals by Danbee Kim

Focus groups—or focus group interviews—gather 3 to 12 participants for a guided conversation. Moderators achieve several outcomes with focus groups, from concept feedback and prototype evaluation to exploratory, high-level investigations of identity, attitudes, and perceptions.

Gathering multiple folks together can spur co-creative thought, a cascading effect that captures more data in a more efficient manner.

This method is often a go-to for executives and stakeholders when they need qualitative insight, and as a result, UXRs and human-centered professionals are frequently leveraging this approach.

Although the focus group interview is a powerful tool for unearthing rich insight, it has its disadvantages. Let's explore the advantages, disadvantages, and some alternatives to focus groups for today's remote-UXR moment.


Benefits of the Focus Group Interview

If your research questions hover around a sensitive issue, topic, or industry (e.g., planning for end-of-life care, investigating taboo behaviors like smoking, or other private behavioral choices), a focus group is a way to create a safe space for similar others to share their experiences.

A person may be more likely to disclose their feelings about and experiences with something in a group of folks who have shared backgrounds, offering researchers exclusive, behind-the-scenes access to rich, empathy-building data.

In much the same way, focus groups encourage and spur creativity in responses. Humans are social creatures, and many UXRs leverage this sociability to encourage lively discussions, start co-creation sessions, and conduct impactful evaluative sessions for prototypes, concepts, hands-ons, and the like.

The cascading effect of hearing how another interprets something can inform how a participant does the same, offering a glimpse at coalescing themes more quickly than one might with segmented or single-participant work.

Because of this cascading effect, focus groups often afford researchers great efficiency in collecting data, as theme saturation is reachable with 2-5 focus groups. As long as attention and care are paid to equitable, diverse, and representative samples, researchers are able to cover a wider variety of topics (leveraging participants' natural conversational flows) and feel more confident turning to analysis and synthesis (because of the sheer volume of data focus group interviews can produce).

Few methods are more effective at building stakeholder empathy for the customer. When an executive, manager, or other stakeholder can sit in—either visibly or invisibly—and observe, hearing customers' perceptions, experiences, and attitudes first-hand and in their own language, that experience can serve as a catalyst for impact when UXRs ultimately deliver insights.

As many human-centered thinkers know, even the most beautifully designed deliverable pales in comparison to sitting with and among customers to hear and see as they do. As such, focus groups offer invaluable opportunities to advocate for user-centered practices within an organization.

Disadvantages: Multiples of Multiplicity

Bias and Validity

Many of the non-operational disadvantages of focus group interviews involve the dynamics between participants and the moderator. Specifically, the synchronous group format can introduce invisible—and insight limiting—biases.

Some of these include:

  • Desirability bias where participants want to appear "normal" in the presence of their peers. They may mute or modify their true feelings in order to conform to what they (mis)perceive to be the "right and proper" sentiment for the context. For example, a participant may hold back negative feedback if most others in the focus group are providing positive feedback (think of the classic psychological study in which a person, knowing two lines are not the same size, says that they are when all others in a group state that they are).
  • "Good Participant" bias where a participant answers in the way they believe a researcher wants, which can be exacerbated in a public, group setting where face is threatened (e.g., wanting to appear cordial, accommodating, and otherwise "good"). This can manifest in the opposite direction, too, whereby participants knowingly provide combative or negative feedback, even when it doesn't reflect their truly held attitudes.
  • Groupthink or narrowing variability is another kind of bias where quieter, more reticent, or naturally shyer participants provide data that align with other, more vocal participants'. Focus groups require vigilance and careful moderation to avoid everyone arriving at the same conclusion, thereby oversimplifying what might be a more nuanced issue. Again, variables like face threats can silence some in a group.
  • Marginalization of voices, too, can happen during focus groups, especially when diverse samples—a hallmark of "good" research—comprise one's focus group. Dominant positions in society may feel emboldened to offer feedback in a group setting; similarly, marginalized, less powerful, and/or underrepresented folks might feel stifled or unwelcomed, thereby silencing their valuable feedback.
  • Ecological validity is the extent to which a stimulus or research scenario represents its counterpart in the "real world." In other words, when a person interacts with an experience, are they typically in a group of similar users? Maybe, if they're in an online community and decide to make a purchase or leverage a feature. But more often than not, focus groups represent exotic and rarely experienced situations for the average person, leading them to offer potentially unreliable or unrepresentative feedback. The same limitations affect the in-lab environment and are why more UXRs are turning to in-situ or contextual methods.

Operations and Moderations

Focus groups' dynamics are directly related to the quality of data they produce. A "good" focus group balances voice distribution and stays focused on the topics at hand, while also allowing natural ebbs and flows as participants surface new and unexpected areas of feedback.

This is no small task. Sometimes, participants are too talkative and focus feels impossible, while other times it feels as if a moderator has to "pull" information out of everyone at every turn (which can lead to some of the biases discussed earlier).

Recruitment, especially if done for an in-person focus group, is another challenge. UXRs are often looking for broad and diverse folks to populate their focus groups. But without the time and resources to conduct a national, city-by-city focus group study, one is often left to recruit those from a company's backyard (and although useful, it's a smaller slice of the user or customer base). Show rates are notoriously tenuous, even when incentives are made attractive.

And speaking of show rates, who decides to show up for an in-person focus group can introduce a confounding variable: self-selection bias. In other words, is there some set of characteristics that impels participants who attend and participate in focus groups to do so? And do those characteristics represent a smaller slice of the user base, thereby narrowing insights?

Focus groups can be demanding to attend. UXRs are often left wondering about the composition (beyond demographics alone) of those who choose to participate. As we've covered in other pieces, the outlier voices are often the ones pointing to deep matters of concern for a platform, service, or experience.

Lastly, focus groups are very challenging to conduct remotely, which is what most are transitioning to in the current climate. Whether it's over a conferencing program or a purpose-built focus group tool, the loss of nuance—paralinguistics like tone, nonverbals like body position, and even eye movement—hampers richness of the data, especially when compared to the traditional in-person focus group interview.

Transcription of audio, cutting of video, and the note-taking process can all be convoluted when conducting focus groups remotely, to say nothing of the effects remote formats have on participants' willingness, candor, and quality of responses in these forums.

Alternatives to Focus Groups

One-on-one interviews

Mobile and remote tools are improving the effectiveness and impact of the classic 1:1 interview, and a series of these—when paired with a diverse, equitable, and representative sample—can create similar data to a focus group.

The transition to remote has increased the efficacy of and comfort with remote platforms for both researcher and participant, enabling this to be a go-to method for today's UXR. Paired with purpose-built features like automatic transcription, stim share, and even mobile capabilities, the remote interview can still offer candor and exclusivity, while respecting the time and safety of participants and researchers alike.

To get started: Consider who you'd want to recruit for your "best case" focus group, and attempt to recruit each of those folks to a 1:1 interview. Before starting, practice with a stakeholder or colleague to ensure the right mix of structure (to answer the questions at hand) and flexibility (to allow for creative wonderings that make focus groups so powerful).

Leverage social media and/or other group channels

Go where your customers naturally congregate and perform a digital observation study, a la an ethnographer or anthropologist. Are your customers meeting on Facebook, in Slack communities, or in customer feedback forums? Meet them where they are and observe.

If you need more control over the conversation, invite them to a synchronous, text-based forum anchored to a few questions. Folks can post updates, respond to others' feedback, and upvote in many of the same ways they would in a traditional focus group setting.

To get started: Spend some time investigating where customers hang out, and determine the smartest (and ethical—be mindful of extractive research practices) approach to solicit feedback. Very often, customers are happy to provide responses, and this could be translated into smaller-scale mediated focus groups.

Mobile contextual study

Focus groups often lack the context behind the feedback folks offer, so why not capture the feedback in the moments when it naturally surfaces? Remote user research is booming and with it, powerful platforms to recruit, collect, cull, and analyze meaningful moments from customers.

Although asynchronous, remote contextual or diary studies are flexible enough to capture video and open/closed-ended data that, together, create a triangulatory power only mixed methods approaches can provide.

To get started: Home in on what you "need" to know, and then think about what it might be nice to see. In-situ studies can be very low-tech (like leveraging daily email or text check-ins) or more robust (with multiple activities and question sets). Anchor your work to a media prompt like a short video and, through multiple responses, watch as the data grow into a narrative form akin to those collected in focus groups.


Leveraging focus groups can be a lot like navigating a minefield. Though you could end up at your goal destination and garner good insights, there's a good chance that something will go wrong.

While they're a tried and tested way to attain qualitative insights and to build stakeholder empathy for the user, focus groups are also liable to get you biased results. Add to that the difficulties of running effective focus groups during quarantine and you've got a recipe for a lot of headaches and few insights.

Instead, try an alternative. One-on-one interviews, social media and group channels, and mobile contextual studies are great ways to unearth the same qualitative results.

Ben is the product evangelist at dscout, where he spreads the “good news” of contextual research, helps customers understand how to get the most from dscout, and impersonates everyone in the office. He has a doctorate in communication studies from Arizona State University, studying “nonverbal courtship signals”, a.k.a. flirting. No, he doesn’t have dating advice for you.
