Group interviews or focus groups are often a poor fit for a project. But when your users' real-world environment calls for a group setting, this method is a powerful tool.
Words by Laurel Brown, Visuals by Danbee Kim
A few years ago I was tasked with designing a playtest to assess the user experience of a vintage couch co-op video game. The publisher was considering a remaster of this old classic and wanted to do some exploratory testing to see if there were any wrinkles the new version could iron out or highlights that could be further enhanced.
I was thrilled to take on this challenge. In general, I am a fan of old things: vintage clothing, antiques, and old technology. After the initial excitement of having an opportunity to combine my work with my love of all things old, I realized the design and setup of this test was going to be quite different from the many tests I had conducted thus far, and not just because it was an old game.
Up to this point in my career, I had mainly focused on researching single-player experiences, with the occasional remote co-op session. I hadn’t yet tested a couch co-op game though. I realized this might be a use case for some version of a method I had long dismissed as useless: the focus group.
The issue with focus groups
Focus groups are a divisive concept among researchers. Erika Hall’s Just Enough Research includes a section bluntly titled “Focus Groups: Just Say No.”
The main reason this method gets a bad rap is that humans simply act differently in groups than they do one-on-one: our behavior changes when we are around others.
In a group, we tend to rock the boat as little as possible. We want harmony. We might agree to things we don’t really agree with.
The risk of the bandwagon effect arises in groups. Most people don’t want to be the odd one out, so they assimilate. There is also the risk of courtesy bias: people in a group generally want to be agreeable. Folks may be more inclined to give you their honest opinion one-on-one, rather than expressing a potentially unpopular opinion among others.
These factors can lead to biased, inaccurate findings. When testing single-player games, I always ask participants not to discuss the game or help one another because I want their individual feedback.
It’s important to know where and how they struggle individually, so we can create solutions to mitigate those unforeseen pain points. Conducting a focus group for a single-player experience would unnecessarily increase the risk of gathering flawed data. One player might give away the answer to a puzzle another player struggles to solve, and I would lose a potentially helpful data point.
While focus groups are oftentimes a poor fit, I do think group interviews have specific use cases. Here is how I differentiate the two: In my estimation, focus groups are unnatural and sterile. Strangers are thrown into a strange space and expected to be completely honest.
What I describe as group interviews put participants in a real-world-like situation in which they are free to talk aloud about their experiences. They are conversational rather than rigid but maintain ecological validity as much as possible.
Understanding use cases
Before choosing a group interview, ask yourself, “What does the group offer that speaking directly to one person does not?” If the answer is efficiency, you need to rethink your study design.
If you are conducting user research on a product that is generally used by one person at a time (not in a social situation) you should choose a method that doesn’t put them into an unnecessary social situation.
The method you choose should be determined by what makes sense for the product you are studying. Study conditions should mimic real-world situations as closely as possible. They must be conducted with a lot of care, but if done well, the group interview can be a powerful tool.
In this case, it made sense ecologically to test with a group. If an authentic gameplay scenario involves a group of people playing and talking aloud together, it is reasonable to recreate that environment in the test session as closely as possible.
Group testing like this isn’t unique. Lots of games (lots of products, probably) make sense to test this way. My team members had previously tested karaoke games (which I was unfortunately unable to moderate) and we discussed the ecological accuracy of transforming the playtest lab into a nightclub for these tests—we ultimately decided against it.
The example I share here was my first experience with group testing. I designed and conducted the test with the help of one excellent assistant. It served as a humbling reminder that the right tool for the job might be one you thought you’d never use, and to stay open to new experiences!
Setting up the group interview
First, I had to figure out how I would technically set up everything for this study. I needed to create a couch co-op situation in my playtest lab. This included finding a copy of the vintage game, a console to play it on, working controllers, and a splitter to connect them all.
With the help of my beloved IT department, we hooked up the video from the console to the projector, set up speakers, and attached a receiver for the audio output. I also enabled recording of the gameplay video and room audio so I could go back and check what was said against my notes.
I tried my best to invite groups of friends. It stands to reason that folks who are playing a couch co-op game would be friends, right? There is a risk in inviting strangers to play a game together. What if they don’t connect? What if they don’t feel comfortable ribbing each other and competing, or what if they don’t work together to achieve a common goal?
I ended up recruiting two groups of four players. I was only able to get a couple of folks who were already friends for this test, but I got lucky with both groups! They all had incredible chemistry and spoke openly as they were playing.
Creating a mood of ease and comfort is partly the moderator’s responsibility. With all the risk factors in mind, I went about creating a space (both physical and psychological) that would feel comfortable, casual, and inviting for all the participants.
I set up four of the comfiest chairs I could find in a semi-circle facing a large projector screen. Nearby, players had access to a mini-fridge full of drinks and I used semi-dim lamp lights for a living room feel. I sat off to the side taking notes and trying to stay out of the way as much as possible.
Before getting into the game, I chatted with everyone and had them introduce themselves so we all didn’t feel so much like strangers. As a playtest moderator, I generally try to balance making participants feel at ease with also letting them know I’m in control of the study. Here is an example of a “script” I might use when welcoming participants to a study (although it is important not to sound scripted!):
Introduce myself and the playtest overview:
“Hi everyone. Welcome to the playtest! Has anyone ever participated in a playtest here before?”
“Welcome back to those who are returning and welcome to anyone who is here for the first time. Thank you for coming! Your feedback is incredibly valuable to us.”
Make sure everyone understands the non-disclosure agreement they have signed. Go over the basics of what this means.
“Today we will be playing [this game]. We will play until noon, when we’ll have a 1-hour lunch break. At 1pm we will all meet up in the lobby to continue playing this afternoon. [If single-player]: Please don’t discuss the game with one another during the break.”
Tell them whether it’s a single-player or group experience (for a multiplayer test, I would have told them before arrival). Have them pair up and get to know each other if necessary.
Set clear guidelines:
“Please help yourself to drinks and let me know if you need anything. The restroom is [wherever it is]. Please do not use your phones in the lab. You may take phone calls in the lobby. Please do not wander the hallways. Feel free to leave the building at any time, just let me know what you’re up to so we’re all on the same page.”
Explain my role and technical setup:
“My name is Laurel. I’m the person you’ve been emailing! I am a researcher here and I’ll be observing the test today. I am available [via Discord or in the room] if you have any questions or issues. I’ll be asking you to complete surveys [or other methods] at certain times throughout the test. Please let me know if anything is unclear in the surveys! We will be recording your gameplay and eye movements during this test [or any other biometrics]. Before we begin I am going to help everyone calibrate their eyes.”
“You can find the game executable on the desktop. Please open the game to the main menu and test your headphones to make sure your audio is working. Once everyone is ready we can go ahead and start playing!”
Ask if anyone has any questions. After reciting this general script hundreds of times, I still end with, “I’m sure I’m forgetting something but that’s okay! We’ll figure it out as we go!”
With this test, I leaned further into the ease side. I spoke to the participants more like peers than subjects to set the mood I was aiming for. After we got to know each other a bit, I explained how the test would go. I told participants I wanted them to play the game and talk aloud as they went through it. They were free to talk to me, to each other, or just voice their thoughts aloud: “This is so hard!” “Ah, that was so scary!” “What is this? How do I do this?” and so on.
As they spoke aloud, I quickly typed their comments in as much detail as I could. Thanks to spending my entire youth on AOL Instant Messenger, I’m an incredibly fast typist. I made sure to note who said what, and when. I did record the room audio, but I wanted to minimize the amount of time I would need to spend going through recordings to know exactly what was said.
I also noted questions I wanted to come back to when we had our breaks during the day. Before lunch, and before the end of each day, we stopped playing and I asked the participants about things they had said aloud during their playtime. “Andrea, you said at one point that the weapon didn’t feel like it was responsive enough. Can you go into more detail about what was happening there?”
The open dialogue, meticulous notes, and conversational interview style allowed participants to open up, and made the entire experience enjoyable for everyone.
Measuring the impact
Analyzing the data collected in this study was not too dissimilar from other user studies I had done before this. I synthesized the interview and survey data (in addition to talking aloud with participants, I also had them individually complete some surveys for good measure; I like to quantify where I can). I put together a report and presented key findings to stakeholders.
I left this playtest feeling happy with the way it went and confident we had gathered valuable insights from the sessions. We uncovered certain pain points to later be polished in the remaster, and highlighted areas that the players reacted positively to.
It was interesting to see which aspects of the old-school game felt nostalgic vs. clunky. We wanted to maintain the essence of the old game while updating the aspects that players expect from a modern game, such as smooth and responsive controls.
The real impact of this study for me wasn’t the results I presented, but how the experience fortified my user research toolkit. This playtest reminded me to remain open to experiences, stay curious, and be willing to experiment with new methods.
I feel strongly that researchers should have a mixed toolkit, and we need to know which tool is right for the job at hand. Knowing individual methods is important, but to really level up as a researcher you need to be able to identify the appropriate method and apply it accordingly. You also need to be willing to mindfully push the boundaries of the methods you’re comfortable with.
Maintaining integrity is obviously important, but experimenting with techniques can lead you toward new ways of creatively approaching research questions, particularly when your research goals are generative and exploratory.
I maintain my view that focus groups are often not the right method, and that my group interview experience is different from a traditional focus group in all the right ways. I am, of course, open to being wrong :)
Laurel Brown is a research advisor at dscout with 5 years of experience as a games user researcher. She lives on a farm where she collects vintage clothing, makes art, and practices creative coding.