April 8, 2025
When I first started doing user interviews, I created a long list of questions to ask my participants. Honestly, the list looked more like I was ready to enter an interrogation than an interview.
And sometimes, that's what it felt like. I would ask question after question, reading down my list, trying to get through everything in the 30, 45, or 60 minutes I had with the person.
I would come out of the interview feeling okay, especially if I got to all my questions.
But then the road started to get bumpy. When I got to analysis, I realized that I had some answers to my questions, but they were shallow answers—some yes's or no's sprinkled with a limited understanding of why. These answers would never amount to insights, let alone ones that were actionable by my team.
I started to dread interviews and the analysis that followed. My questions felt lackluster and useless, but I didn't know how to ask them differently.
From the outside, it seemed like I was asking all the right questions, but it was far from that. The answers I received were not only shallow, but many were unreliable.
It wasn't until later in my career that I learned the superpower of qualitative research and interviews: stories. Rich data comes from stories, actual recollections of a participant's experience doing something, rather than from a yes or no or a list of the worst moments they've had.
My question style changed once I understood that stories were crucial for getting deep and valuable insights. And with that, my entire approach to interviews shifted.
This approach to research gave my teams more profound findings and insights, leading to action and impact. Moving away from answering yes/no questions toward strategically understanding users was also hugely fulfilling. My research went from having project-level impact to having organizational impact.
How do we go about making that shift?
First and foremost, design and research goals for your projects are critical. By thinking through and creating a shared understanding of research goals, we ensure that we focus the project on the information we need to gather.
Without goals, a project can go rogue. You could ask different participants different questions, get lost down a random path, and then end up with 10 completely different interviews. I've been there, and it’s incredibly disheartening.
Besides paving the way for focus, goals accomplish even more: they help us craft relevant interview questions.
When I was beginning to get a hold of this new way of forming questions, I used my goals to inform my interview questions.
Let's say we work at a company that sells HR software, and we're trying to understand how companies choose their HR software.
Our goals might look like this:
Based on these goals, I formulate interview questions that help me address them. At the end of the project, I want to go to my team and give them information that helps them understand each goal better.
So, how do we formulate interview questions?
Unfortunately, there is no surefire formula for excellent interview questions. If someone could figure that out, they would become famous.
However, there are a few things you can do to help ensure your questions lead to rich stories and in-depth qualitative data.
TEDW is a wonderful framework that helps you avoid yes/no questions and gives you phrasing that opens up your questions.
The acronym stands for:
Tell me about...
Explain...
Describe...
Walk me through...
The wording of these questions prompts participants to give extra detail. Let's look at the difference between a question I commonly see asked versus a TEDW question:
"What are some frustrations you've encountered while using our product?"
There are a few things that aren't ideal with this question:
So, let's change it: "Describe the last time you experienced a frustration with our product. What happened?"
Why is this a better version?
V1: "When was the last time you used the product?"
Improved: "Walk me through the last time you used the product, starting from the beginning?"
V1: "How did you decide you needed HR software?"
Improved: "Talk me through your decision-making process when finding HR software, starting from when you felt you needed it?"
V1: "What did you feel during the demo?"
Improved: "Explain what you were feeling during the demo."
We want to make our language as neutral as possible and avoid leading the participant toward a particular feeling or answer. That isn't always achievable, but we should strive for it.
When we combine neutral language with TEDW phrasing, the result is a great, unbiased question.
V1: "What do you like about the product?"
Improved: "Describe how you feel about the product."
By taking out the subjectivity ("like"), we allow participants to assign the most pertinent feeling. By swapping out "what do you" with "describe how," we ask for more specificity and depth.
V1: "What makes this product helpful?"
Improved: "Describe how this product impacts your day-to-day life."
V1: "What do you expect to gain from using this product?"
Improved: "Explain the top reason you use this product."
Sometimes neutralizing your language isn't possible (see examples below), but do your best!
One of my biggest pet peeves is seeing future-based questions in interview guides. Humans are notoriously horrendous at predicting the future—especially when it has to do with using credit cards. The best way to understand future behavior is to base it on the past.
Instead of trying to get people to predict how they might feel, what they might use, and what they might purchase, we can look to the past to form our questions—using open-ended language from above.
V1: "Would you buy this?"
Improved: "Describe why you bought something similar to this in the past?"
V1: "Would you use this feature?"
Improved: "Describe a time you used something similar."
V1: "What would you improve?"
Improved: "Tell me about a better tool/experience and why it’s better."
There are always competitors or other ways people have completed tasks in the past, so even when conducting innovative research, there is always something to compare to!
Usually, people try to focus on the positives, but user research and design are fields that focus on the negative. Although we want to neutralize our language as much as possible, sometimes we need to ask about specifics, and those specifics should lean toward the negative.
Knowing what people like about our product, or what is good about it, isn't that helpful. Improvement and action come from painful experiences and what's going wrong. Since we have limited time with participants, I recommend focusing more on what isn't working in the product than on what is.
V1: "What do you like about this page?"
Improved: "Describe the most confusing part of this page."
V1: "How clear was the information on this page?"
Improved: "Explain what information is missing on this page."
V1: "What do you find easy about the task you just performed?"
Improved: "Explain what felt the most difficult about that task."
Improved: "Explain how you felt after completing that task."
The saying "practice makes perfect" is somewhat legitimate, but we want to ensure you practice the proper techniques to perfect your craft! Try to be intentional and thoughtful when creating interview questions to understand if each meets your goal and is as open-ended and objective as possible!