
To Ask or Not to Ask? How to Design Research for Difficult Questions

Tips for matching the question to the method and identifying which inquiries can’t be researched.

Words by Nikki Anderson, Visuals by Thumy Phan

When I first started getting research requests from colleagues, I was elated. They were finally seeing the value that research could bring them, and they were hopping on board.

I was getting an influx of intake documents and was planning how to tackle the requests. The questions ranged from complex to simple, and I wanted to address them all. The only problem was—I wasn't sure how to answer some of them.

Several requests came my way: comparing preference between two designs, testing whether people could understand a concept, and gauging how many people would like feature X or Y.

I tried, and tried, and tried to uncover these answers through my usual methods. But oftentimes my insights fell short, or I felt uncomfortable reporting what I had found.

Yes, five people liked the feature, and two didn't, but I had no idea whether anyone would actually use the new feature once it was out. It felt as though every preference test I ran turned up a near fifty-fifty split, with four people liking the new design and three hating it.

Not being able to answer my colleagues' questions had a detrimental impact on my confidence. I shied away from vague questions I was uncertain about and began to doubt my skills as a researcher.

Imposter syndrome settled onto the couch inside my mind, with a steamy hot chocolate and no intention of leaving.

So before you fall into this trap, it’s important to note...

Not all questions can be answered with research (and that’s okay!)

After my self-esteem took a hit, I realized I wasn't doing any justice to myself or others by shirking my responsibilities as a researcher.

Yes, I felt good conducting my generative research and facilitating workshops, but I couldn't sit in that bubble forever—I had to face my insecurity. My teams were struggling because I couldn't give them direction, so I set out to find a solution.

I turned to my best friend, Google, and started scouring the internet to find others in similar positions. I read all about research questions and went back to the basics.

What are research questions? How do we form them? From a qualitative perspective, what questions can we answer, and what questions can't we answer?

Side note: I later expanded my search into quantitative data, and one book in particular was a huge help: Qualitative Research Design by Joseph A. Maxwell.

After my research, I quickly, and sadly, realized that the solution might sometimes be saying no.

There are questions that qualitative, and even quantitative, research simply cannot answer, and we need to let our teams know when a question is unanswerable.

When we can't answer a question with user research, it doesn't mean we can't do anything or that we're stuck forever. Instead, we can brainstorm other questions we can answer.

For example, I see a lot of questions on preference testing designs. My teams would come to me asking to compare two or three different prototypes across a small sample size.

For a while, I did this. I asked people what they thought and how they would compare or rank the prototypes. Usually, there was a split in "preference," or people couldn't choose and, instead, mashed certain aspects of the prototypes together to create a Frankenstein prototype.

We can't distill preference from small sample sizes. We should be looking at usability, not preference, because no one will use a new design or experience if it isn't usable.

Whenever teams came to me asking for preference, I explained that, instead, we would focus on usability. Additionally, if they asked about small changes, I would point them toward A/B tests.

Through this experience, I learned the lesson of "No, but...", and it was one of the most powerful lessons I've ever encountered.

Categorizing question types

Oftentimes, teams approached me with questions that were, in fact, answerable by research but not always within the realm of usability tests and interviews.

As a result, I made many mistakes: scoping studies incorrectly and trying to answer questions with methods that made no sense, which caused considerable headaches for me and my teams. So, I started to categorize the different question types I received and how best to answer them.

The quantitative bucket

Whenever I see questions starting with:

  • "What..."
  • "How many people..."
  • "Can people..."
  • "Which competitors..."
  • "Does this..."

I put them into a specific bucket. That bucket requires a larger sample size because you are trying to generalize to a larger population. When we need to quantify trends (such as behavior), we turn toward quantitative research methods.

For example, a request I received often was, "What are the reasons people use our app?" Once I see the word "what," I know I need to ask this question across a larger sample size.

When we ask about a widespread attitude, we can't just ask five people and generalize to all of our app users. In this situation, we need more data, so I would send out a survey.

Another common question is, "Can people understand/navigate our new design?" This question was tricky for me. I used my standard usability testing for a long time, but I never felt certain presenting my results from a small sample size.

Yes, the people I recruited could navigate the prototype, but could others? After some time, I discovered other ways that felt more reliable and valid. For this question, I turn toward unmoderated tree testing or looking at product analytics (for a current design).

Methods for the quantitative bucket: surveys, unmoderated tree testing, product analytics, and A/B tests.

The qualitative bucket

Now comes my favorite part: qualitative research! When teams come to you with questions built around the word "why," you are heading into qualitative research territory.

The challenging part here is that not all colleagues will write "why" questions with the word why. This discrepancy means we sometimes have to pick the why out of a subtle question. So, I listen for questions about:

  • Feelings, i.e., how do people feel about something?
  • Attitudes, i.e., what are people's attitudes about a topic?
  • Processes, i.e., what journey are people going through?
  • Mental models, i.e., how do people think about an idea or concept?

Methods for the qualitative bucket: user interviews, moderated usability testing, and other generative research.

The mixed-methods bucket

As our industry evolves, I see more of a push for mixed-methods research, which is exciting. Qualitative and quantitative research used to sit on opposite ends of the ring, each trying to prove the other wrong or less important.

We are finally bringing these two extremes together to inform teams better and answer more complex questions.

"How" questions always confused me because I felt that I could use either qualitative or quantitative research, depending on the question. But now, I get giddy when I see a "how" question come along because that tends to indicate a mixed-methods approach. I no longer have to choose one or the other.

Some good examples of mixed-methods "how" questions are:

  • "How are people using this or that feature?"
  • "How can we improve the app/product/service?"

Two ways of exploring the mixed-methods approach:

  • Exploratory sequential design: This approach starts with qualitative research. Once the study is complete (or when you start seeing trends part way through), you look at quantitative data to help confirm or invalidate the qualitative research through product analytics or a follow-up survey.
  • Explanatory sequential design: In this scenario, you start with quantitative data, either with what you already have (product analytics, reviews, metrics) or by sending out a survey. Once you see significant trends in the survey, you can dive deeper with qualitative research to uncover the "why."

As a disclaimer, some organizations don't have enough users yet to hit a significant sample size with quantitative research. In these situations, I have answered "how" questions with a qualitative study and flagged a necessary follow-up when applicable.

Educating your teams

It took me a long time to feel comfortable with these buckets. They didn't come naturally to me and I formed them over time to help guide myself when teams approached me.

When I started to explore this idea, I was still doing a lot of translating. I took the questions colleagues gave me and tried to turn them into something that would fit into these buckets or go back with my "no, but..." method.

After some time, I decided to teach my teams the art of question writing and the different buckets. I held a few workshops, presenting them with the buckets and practicing by placing questions into them (including an unanswerable one!). Finally, they wrote some of their questions and put them into buckets as well.

Eventually, the majority of my intake documents arrived with research questions already categorized. Of course, some unanswerable and business questions still slid through, but my teams were more thoughtful about the way they proposed a study.

Not only had I reduced complexity for myself, but I had also helped my colleagues ensure they were asking the best questions to get the data they needed most.

Nikki Anderson-Stanier is the founder of User Research Academy and a qualitative researcher with 9 years in the field. She loves solving human problems and petting all the dogs. 


To get even more UXR nuggets, check out her user research membership, follow her on LinkedIn, or subscribe to her Substack.
