
Mix Up Fresh Studies with These 4 Uncommon UXR Methods

Using the same methods over and over again can make your research and its results dull. Consider some new studies to mix it up.

Words by Nikki Anderson-Stanier, Visuals by Nicky Mazur

It’s easy to get stuck in the rut of usability tests and one-on-one interviews. There’s a reason they’re fan favorites: these methods help us answer a lot of the common questions teams ask us.

However, cycling between the two can become dull and unfulfilling. Or, maybe you've encountered a question that these methods can't answer. What then? It's time to try some new methods!

And I don't just mean diary studies or Jobs to be Done; I mean some genuinely obscure techniques. When I first tried some of these methods, I was pretty nervous, but it was worth it. Even if they weren't perfect, I learned something new that I continued iterating on. Being a researcher is all about being flexible and adaptable.

So, what are some of these obscure methods? Let's check them out.

Jump to:

  1. FIDO
  2. Run of Post
  3. Cloze test
  4. True intent study

1. FIDO

FIDO stands for Freehand Interactive Design Online, a method created by Fidelity. FIDO is a framework for participatory design. Participatory design helps us understand participants' needs, goals, and expectations by bringing the participant into the creation process.

The point of FIDO is to add an extra step between conceptual design and prototypes. This allows participants to give feedback before moving forward with usability testing. For this method, you take a concept or early design you’re working on and break it into different components that participants can then use to create their ideal design.

When to use FIDO

Since FIDO is a participatory design framework, it can help with the following objectives:

  • Exploring initial concepts with participants before creating prototypes
  • Understanding what people perceive they need within a design
  • Discovering how people feel about elements or components within a design (e.g., which components are useful or useless, clear or confusing)

How to run a session

Step 1

Choose an early-stage concept or design you’re working on that you’d like feedback on.

Step 2

Break the concept into all its separate components and elements. For example, if you're sketching a new idea for an e-commerce website, separate out the navigation components, sidebar, search, and so on.

Step 3

If you want, you can even add in competitors' components that you don't have within your current product or idea to understand what participants think of them.

Step 4

Put each component on a notecard or virtual sticky note. Bonus points if you make the notecards magnetic.

Step 5

Depending on your product, hang up a canvas (or add one to Miro) showing an empty browser window or platform frame. Ideally, strip all color from both the canvas and the components.

Invite participants to one-on-one sessions and ask them to build their ideal design using any elements they want, as few or as many as they need.

Step 6

After the participant completes the design, discuss why they used those components and the thought process behind their decisions.

Step 7

To analyze, look at:

  • Similarities and differences between participants' ideal designs
  • The percentage of elements used from each product or site (if you included competitors' components)
  • How often participants chose each component
  • The most popular components overall
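
If you log each participant's chosen components as you go, this tallying is easy to script. Here is a minimal Python sketch, assuming a hypothetical session log keyed by participant; the component names and data are purely illustrative, not from a real study.

    from collections import Counter

    # Hypothetical session log: each participant's chosen components (illustrative data).
    sessions = {
        "P1": ["search bar", "mega menu", "wishlist", "filters"],
        "P2": ["search bar", "filters", "recommendations"],
        "P3": ["search bar", "mega menu", "filters", "reviews"],
    }

    # Tally how often each component was chosen across all sessions.
    usage = Counter(c for components in sessions.values() for c in components)

    total = len(sessions)
    for component, count in usage.most_common():
        # Percentage of participants who included this component in their ideal design.
        print(f"{component}: chosen by {count}/{total} participants ({count / total:.0%})")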

Remember that we use FIDO to build a design and understand participants' perceptions and reactions to components. The next step would be usability testing to understand if this design/experience works.

2. Run of Post

Caroline Jarrett invented Run of Post (and also wrote the fantastic book Surveys That Work). A Run of Post study examines the off-platform digital communication a specific user group gets from your organization over a reasonable period of time to see if it makes sense as a whole.

When to use Run of Post

Run of Post is excellent in helping you understand how and what you’re communicating to certain user groups and if it makes sense. Usually, communication is fragmented across different teams, and Run of Post can give you visibility into communication strategy across an organization to highlight inconsistencies, gaps, or missed opportunities.

Although Run of Post is intended for off-platform communication, I’ve also used it to understand the connection between on-platform interactions and communication. For example, if a user group interacts only with sale-based items in my product, are we using that insight to communicate sales to this group more effectively?

How to conduct a Run of Post

Step 1

Decide on a user group you want to focus on. This can be a particular segment or persona. It's essential to choose one specific group rather than "all communication, with all users," because the broader scope makes it challenging to analyze and make recommendations.

Step 2

Choose an appropriate amount of time. For example, if you communicate with the user group only once a month, you might want to look at the past three or even six months of communication. However, if you’re sending emails or other forms of communication multiple times a week, you could use two weeks or a month's worth of communication.

Step 3

Gather all the communication from the determined time into a Miro board or by printing it out and hanging it up.

Step 4

Analyze the content by looking for any discrepancies in the communication. What doesn't make sense together? Is there a story or narrative? Are you missing opportunities to communicate certain topics or ideas to this audience?

Step 5

Create a strategy or plan to implement changes and continue to monitor the communication every quarter or half-year.

3. Cloze test

A cloze test is a fancy way of describing what some of us might remember as "Mad Libs." In this test, you take a text sample, remove specific words, and ask participants to fill in what they believe the missing words are. Through this test, participants must rely on the context and their knowledge of your product.

When to use cloze tests

Cloze tests are great for determining how appropriate and understandable text is for your audience. These tests are beneficial when dealing with highly complex topics, such as legal or healthcare information. However, you can still use a cloze test to assess the understandability of any website.

How to run a cloze test

Step 1

Take a bit of text from your website, about 250 words. Use text you aren't sure about, or that people have had problems with in the past. For example, if customer support gets many calls on how your product works, test the text currently on your website about that subject.

Step 2

Once you choose the text, take out every fifth word and replace it with a blank space. You will ideally have around 25 blanks in your text and no more than 50. Mad Libs can be cognitively exhausting!
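
If your text lives in a plain file, you can automate the blanking step. Below is a minimal Python sketch, assuming you simply want every fifth word replaced with a blank and an answer key kept for scoring; the function name and sample sentence are hypothetical, for illustration only.

    import re

    def make_cloze(text: str, nth: int = 5) -> tuple[str, list[str]]:
        """Blank out every nth word and return the cloze text plus the answer key."""
        words = text.split()
        answers = []
        for i in range(nth - 1, len(words), nth):
            # Keep trailing punctuation outside the blank so the sentence still reads naturally.
            word = re.sub(r"[^\w'-]+$", "", words[i])
            answers.append(word)
            words[i] = words[i].replace(word, "_____", 1)
        return " ".join(words), answers

    # Illustrative sample text, not taken from a real website.
    sample = ("Your order ships within two business days and you can track "
              "it from the account page at any time after checkout.")
    cloze_text, answer_key = make_cloze(sample)
    print(cloze_text)
    print(answer_key)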

Step 3

Ask participants to fill in the blank spaces with the word they believe should be used.

Step 4

To score the test, count the number of correct answers and divide that by the total blank words. Then, turn this number into a percentage. For example, if I had 25 total blanks and a participant got 15 right, they would get a score of 60%.

After scoring, you can use the following benchmarks:

  • 60% or higher indicates the text makes sense to the audience.
  • 40% to 60% suggests the reader might have difficulty understanding the text.
  • Under 40% indicates that the text is not well understood, and people will struggle to process it.
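
To make the Step 4 arithmetic and these benchmarks concrete, here is a tiny Python sketch that turns a participant's correct answers into a percentage and maps it to the bands above (the function name is just for illustration).

    def cloze_score(correct: int, total_blanks: int) -> tuple[float, str]:
        """Return the score as a percentage plus the benchmark band it falls into."""
        score = correct / total_blanks * 100
        if score >= 60:
            band = "text likely makes sense to the audience"
        elif score >= 40:
            band = "reader might have difficulty understanding the text"
        else:
            band = "text is not well understood"
        return score, band

    # Worked example from Step 4: 15 correct answers out of 25 blanks.
    print(cloze_score(15, 25))  # (60.0, 'text likely makes sense to the audience')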

Want more ideas on how to test content? Check out this article!

4. True intent study

A true intent study is a type of intercept survey. You know those pop-ups you encounter on websites when you go to leave (they sometimes ask an NPS question), or when you hover over a particular area? Yup. That's an intercept survey.

A true intent study intercepts a visitor to your product to ask them questions about their experience, needs, pain points, or goals. We use true intent studies to understand who is visiting our product and if they can accomplish their tasks and goals or how their experience was.

When to use a true intent study

True intent studies are great for quickly collecting data on:

  • Who is visiting and using your product
  • What people are trying to do on your product
  • Whether people can accomplish their tasks/goals on your product
  • How people might improve their current experience

A successful true intent study needs many responses to analyze properly. If you don't have a lot of traffic coming to your product, this might not be the best method. Also, keep the open-ended questions to a minimum, as people are less likely to answer them. And keep the survey short, ideally one to three questions.

How to run a true intent study

Step 1

Choose what questions you want to have answered and the area of your product you’re especially interested in. For example, let's say you have a checkout funnel and are interested in learning more about people abandoning their cart.

Other questions you could ask include:

  1. What are the main reasons for visiting/using the product? (Multiple choice)
  2. What are they trying to do today? (Multiple choice)
  3. How was their experience today? (Scale)
  4. Could they accomplish what they were trying to do today? (Yes/No)
  5. What difficulties did they encounter? (Open-ended or multiple choice)
  6. How would they improve the experience? (Open-ended)

Step 2

Consider also including usability metrics or scales, such as ease of use, the Single Ease Question (SEQ), confidence, etc.

Step 3

Decide on no more than three questions to ask, and choose when and how the survey will intercept visitors, such as:

  1. When they try to exit the website/app
  2. When they hover over a certain area
  3. After a certain amount of time on a page

Step 4

Determine the amount of time the survey should run to collect the necessary amount of responses.
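
A rough back-of-the-envelope calculation can help here. The Python sketch below uses hypothetical traffic, intercept, and response rates (none of these numbers come from the article); plug in your own figures to estimate how long the survey needs to run.

    import math

    # All numbers below are illustrative assumptions, not benchmarks from the article.
    responses_needed = 100   # target replies per intercept (see Step 5)
    daily_visitors = 2_000   # average daily traffic on the intercepted page
    intercept_rate = 0.20    # share of visitors shown the survey
    response_rate = 0.05     # share of intercepted visitors who actually answer

    responses_per_day = daily_visitors * intercept_rate * response_rate
    days_to_run = math.ceil(responses_needed / responses_per_day)
    print(f"~{responses_per_day:.0f} responses/day, so run for about {days_to_run} days")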

Step 5

Put the survey on your product and collect responses in your given timeframe (ideally, over 100 replies per intercept).

Step 6

Analyze the responses by doing one or more of the following (see the sketch after this list):

  • Looking at the percentage of people who came to do specific tasks
  • Evaluating the average experience of people
  • Finding any similarities in the open-ended questions
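
As a rough illustration of that analysis, here is a minimal Python sketch over hypothetical responses; the task labels, ratings, and field names are assumptions for the example, not data from a real study.

    from collections import Counter
    from statistics import mean

    # Hypothetical intercept responses (illustrative data): task chosen,
    # experience rating on a 1-5 scale, and whether the visitor accomplished their goal.
    responses = [
        {"task": "track an order", "rating": 4, "accomplished": True},
        {"task": "compare plans", "rating": 2, "accomplished": False},
        {"task": "track an order", "rating": 5, "accomplished": True},
        {"task": "contact support", "rating": 3, "accomplished": True},
    ]

    total = len(responses)
    task_counts = Counter(r["task"] for r in responses)

    # Percentage of visitors who came to do each task.
    for task, count in task_counts.most_common():
        print(f"{task}: {count / total:.0%} of respondents")

    # Average experience rating and task-completion rate across all responses.
    print(f"Average rating: {mean(r['rating'] for r in responses):.1f} / 5")
    print(f"Accomplished goal: {sum(r['accomplished'] for r in responses) / total:.0%}")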

Although it’s easy to get stuck in using the same methods, there are plenty of others out there we can try. The next time a study calls for answering any of these questions, try applying these methods! The only way to get better at these methods is to practice, so always remember to conduct a dry run or two to start.

Nikki Anderson-Stanier is the founder of User Research Academy and a qualitative researcher with 9 years in the field. She loves solving human problems and petting all the dogs. 


To get even more UXR nuggets, check out her user research membership, follow her on LinkedIn, or subscribe to her Substack.
