People Nerds

How to Run Benchmarking Studies in Dscout

February 11, 2026

Overview

Learn how to run a benchmarking study in Dscout—from scoping your focus to analyzing results and identifying areas for improvement.

Contributors

Claire Ruggiero

Senior UX Researcher

Thumy Phan

Illustrator


Before we get into the ins and outs of running a benchmarking study in Dscout, it helps to level-set on what benchmarking actually is—and what it isn’t.

At its core, benchmarking is about establishing a baseline. It gives you a repeatable way to measure experience quality over time, across flows, or against alternatives, so you can move beyond one-off findings and track meaningful change. Benchmarking is most useful when you want clarity and consistency in how you evaluate performance.

Researchers use benchmarking to create durable baselines they can return to again and again. Designers use it to see whether iterations are truly improving the experience. And product managers use it to align teams around what “good” looks like, and measure progress in a way that’s easy to share and understand.

In this guide, we’ll walk through how to run a benchmarking study in Dscout—from setting up your approach to turning results into action.

Why run a benchmarking study?

Before you set up your study, you’ll want a clear understanding of what you’re trying to evaluate. There are three primary reasons to benchmark:

1. To assess your organization’s current performance and identify gaps

Benchmarking can add broader context to more passive, internal performance metrics like CSAT. A low CSAT score tells you the "what," but it can be difficult to pull the "so what?" from that number alone. Benchmarking helps you dig deeper into user feedback and form baselines against which you can measure your product's trajectory.

2. To better understand the competitive landscape

Benchmarking provides insights into how you compare to competitors in your industry, highlighting your strengths and weaknesses.

3. To drive continuous improvement

Benchmarking is growing in popularity, especially as continuous research expands across organizations. Folding benchmarking into continuous research fosters a culture of ongoing improvement by setting realistic performance targets aligned with industry best practices.

Step-by-step guide to benchmarking with Dscout 

Once you’ve decided to benchmark, it’s time to set up your study. While there are a number of ways you can benchmark in Dscout, I’ll take you through the general flow and highlight a couple of examples.

Step 1: Define your benchmarking focus

Select a product flow or experience to evaluate. Within it, identify the core tasks a user might navigate, and use those tasks to measure the flow's overall success.

For example, when we ran an internal benchmarking study, we wanted more color on our CSAT scores, so we focused on the recruitment and study design flows in our usability tool. Within those flows, defining recruitment criteria and drafting a Task question were two of the core tasks we identified.

Step 2: Define your participant criteria

When selecting your participants, you have the option to:

  • Auto-recruit from the Dscout pool of "Scouts," our vetted and verified participants.
  • Add participants from your pre-existing screeners.
  • Invite your own participants via an external share link or Dscout Private Panels.

Step 3: Design your study

There are many ways to run a benchmarking study in Dscout! Teams have a lot of flexibility to uncover the insights they need.

One way to run a benchmarking study is via our usability testing tool. 

Here’s how to set it up… 

  1. Via the home screen, create a “usability test.”
  2. Ask participants to conduct core tasks within your product or site (for our purposes, we asked our own participants to create a screener and design a study).
  3. After each task, include a Single Ease Question (SEQ) to gauge perceived ease of use. Within the Usability tool, you'll also have the option of including a self-reported completion question and/or selecting a success screen in your prototype (if you're using one) to track success and failure rates, another metric of interest in benchmarking (see the sketch following this list).
    • For example, our participants saw this question after each task:
      Overall, how difficult or easy did you find this task?
      1 (Very difficult) - 7 (Very easy)
  4. At the end of the study, one option is to ask participants to take a System Usability Scale (SUS) survey—a standard 10-question measure of perceived usability for the flow as a whole. Remember to design this into your study if you’d like to measure it!

Note: We intentionally chose constructed tasks over capturing organic behavior to ensure data consistency and reduce participant burden.
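If you want to compare these task-level metrics across waves, it helps to roll them up once you export your data. Here's a minimal sketch in Python, assuming a hypothetical CSV export where each row is one participant's attempt at one task; the file name and column names are illustrative, not Dscout's actual export schema:

```python
# Hypothetical sketch: summarizing per-task benchmark metrics from an
# exported CSV. The file name and column names ("task", "seq_score",
# "completed") are illustrative, not Dscout's actual export schema.
import pandas as pd

df = pd.read_csv("usability_export.csv")

summary = df.groupby("task").agg(
    mean_seq=("seq_score", "mean"),         # average 1-7 ease rating
    completion_rate=("completed", "mean"),  # share of successful attempts (0/1)
    n=("seq_score", "count"),               # sample size per task
)
print(summary.round(2))
```

The per-task means and completion rates become the baseline you revisit in later waves.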

Another great option is to invite participants to a media survey. 

Here’s how to set it up…

  1. Via the home screen, create a “Media Survey.”
  2. Compared to Usability testing, you won't have the option of continuously recording participants as they complete tasks, though you can still link participants to a prototype or website containing your flow of interest. A quick Media Survey can be a lighter lift for participants, especially if you're hoping to catch them mid-task or very shortly after.
  3. We leveraged an internal user list to invite participants who had freshly launched a Usability mission into our Media Survey. It contained the SUS survey, plus some customized single-select questions specific to the core tasks we were interested in. Participants who reported struggling with any core task were directed to an open-ended follow-up to provide a bit more detail.

Step 4: Run your study and collect data

After you’ve designed your study and recruited your participants, it’s officially launch time.

One key perk of running a benchmarking study in our usability testing tool is that Dscout offers an excellent continuous-recording feature. 

As participants complete tasks, their actions are recorded throughout the process. You can see precisely where they fumbled, where they caught themselves, and, step by step, how they navigated the core tasks. 

For the most part, a benchmarking study with the Media Survey tool will be like any other. Your entries will trickle in, and you can keep an eye on responses with charts and other data visualizations in the Responses tab. 

If you've included a standardized ease-of-use questionnaire (like SUS) in either mission type, you'll need to isolate that data and convert the raw responses into score contributions, though this is quick and simple once the responses are exported to a spreadsheet, as in the sketch below.
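The conversion itself is mechanical: in standard SUS scoring, each odd-numbered item contributes its response minus 1, each even-numbered item contributes 5 minus its response, and the sum is multiplied by 2.5 to land on a 0-100 scale. A minimal sketch in Python:

```python
# Minimal SUS scoring sketch. Each list holds one participant's raw 1-5
# responses to the ten SUS items, in order.
def sus_score(responses):
    """Convert ten raw 1-5 SUS responses into a 0-100 score."""
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items are positively worded: contribution is r - 1.
        # Even items are negatively worded: contribution is 5 - r.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale the 0-40 sum to 0-100

# Example: a participant who answers 4 on every item scores 50.0.
print(sus_score([4] * 10))
```

Averaging across participants gives you the headline score; a commonly cited rule of thumb treats scores above roughly 68 as above average.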

If you’re relying on just SEQ questions in Usability, you’ll get this score automatically.

Step 5: Analyze results and extract insights

For our own study, it was exciting to go back and watch the continuously recorded responses. Users completed the 10-question closed-ended SUS survey while being recorded, and most spoke aloud as they rationalized their responses. 

When I got our score, it was extremely helpful to look back at all the task recordings, note where people were struggling, and add context around how that score was achieved.

Overall, participants did find the usability tool satisfactory, but there were some notable points of friction in core tasks, which I summarized and defined for the product team to tackle. It was great to be able to make reels or clip specific instances where someone got confused by the UI and show them directly to our team of designers to address, which they have!
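To make the baseline idea concrete, here's a toy sketch of how a wave-over-wave comparison might look, with placeholder task names and scores rather than our actual results:

```python
# Toy sketch: comparing mean SEQ scores across two benchmark waves.
# Task names and scores are placeholders, not real study results.
baseline = {"define_recruitment_criteria": 5.8, "draft_task_question": 4.9}
current = {"define_recruitment_criteria": 6.1, "draft_task_question": 5.6}

for task, before in baseline.items():
    after = current[task]
    print(f"{task}: {before:.1f} -> {after:.1f} ({after - before:+.1f})")
```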

Pro-tips for benchmarking in Dscout

Benchmarking is rarely about chasing a perfect score; it's about learning in context and making adjustments.

Because you’re often blending structured metrics with real human behavior, a few small choices can make a big difference in how useful your results end up being. 

Keep these pro-tips in mind as you benchmark to get insights that are not just measurable, but genuinely actionable.

  • Embrace the hybrid nature. You’re using a qualitative tool to collect quantitative data—lean into that duality.
  • Design for talk-aloud moments. If you're using Usability, expect participants to narrate. Make space for it—it’s valuable.
  • Balance rigor with reality. Some methodological purists might balk at the compromises here—but in practice, the combination of SEQ + SUS + recorded feedback gives you a solid foundation for usability insights.
