How to Create a UX Scorecard (8 Steps)

If you're faced with questions such as "How does our product compare to others in the industry?" or "How has our product changed over time?", try creating a UX scorecard.

Words by Nikki Anderson, Illustration by Allison Corr

For a large portion of my early career, I focused on formative evaluations, which look at what is and isn't working in a product or prototype. These evaluations often came in the form of usability testing: I would take small slices of the product or new ideas and collect usability issues.

This approach was helpful until I started getting specific questions from stakeholders for which usability testing didn't feel like the right method:

  • How does our product compare to others, such as competitors or similar products in the same industry?
  • How has our product experience changed over time?

Other formative methods (such as tree testing for hierarchy and navigation) weren't appropriate either, as these questions are better suited to summative evaluations.

Eventually, I learned about benchmarking, which looks at a product's experience over time. I loved benchmarking because, for once, I felt I could prove the ROI of research by showing improvements over time.

Unfortunately, I didn't always have the resources necessary for a benchmarking study, but that didn't stop the above questions from rolling in. I tossed around multiple ideas on how to approach this, until I learned about UX scorecards.

What are UX scorecards and when should you use them?

UX scorecards can help with summative evaluations and address the types of questions above. The goals of UX scorecards are to:

  1. Assess the overall usability of a product
  2. Track usability over time
  3. Compare the product with competitors

There are several different ways to go about UX scorecards and evaluate your product on a summative level.

A heuristic evaluation is one way to evaluate your product against industry standards. A heuristic evaluation is an overall review of the user experience of your product, website, or app.

You look for gaps in the experience and judge your product/website/app against common usability heuristics such as Jakob Nielsen's 10 Usability Heuristics for User Interface Design.

However, there are times when I’ve found a heuristic evaluation wasn’t quite enough. In some scenarios, the results have gotten lost in translation as some of the heuristics were difficult for stakeholders to understand or even too difficult for me to assess.

Of course, that got easier with practice and watching other experts. But I wanted to go deeper into an evaluation beyond these heuristics, so I decided to try UX scorecards.

How to create a UX scorecard

The first time I created a UX scorecard, I wasn't sure exactly which components I should include or how to make it impactful. After some research (and practice), I finally created an approach. Here are the exact steps I take to create UX scorecards:

1. Think about your goals

The first step is to ensure your goals align with a UX scorecard. I mentioned some goals above, but here are the most common project goals I have in mind when I decide to use UX scorecards:

  1. Understand the end-to-end usability of a product when it comes to efficiency, effectiveness, and satisfaction
  2. Identify the issues and problems across an entire product
  3. Uncover the differences in usability between our product and the industry benchmarks or competitors
  4. Track the usability of our product, specifically efficiency, effectiveness, and satisfaction, over time

If your goals are similar to the above, a UX scorecard might be the right approach for you.

2. Decide on the metrics

The point of a UX scorecard is to evaluate your product, so the next step is to decide which metrics you will use.

There are standard metrics for usability, and I have relied heavily on them to create UX scorecards. The metrics I use to measure usability are:

Task-level:

  1. Task success (effectiveness): Whether or not someone can complete a task or struggles with it
  2. Time on task (efficiency): The amount of time it takes for someone to complete, or give up on, a task
  3. Single Ease Question (ease of use): Measures the perceived difficulty or ease of a task
  4. Single Usability Metric (usability): Combines task success rates, time on task, satisfaction, and error counts into one metric to define a task experience. If you use this metric, you need to ensure you track task success, time on task, satisfaction, and the number of errors per task.
  5. User confidence (perception): Asks how confident users are that they completed a task successfully

Product-level:

  1. System Usability Scale (satisfaction): Looks at the overall usability and satisfaction of a product
  2. Standardized User Experience Percentile Rank Questionnaire (performance): Measures users' perceptions around usability, trust and credibility, appearance, and loyalty

All of the metrics above, particularly the questionnaires, are standardized instruments for measuring usability and satisfaction. I use them instead of writing my own because they are more valid and reliable.
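
If it helps to see the arithmetic behind these metrics, here is a minimal Python sketch of how the task-level numbers and a SUS score might be tallied. The data structure and the 1–7 scales for SEQ and confidence are illustrative assumptions, not the output of any particular tool; the SUS scoring (odd items score answer minus one, even items five minus answer, summed and multiplied by 2.5) is the questionnaire's standard formula.

```python
from statistics import mean

# One result dict per participant per task. The field names and the 1-7
# scales for SEQ and confidence are illustrative assumptions.
task_results = [
    {"task": "Search for a destination", "success": True,  "time_sec": 42,  "seq": 6, "confidence": 7},
    {"task": "Search for a destination", "success": True,  "time_sec": 55,  "seq": 5, "confidence": 6},
    {"task": "Search for a destination", "success": False, "time_sec": 120, "seq": 3, "confidence": 4},
]

def task_metrics(results):
    """Aggregate effectiveness, efficiency, ease, and confidence for one task."""
    return {
        "success_rate": sum(r["success"] for r in results) / len(results),  # effectiveness
        "mean_time_sec": mean(r["time_sec"] for r in results),              # efficiency
        "mean_seq": mean(r["seq"] for r in results),                        # ease of use
        "mean_confidence": mean(r["confidence"] for r in results),          # perception
    }

def sus_score(responses):
    """Score one participant's System Usability Scale answers (ten items, 1-5).
    Odd-numbered items contribute (answer - 1), even-numbered items (5 - answer);
    the total is multiplied by 2.5 to give a 0-100 score."""
    assert len(responses) == 10
    odd = sum(r - 1 for r in responses[0::2])   # items 1, 3, 5, 7, 9
    even = sum(5 - r for r in responses[1::2])  # items 2, 4, 6, 8, 10
    return (odd + even) * 2.5

print(task_metrics(task_results))
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```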

3. Choose and write the tasks

As you can see above, while some metrics are on the product level, many are on the task level, so the next step is choosing the tasks you will test. I often get asked how I select the tasks, especially when dealing with a complex domain. Whenever choosing tasks for a UX scorecard study, I always choose the most critical tasks for users.

The most critical tasks are those that a person would need to complete to achieve their goals. Another way to look at it is by thinking about the point of the product. For example, if I worked at a travel company that sold tickets to people, the most important tasks would be:

  1. Searching for a destination
  2. Selecting a ticket
  3. Filling out any necessary forms (billing, travel information, etc.)
  4. Purchasing the ticket
  5. Finding/downloading the ticket once purchased

I always ask myself, "What actions must people take to achieve what they want, and what we want them to do?" I then brainstorm all the tasks and ask stakeholders for feedback.

Once I define a list of tasks, I write them in a realistic and actionable way. If you aren't sure how many tasks to include, think about how much time you have and the complexity of each task. Then, conduct several dry runs internally to determine the ideal number of tasks. Generally, I include 5-10 tasks in a 60-minute test.

4. Create your "grading rubric"

Once you have defined the metrics and tasks, it is time to create a grading rubric. The entire point of a UX scorecard is to grade your product, often against others, so it is essential to understand how you will score the above tasks on the chosen metrics.

Whenever using UX scorecards, I grade on a scale from A to F, as it is a standard mental model for people reading the scorecards. They know "A" means we are doing well, "C" is average, and "F" means we are messing up.

Now, unfortunately, we can't just stick with those definitions. Instead, we need to ensure our grading rubric is more granular and applicable to each of our above metrics.

Here is my overall definition of the grades:

  • A: The task and overall experience are functional, reliable, extremely easy to use, and delightful for the customer.
  • B: The user can complete tasks and use the product without a problem. The product is functional, reliable, usable, and generally satisfactory.
  • C: The user can complete tasks and use the product but might experience some cognitive load, including hesitation, struggle, or confusing steps. The product is functional, somewhat reliable, and relatively usable.
  • D: It is difficult for the user to complete tasks and use the product. The user sometimes fails at certain tasks. The product is somewhat functional and reliable.
  • F: Users cannot complete the tasks or use the product, and the experience is viewed as very poor. The product might be functional but is not considered reliable, usable, or satisfactory.
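
To make the rubric operational, you eventually need cut-off points on your chosen metrics. The sketch below shows one way to translate a task's numbers into a letter grade; the thresholds are placeholder assumptions, not standard values, so calibrate them against your own benchmarks or competitive data.

```python
def grade_task(success_rate, mean_seq):
    """Map effectiveness (0-1 success rate) and perceived ease (mean SEQ, 1-7)
    to a letter grade. The thresholds are illustrative assumptions only;
    calibrate them against your own benchmarks or industry data."""
    if success_rate >= 0.95 and mean_seq >= 6.5:
        return "A"  # easy, reliable, delightful
    if success_rate >= 0.90 and mean_seq >= 5.5:
        return "B"  # completed without real problems
    if success_rate >= 0.78 and mean_seq >= 4.5:
        return "C"  # completed, but with hesitation or confusion
    if success_rate >= 0.50:
        return "D"  # difficult; some users fail
    return "F"      # most users cannot complete the task or use the product

print(grade_task(success_rate=0.92, mean_seq=5.8))  # "B"
```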

5. Determine your competitors

Since UX scorecards are about comparison, an important step is to choose which competitors or similar products to assess. This will give you a straightforward way to benchmark your product compared to others in your industry. If you aren't sure who to choose, ask your colleagues or senior leadership about the companies on their minds when it comes to your industry.

6. Internal, external, or both

You primarily create UX scorecards by evaluating a series of tasks. I haven't mentioned methodology yet because completing tasks and assessing a product fall under usability testing. However, there is one central question regarding UX scorecards: do you run the study internally, externally, or both?

You complete heuristic evaluations without speaking to users, and you can also run scorecard testing internally. Internal testing is not an excuse to never test with users. However, depending on scope and budget, you might need to start by testing your product and competitors internally. Just make sure you report this when sharing your findings, and always follow up with external testing.

7. Conduct your study

Conducting a UX scorecard study is similar to a benchmarking or usability test. Check out my usability test checklist and how to tackle a benchmarking study for more details.

8. Create your scorecard

Once you've run your study, you can put together your UX scorecard. I typically create several scorecards:

  1. A general scorecard looking at comparisons and averages of the metrics
  2. A series of specific scorecards that look at how different products perform at specific tasks

Here is an example of the general scorecard:

And an example of a specific scorecard (by task):

Overall, the most important thing you can do in your scorecards is to make straightforward comparisons between tasks and products. I use letter grades and color coding so the metrics and comparison points are clear at a glance. Check out my template on Canva and give it a try!
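
If you'd like a quick plain-text draft of the general scorecard before polishing it in a design tool, a few lines of Python can line products and grades up side by side. The product names and grades below are placeholders, not results from a real study.

```python
# Sketch of a plain-text "general scorecard": products as rows, metrics as
# columns. The products and grades are placeholders for illustration.
scorecard = {
    "Our product":  {"Success rate": "B", "Time on task": "C", "SEQ": "B", "SUS": "B"},
    "Competitor 1": {"Success rate": "A", "Time on task": "B", "SEQ": "A", "SUS": "A"},
    "Competitor 2": {"Success rate": "C", "Time on task": "D", "SEQ": "C", "SUS": "C"},
}

metrics = list(next(iter(scorecard.values())))
print(f"{'Product':<14}" + "".join(f"{m:>14}" for m in metrics))
for product, grades in scorecard.items():
    print(f"{product:<14}" + "".join(f"{grades[m]:>14}" for m in metrics))
```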

Nikki Anderson-Stanier is the founder of User Research Academy and a qualitative researcher with 9 years in the field. She loves solving human problems and petting all the dogs. 


To get even more UXR nuggets, check out her user research membership, follow her on LinkedIn, or subscribe to her Substack.
