Level Up Your Usability Tests with SUS and SEQ

See the pros and use cases for each technique and which caveats to look out for.

Words by Nikki Anderson, Visuals by Allison Corr

After years of solely focusing on qualitative research, I finally got on the survey bandwagon and it felt like a whole new realm of possibilities. I could now create questions to ask a large sample of users about their behaviors and feelings and follow up on qualitative studies.

After polishing up my survey-writing skills and sending out numerous projects, my teams were happy to finally get some quantitative data to pair with all of our qualitative insights. Questions about sample size came up less frequently, and I felt more confident in my research presentations.

However, there was still an area I was struggling in. I continued to conduct usability tests, ensuring that I recruited 5-7 participants per segment, but it didn't seem like enough. I couldn't give my team the same confident answers, and I had difficulty quantifying my results unless I recruited a lot more participants.

After a while, I started creating survey questions to ask participants how they felt about prototypes or our product, but I could never find the correct wording. All of my attempts felt wishy-washy and too qualitative.

Around that time, I happened to be attending a meet-up in New York City where one of the presenters spoke about user research metrics. I was hooked and immediately went home and Googled all the ways I could apply them to my work.

From then on, I never looked back.

User research metrics

There are many different types of metrics in user research, and quantitative data comes in many shapes and sizes, from product analytics to survey data.

But it wasn’t until the meet-up that I learned about metrics that focus on the interaction between users and the product. These were the missing piece in my presentations about usability tests and allowed me to go beyond my small sample sizes.

Here are the two metrics I now utilize the most in my research studies:

Single-ease questionnaire (SEQ)

The single-ease questionnaire is a one-question "survey" you can ask at the end of each task during a usability test. The SEQ is worded and labeled as:

"Overall, how difficult or easy did you find this task?" OR "Overall, this task was..."

1 = Very difficult

7 = Very easy

The pros:

  • The SEQ is just one question, which makes it quick for participants to answer and easy to administer
  • The SEQ performs as well as other, more complicated metrics, such as the Subjective Mental Effort Questionnaire (SMEQ)
  • You can use it across many products, software, and technologies
  • The SEQ has been studied and widely used, making it a reliable and valid metric
  • You can use it to compare yourself to competitors or other similar products
  • You can see where the most difficult tasks and problematic areas are in your product
  • The SEQ correlates to measures like task success and time on task

How to use it:

You use the SEQ during usability testing. After a participant completes a usability task, you immediately administer the SEQ. Optionally, you can ask the participant why they rated it the way they did, but it isn't necessary. You repeat this with all the tasks in the usability test.

Once you have conducted the test, you look across all the tasks to compare the scores users gave to each one. You can do this task by task to find areas of improvement. For example:

Task one: Update your password

User 1

  • SEQ score: 2
  • Why: Unable to find account settings

User 2

  • SEQ score: 3
  • Why: Unable to locate password change

User 3

  • SEQ score: 3
  • Why: Struggled to find account settings

Through this, we can see that the task felt difficult for users because they could not locate the key areas needed to complete it. With this knowledge, you can go back to your team to improve the experience.

Caveats:

As with any metric, we need to keep in mind some variables:

  • Participants can give a "difficult" score to a task that has good usability but is complex
  • Users can fail (or take a very long time on) a task yet mark it as very easy, which you would need to look into afterward
  • Use the SEQ alongside time on task and task success to make it even more reliable

System usability scale (SUS)

Unlike the SEQ, which looks at task-level usability, the SUS looks across the entire test or experience. Instead of evaluating each task, the SUS gives you a big-picture understanding of the participant's overall impression of usability and experience.

The SUS has ten questions, using a scale of 1-5; 1 = strongly disagree, 5 = strongly agree:

  • I think that I would like to use this system frequently.
  • I found the system unnecessarily complex.
  • I thought the system was easy to use.
  • I think that I would need the support of a technical person to be able to use this system.
  • I found the various functions in this system were well integrated.
  • I thought there was too much inconsistency in this system.
  • I would imagine that most people would learn to use this system very quickly.
  • I found the system very cumbersome to use.
  • I felt very confident using the system.
  • I needed to learn a lot of things before I could get going with this system.

The pros:

  • The SUS is easy to administer and easy for participants to understand/complete.
  • It is an industry-standard, used widely, which makes it a reliable and valid survey.
  • You can use the SUS over many different technologies, products, and software.
  • Since it is an industry-standard, you can often find competitors' SUS scores available (the SUS average is 68).
  • You can use the SUS to benchmark your product over time.

How to use it:

You can use the SUS in two ways:

  • After a usability test to assess the general experience and usability
  • On a website to measure the usability of a product over time

For example, let's say you ran a usability test. After the test, you would administer the SUS to each participant. Once every participant has completed it, calculate each SUS score:

  • For each of the odd-numbered questions, subtract one from the score
  • For each of the even-numbered questions, subtract their value from 5
  • Take these new values which you have found, and add up the total score
  • Multiply the total by 2.5

This could look like:

  • Odd scores: 3, 5, 3, 1, 3 -> 2, 4, 2, 0, 2
  • Even scores: 2, 4, 2, 4, 5 -> 3, 1, 3, 1, 0
  • Add the new values -> 18
  • Multiply the total by 2.5 -> 45

Your SUS score is 45.
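The scoring steps can also be sketched as a small function. This is a minimal sketch, not an official implementation; it assumes responses arrive in order from question 1 to question 10 on the 1-5 scale, and the example responses are hypothetical:

```python
# Convert ten SUS responses (1-5 scale, ordered Q1..Q10) into a 0-100 score.
def sus_score(responses):
    if len(responses) != 10:
        raise ValueError("SUS needs exactly ten responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd-numbered (positively worded) items: subtract 1 from the score.
        # Even-numbered (negatively worded) items: subtract the score from 5.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical responses: odd items 3, 5, 3, 1, 3 and even items 2, 4, 2, 4, 5
print(sus_score([3, 2, 5, 4, 3, 2, 1, 4, 3, 5]))  # -> 45.0
```

Averaging `sus_score` across all participants gives you the study-level number you can benchmark against the SUS average of 68.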

The other option is to use the SUS more passively, as a pop-up, to assess the overall experience. I have done this in the past, and it can yield useful data, though it is less reliable than administering the SUS after a moderated test.

For instance, some people could come to your website, see the SUS and respond blindly without using the website, or they could have only used a portion of the website. Therefore, instead of a pop-up, I have used the SUS at the end of a flow, such as after check-out.

Caveats:

As with any metric, we need to keep in mind some variables:

  • Scoring the SUS can be complex and challenging. It is best to check online to ensure you are scoring correctly
  • Although the scoring is from 0-100, they aren't percentages; they are scores
  • It does not point out specific issues or problems

Other metrics

Although the SUS and SEQ are my favorite metrics, there are others you can try:

  • Subjective Mental Effort Questionnaire (SMEQ), which measures the mental effort participants felt was involved in completing a task
  • After-Scenario Questionnaire (ASQ), which uses three questions to assess the usability of a task
  • NASA-TLX, which looks into the perceived workload that it took participants to complete a task
  • SUPR-Q measures the usability and quality of a product's user experience

I have used all these metrics and have found the SEQ and SUS to be the best performing. However, all of these metrics have been validated across many studies. So instead of trying to write the perfect question wording yourself, using these validated metrics helps ensure your data will be valid and reliable.

Triangulate your data

As with any survey data, this information is self-reported, so it is helpful to use other methods to qualify it. For instance, people may perceive a task to be easy, but product analytics may show a large number of people dropping off or failing to click through to the next step, signifying a problem with the flow.

Additionally, gathering low satisfaction scores does not tell you where the issues are. Instead, combine a metric like the SUS with a follow-up question about why users gave a particular score, or use 1:1 interviews to dive deeper.

As always, triangulating your data is the most effective way to get valid and reliable insights.

Nikki Anderson is the founder of User Research Academy and a qualitative researcher with 8 years in the field. She loves solving human problems and petting all the dogs. Explore her research courses here or read more of her work on Medium.
