Customer satisfaction is a great way to help us understand how our product is doing over time, especially when it comes to tracking the ROI or impact of user research.
The first time I used a customer satisfaction metric at an organization was through the commonly used Net Promoter Score (NPS). We used the NPS to track whether or not people would recommend our platform to others, such as a colleague, family member, or friend.
Even when I was first introduced to the NPS, I didn't feel it captured what satisfaction actually means. However, I didn't know how to define satisfaction for digital products, so I went with it.
A new approach to customer satisfaction
Years later, still somewhat disappointed with the NPS and its inability to tell us anything helpful, I started looking into other ways to measure customer satisfaction. With that, I attempted my own definition of satisfaction, one that didn't hinge on whether someone would recommend us. Instead, I wanted satisfaction to tell us something, to give teams more actionable information.
Over time, I developed a more pointed definition of customer satisfaction. Customer satisfaction looks at a given solution's efficiency, effectiveness, functionality, and reliability. For example, users will only be satisfied if a product functions reliably and makes tasks easy and quick.
Essentially, satisfaction is the degree to which a product or service aligns with customers' needs, eradicates pain points, and enables them to achieve their goals.
The NPS didn't make much sense to me, especially with this definition. Instead, there were a lot of additional ways to measure customer satisfaction that would be better suited and give teams more actionable information.
How I measure customer satisfaction
There is no one golden metric for measuring satisfaction. Instead, we should bring together multiple metrics to help identify and pinpoint real problems and potential solutions.
There are a few different metrics I now use when measuring customer satisfaction. These metrics have helped create more actionable and pointed insights for my teams and are much more specific than the broad strokes of the NPS.
Within these metrics, there are two types of satisfaction we will look at:
- Performance satisfaction, which has to do with a user's satisfaction and attitude about a particular task
- Perceived satisfaction, which has to do with a user's satisfaction and attitude about a product in general
Ideally, we bring both types into our practice to get a holistic picture of satisfaction. For example, perceived satisfaction will reveal what users think of your website or application, while performance satisfaction will indicate what to fix to improve satisfaction.
Perceived satisfaction metrics
✔ System usability scale
When I was first introduced to the System Usability Scale (SUS), I thought I had hit gold. The SUS is a widely used and widely cited 10-item questionnaire for assessing the perceived ease of using a website, app, or platform.
It includes the following statements, each rated on a five-point agreement scale (1 = Strongly disagree, 5 = Strongly agree):
- I think that I would like to use this system frequently
- I found the system unnecessarily complex
- I thought the system was easy to use
- I think that I would need the support of a technical person to be able to use this system
- I found the various functions in this system were well integrated
- I thought there was too much inconsistency in this system
- I would imagine that most people would learn to use this system very quickly
- I found the system very cumbersome to use
- I felt very confident using the system
- I needed to learn a lot of things before I could get going with this system
Usually, you administer the SUS either after a usability test or on an ongoing basis, independent of any testing. Because it comes at the end of a test (or outside of one entirely), it is a perceived satisfaction metric: participants answer based on their general experience with the platform, website, or app.
The SUS gives an excellent overall understanding of perceived ease of use, which is related to overall satisfaction (as mentioned above).
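The SUS also comes with a standard scoring procedure that converts the ten responses into a 0–100 score: odd-numbered (positive) items contribute their response minus 1, even-numbered (negative) items contribute 5 minus the response, and the sum is multiplied by 2.5. A minimal sketch in Python (the function name is my own):

```python
def sus_score(responses):
    """Convert ten SUS responses (1-5 Likert) into a 0-100 score.

    Standard SUS scoring: odd-numbered (positive) items contribute
    (response - 1); even-numbered (negative) items contribute
    (5 - response). The sum (0-40) is multiplied by 2.5.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("Expected ten responses on a 1-5 scale")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A participant who strongly agrees with every positive item (5) and
# strongly disagrees with every negative item (1) scores the maximum.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```

Note that a SUS score is not a percentage; 68 is often cited as the average across studies, so scores should be read against that benchmark rather than against 50.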
✔ Usability metric for user experience
The Usability Metric for User Experience (UMUX) and the UMUX-Lite are newer metrics than the SUS. Their benefit is that they are shorter, and their items better target the current definition of usability (a combination of efficiency, effectiveness, and satisfaction).
The UMUX has four items, two positively and two negatively worded, each rated on a seven-point response scale:
- [This system's] capabilities meet my requirements
- Using [this system] is a frustrating experience
- [This system] is easy to use
- I have to spend too much time correcting things with [this system]
The UMUX-Lite is a further improvement on the UMUX, containing only two questions with a seven- or five-point response scale:
- [This system's] capabilities meet my requirements
- [This system] is easy to use
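Both questionnaires can likewise be reported on a 0–100 scale: UMUX items are rescaled to 0–6 (positive items: response minus 1; negative items: 7 minus response) and the sum is divided by the maximum of 24, while the UMUX-Lite rescales its two positive items against a maximum of 12. A sketch, with function names of my own choosing:

```python
def umux_score(responses):
    """UMUX: four items on a 1-7 scale; items 1 and 3 are positive,
    items 2 and 4 negative. Each item is rescaled to 0-6, then the
    sum (0-24) is scaled to 0-100."""
    if len(responses) != 4:
        raise ValueError("UMUX has exactly four items")
    total = sum((r - 1) if i % 2 == 1 else (7 - r)
                for i, r in enumerate(responses, start=1))
    return total / 24 * 100

def umux_lite_score(r1, r2):
    """UMUX-Lite: two positively worded items on a 1-7 scale,
    rescaled so the result also falls on 0-100."""
    return ((r1 - 1) + (r2 - 1)) / 12 * 100

# The best possible responses (7 on positive, 1 on negative items).
print(umux_score([7, 1, 7, 1]))  # → 100.0
```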
✔ Satisfaction scales
General satisfaction scales are another way to understand the perceived satisfaction of a product. The most commonly used is a bipolar scale, with opposite anchors at each end.
A bipolar satisfaction scale would look something like this:
How satisfied or dissatisfied are you with [X]?
- 1 = Very dissatisfied
- 4 = Mixed (equally satisfied and dissatisfied)
- 7 = Very satisfied
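A common way to report results from a scale like this is the mean rating alongside a top-box percentage, the share of respondents choosing the highest points (here, 6 or 7). A small helper, assuming a 7-point scale:

```python
from statistics import mean

def summarize_scale(ratings, top_box=(6, 7)):
    """Summarize 7-point satisfaction ratings as (mean score,
    top-box percentage). Top-box = share of respondents who
    picked one of the highest scale points."""
    pct_top = 100 * sum(r in top_box for r in ratings) / len(ratings)
    return round(mean(ratings), 2), round(pct_top, 1)

ratings = [7, 6, 4, 5, 7, 3, 6, 6]
print(summarize_scale(ratings))  # → (5.5, 62.5)
```

Reporting both numbers matters: the mean can hide polarization, while the top-box percentage shows how many users are genuinely happy.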
✔ Disconfirmation scales
Disconfirmation scales examine how well (or how poorly) a product meets users' expectations. They are certainly less common, but something to consider.
A disconfirmation scale might look something like this:
How was your experience with [X]?
- 1 = Much worse than I expected
- 4 = About what I expected
- 7 = Much better than I expected
Remember that these scales are less widely studied; they may also correlate better with customer retention than standard satisfaction or performance scales do.
Performance satisfaction metrics
✔ Single ease question
Because efficiency is tied to satisfaction, the Single Ease Question (SEQ) is a great metric for measuring customer satisfaction. Since it is a performance satisfaction metric, you ask it right after a user completes a usability test task.
The question is:
Overall, how difficult or easy was the task to complete?
- 1 = Very difficult
- 4 = Neither difficult nor easy
- 7 = Very easy
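Because the SEQ is asked per task, aggregating the ratings by task makes the hardest tasks easy to spot (practitioners often cite an average SEQ of roughly 5.5 as a rough benchmark). A sketch, assuming a hypothetical input format of (task, rating) pairs:

```python
from collections import defaultdict
from statistics import mean

def mean_seq_by_task(responses):
    """responses: (task_name, seq_rating 1-7) pairs collected right
    after each usability-test task. Returns the mean SEQ per task,
    so the hardest tasks stand out for follow-up."""
    by_task = defaultdict(list)
    for task, rating in responses:
        by_task[task].append(rating)
    return {task: round(mean(r), 2) for task, r in by_task.items()}

responses = [("checkout", 6), ("checkout", 4), ("search", 7)]
print(mean_seq_by_task(responses))  # → {'checkout': 5.0, 'search': 7}
```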
✔ After scenario questionnaire
The ASQ, similar to the SEQ, assesses how difficult a user perceived a task to be in a usability test. It's a bit dated now that the SEQ exists, but it's worth knowing as an alternative.
The ASQ covers efficiency, effectiveness, and the level of support received throughout a scenario or task.
The ASQ has three statements with a seven-point (1 = Strongly disagree, 7 = Strongly agree) response scale:
- Overall, I am satisfied with the ease of completing the task in this scenario
- Overall, I am satisfied with the amount of time it took to complete the task in this scenario
- Overall, I am satisfied with the support information (online help, messages, documentation) when completing the task
✔ Confidence rating
Confidence in what we do within a product is essential to the user experience. Just because we’re completing our tasks doesn't mean we feel good about it. When users are confident in what they do (and can do) within your product, it helps raise satisfaction levels.
To measure confidence, you ask:
Overall, how confident are you that you completed the task successfully?
- 1 = Not at all confident
- 4 = Not sure
- 7 = Extremely confident
Using confidence alongside task success is a great way to understand where problems might occur in your experience. A combination of high self-reported confidence and task failure points to a significant issue: you don't want users to feel confident they completed a task when they actually did it incorrectly.
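One way to operationalize this is to cross-reference confidence ratings with task outcomes and flag the mismatches. A sketch, assuming hypothetical session records of (participant, confidence, success):

```python
def flag_confident_failures(sessions, threshold=5):
    """Given (participant, confidence 1-7, task_success bool) records,
    return the participants who failed the task yet reported high
    confidence -- the dangerous mismatch described above."""
    return [p for p, conf, success in sessions
            if not success and conf >= threshold]

sessions = [
    ("p1", 7, True),   # confident and successful: fine
    ("p2", 6, False),  # confident but failed: a red flag
    ("p3", 2, False),  # failed and knew it: a different problem
]
print(flag_confident_failures(sessions))  # → ['p2']
```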
How to set up a customer satisfaction practice
Now that we've covered some ways to measure customer satisfaction, how do you set up a customer satisfaction practice at your organization? Perceived satisfaction takes more effort (and time) to set up, since performance satisfaction is generally gathered during the usability tests you already run.
There are quite a few steps to set up a perceived satisfaction practice, but I promise it's worth it! I've also created a guide with examples you can use to dive into this process even further!
Here are the steps I go through to set up a perceived satisfaction practice:
1. Understand the general experience or journey of your users so you know the different ways they might interact with your product.
2. Identify where you support users the least in their journey, as these are great trigger points for understanding customer satisfaction. Users tend to remember their low points the most.
3. Decide on the metrics you want to use for your satisfaction practice.
4. Choose how you will reach out to users (and how often). You can reach out to users in many ways, such as through an in-platform survey, email, or dedicated study. You should also decide how often to reach out to customers. There are several options:
- Continuous measurement, where you continuously gather satisfaction metrics
- Periodic intervals, such as every quarter or six months
- Project-based intervals, such as after specific usability tests
5. Bring together qualitative and quantitative data. Remember that these metrics will tell you what is happening and help you identify issues but won't tell you why the problems are occurring. As much as you can, it's vital to bring in qualitative data to help understand the full scope of the problem.
6. Have a space to record and track feedback, such as a dashboard or Miro board, where you can analyze it on an ongoing basis. I usually interpret the score and attach qualitative feedback as evidence for more pointed recommendations.
7. Make improvements and take action based on the feedback. I usually do this through an ideation workshop or an internal hackathon.
8. Acknowledge and record your progress. Understand how the satisfaction scores change over time, especially after you've improved the experience. Recording the changes to satisfaction is a great concept to showcase in a case study.
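Even a very simple data structure can support the recording and tracking in the steps above. A sketch with hypothetical field names:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SatisfactionRecord:
    measured_on: date
    metric: str      # e.g. "SUS", "UMUX-Lite", "SEQ"
    score: float
    notes: str = ""  # qualitative evidence behind the number

def score_trend(records, metric):
    """Return (date, score) pairs for one metric, oldest first,
    so changes after an improvement are easy to spot."""
    points = [(r.measured_on, r.score)
              for r in records if r.metric == metric]
    return sorted(points)

records = [
    SatisfactionRecord(date(2024, 6, 1), "SUS", 72.5, "after checkout redesign"),
    SatisfactionRecord(date(2024, 1, 1), "SUS", 65.0, "pre-redesign baseline"),
]
print(score_trend(records, "SUS"))
```

Keeping the qualitative notes next to each score makes it much easier to explain *why* a number moved when you present the trend in a case study.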
Setting up a customer satisfaction practice at your organization is challenging but extremely rewarding. It’s an excellent way to get continuous feedback and data from customers and can showcase your impact as a user researcher!
Ready to set up your own customer satisfaction practice?
In this free guide, I break down each of the steps listed above with concrete examples to help you keep track of your customer satisfaction over time and better demonstrate the impact of your research.
Nikki Anderson-Stanier is the founder of User Research Academy and a qualitative researcher with 9 years in the field. She loves solving human problems and petting all the dogs.
To get even more UXR nuggets, check out her user research membership, follow her on LinkedIn, or subscribe to her Substack.