March 7, 2023
User research can come in waves. For a few weeks, you’ll run around talking to many participants and internal stakeholders, then suddenly, boom. Nothing.
For whatever reason, be it budget, buy-in, pandemics, or the season, user research comes to a screeching halt. You feel unproductive, stuck, less motivated, and unable to provide insights for your team. At least, this is a part of the user research ocean I have experienced.
When user research slows to a simmer and you can't speak with users, what can you do to continue providing value?
During this time, one method I always turn to is the heuristic evaluation.
What is a heuristic evaluation?
A heuristic evaluation is an overall review of your product, website, or app with regard to the user experience. You are looking for gaps in the experience and judging the product against a set of common usability heuristics. You are discovering whether it violates those heuristics; hence the name, heuristic evaluation.
Many people use the set of ten heuristics from Jakob Nielsen's 10 Usability Heuristics for User Interface Design:

1. Visibility of system status
2. Match between the system and the real world
3. User control and freedom
4. Consistency and standards
5. Error prevention
6. Recognition rather than recall
7. Flexibility and efficiency of use
8. Aesthetic and minimalist design
9. Help users recognize, diagnose, and recover from errors
10. Help and documentation
Nielsen's article includes concrete examples of each heuristic.
Why conduct a heuristic evaluation?
There are some great reasons why heuristic evaluations are helpful, especially during the down times of user research: they are fast, they are inexpensive, and they let you keep surfacing usability issues without recruiting a single participant.
That said, heuristic evaluations are not a replacement for usability testing or speaking to users. They provide a foundation for improving the experience alongside usability testing, or before you go into a usability test. Just because you find issues does not necessarily mean you will get answers; proper usability testing is essential to ensure you are building the correct solution.
Some challenges come with conducting heuristic evaluations, too: you need several trained evaluators, expert reviews can raise false alarms that real users would never hit, and evaluators often disagree about how severe an issue really is.
How do I get started?
When I first started learning how to conduct heuristic evaluations, the best thing I did was practice. I strongly recommend using three to five evaluators. Having more than one evaluator helps avoid false alarms and makes it easier to prioritize the issues found. If all the evaluators rate something as critical, that issue should go to the top. Here are the steps I use (and share with others) to conduct a heuristic evaluation:
1. Understand the scope and objectives.

The first step in any research project is to understand the scope and objectives of the research. Will you be evaluating the entire product? If you look at a whole product, you must assess every page and interaction. You can also break the heuristic evaluation into smaller parts, such as focusing on the registration flow, checkout, or navigation.
When evaluating a previous product, I focused on one section at a time, such as the checkout funnel. Once we assessed each piece separately, we went through the entire product to make sure it was consistent.
2. Operate from the user's perspective.

It is imperative to understand your users' goals and motivations for using the product. If you don't operate from a user's perspective, the evaluation may miss important issues that would improve the experience for your users.
I always have user personas present during the evaluation. We pick one persona and focus on that particular group of users when going through the assessment.
3. Choose a set of heuristics.

As mentioned, there are a few different sets of heuristics, and you can also create your own if you are an advanced evaluator. I always recommend Nielsen's heuristics above, as they are widely used and validated. Other well-known sets include Shneiderman's Eight Golden Rules of Interface Design and Gerhardt-Powals' cognitive engineering principles.
4. Align on severity ratings.

People may consider problems differently, and the severity of each problem could vary from one evaluator to the next. It's essential to sit in a room together and define the severity ratings you will all use before anyone starts evaluating.
I have used the following severity ratings, based on Nielsen's 0-4 severity scale:

0 = Not a usability problem
1 = Cosmetic problem only; fix if extra time is available
2 = Minor usability problem; fixing this should get low priority
3 = Major usability problem; important to fix, high priority
4 = Usability catastrophe; imperative to fix before release
5. Frame the evaluation with a scenario.

I typically frame the evaluation with an overarching scenario the user is going through, for example: "You've found a jacket you like and want to buy it before your discount code expires." Using a task makes it easier to get into the user's perspective and helps the evaluators remember the user's goals.
6. Conduct the evaluation, alone.

Now comes the most fun and complicated part. Sit alone (never evaluate together) and go step by step through each interaction in each section you have decided to assess. Interact with each element and check whether it violates any of the heuristics. I keep a sheet of paper in front of me with the definitions (and examples) of each heuristic. Give yourself a few hours to evaluate properly.
If you are conducting a heuristic evaluation of a full product, it may take one or two days. There are a few ways you can record the heuristic evaluation; regardless of the format, I always include annotated screenshots that visually highlight the violations. One simple way to structure the review is a flat log with one row per violation, as sketched below.
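As a minimal sketch, assuming you keep that log as a CSV with one row per violation; the column names and the example row are illustrative assumptions, not a prescribed format:

```python
# A minimal sketch of one way to log findings during the walkthrough,
# assuming a flat CSV with one row per violation. Column names and the
# example row are illustrative, not a prescribed format.
import csv

FIELDS = ["screen", "task_step", "heuristic_violated", "severity", "notes", "screenshot"]

rows = [
    {
        "screen": "Checkout - payment",
        "task_step": "Enter card details",
        "heuristic_violated": "Error prevention",
        "severity": 3,  # 0-4 scale defined above
        "notes": "Card field accepts letters; the error only appears on submit.",
        "screenshot": "payment_error.png",  # annotated screenshot file
    },
]

with open("heuristic_findings.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```

Keeping every evaluator on the same fields makes the debrief in the next step much easier, because findings can be compared and tallied directly.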
7. Debrief together and prioritize.

Bring together all of the evaluators and their findings. Add up the number of times each issue occurred across evaluators and the average severity of each violation. The more frequently a problem appears and the higher its severity, the higher its priority. For instance, if every evaluator encountered a problem with the search field and rated it a major violation, that issue should get higher priority than a cosmetic or minor issue.
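Here is a minimal sketch of that tally in Python, assuming each evaluator's findings come in as (issue, severity) pairs on the 0-4 scale above; the issue names and ratings are invented for illustration:

```python
# A minimal sketch of the debrief tally: count how many evaluators hit
# each issue and average its severity, then rank by frequency, breaking
# ties with average severity. All data below is invented for illustration.
from collections import defaultdict

# One list of (issue, severity) pairs per evaluator.
evaluator_findings = [
    [("search field ignores Enter key", 3), ("low-contrast links", 1)],
    [("search field ignores Enter key", 3), ("no undo after deleting a draft", 4)],
    [("search field ignores Enter key", 4), ("low-contrast links", 2)],
]

ratings_by_issue = defaultdict(list)
for findings in evaluator_findings:
    for issue, severity in findings:
        ratings_by_issue[issue].append(severity)

# Rank by how many evaluators hit the issue, then by average severity.
ranked = sorted(
    ratings_by_issue.items(),
    key=lambda item: (len(item[1]), sum(item[1]) / len(item[1])),
    reverse=True,
)

for issue, ratings in ranked:
    avg = sum(ratings) / len(ratings)
    print(f"{issue}: {len(ratings)} of {len(evaluator_findings)} evaluators, avg severity {avg:.1f}")
```

Whether frequency or severity should dominate the ranking is a judgment call for the team; the point is to agree on the rule before the debrief so prioritization stays consistent.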
Overall, what you want out of a heuristic evaluation is a clear list of usability problems, which heuristics they violate, and how severely they impact the user. With this information, designers can make quick and informed changes to improve the experience—especially when your team doesn't have the resources to conduct more in-depth studies.
Nikki Anderson-Stanier is the founder of User Research Academy and a qualitative researcher with 9 years in the field. She loves solving human problems and petting all the dogs.
To get even more UXR nuggets, check out her user research membership, follow her on LinkedIn, or subscribe to her Substack.