How (and Why) to Conduct a Heuristic Evaluation
Heuristic evaluations give you insight into the overall experience of your product, and where it falls short. Here’s how you can get started conducting one today.
User research can come in waves. For a few weeks, you’ll run around talking to many participants and internal stakeholders, then suddenly, boom. Nothing.
For whatever reason (be it budget, buy-in, pandemics, or the season), user research comes to a screeching halt. You feel unproductive, stuck, less motivated, and unable to provide insights for your team. At least, this is a part of the user research ocean I have experienced.
When user research simmers, and you can't speak with users, what can you do to continue providing value?
During this time, one concept I always turn to is conducting a heuristic evaluation.
What is a heuristic evaluation?
A heuristic evaluation is an overall review of your product, website, or app with regard to the user experience. You are looking for gaps in the experience and judging your product/website/app against common usability heuristics.
You are discovering if your product/website/app violates the heuristics. Hence the name, heuristic evaluation.
Many people use a set of ten common heuristics from Jakob Nielsen's 10 Usability Heuristics for User Interface Design:
- Visibility of system status: The system should always keep users informed about what is going on, through appropriate feedback within a reasonable time.
- Match between system and the real world: The system should speak the same language as the users. This means using words and phrases familiar to users rather than jargon. Follow real-world conventions, making information appear in a natural and logical order.
- User control and freedom: Users often get into situations or open features they did not intend to. In this case, they need an "emergency exit" to leave the unwanted state without going through a multitude of steps. Support undo and redo.
- Consistency and standards: Users should not have to wonder whether different words, situations, or actions mean the same thing. Use consistency when building products and features.
- Error prevention: Prevent problems from occurring for the users. Eliminate error-prone conditions, check for them, and present users with a confirmation option before committing to an action.
- Recognition rather than recall: Don't make users memorize information from one step to the next. Minimize the user's memory load by making objects, actions, and options visible. Any instructions or need for the next step should be easily visible.
- Flexibility and efficiency of use: Allow different levels of users to access the product. Novice users and power users should be able to use the product. Allow power users to streamline their work.
- Aesthetic and minimalist design: Keep only the most essential information available for the user. When there is too much information, you diminish the importance of the text. Give the user what they need to get through the experience seamlessly.
- Help users recognize, diagnose, and recover from errors: Error messages should include plain language that easily allows the user to resolve the issue. Indicate the problem and the next steps for a solution. Avoid error codes the user will not understand.
- Help and documentation: Any instructions, help, or documentation the user needs should be easy to search for and find. It should be accessible at any step of the experience. A list of concrete steps that focuses on the task helps the user navigate painful experiences.
See some concrete examples of the heuristics in this article.
Why conduct a heuristic evaluation?
There are some great reasons why heuristic evaluations are helpful, especially during the down times of user research. Overall, heuristic evaluations allow you to:
- Identify and focus on specific issues without having to speak to users
- Discover usability problems with individual elements and how they impact the overall user experience
- Provide quick and inexpensive feedback to designers
- Gather and give feedback early in the design process
- Target follow-up usability testing to further identify and understand problems
- See improvements in important business metrics, such as bounce rate, user engagement, and click-through rate
Heuristic evaluations are not a replacement for usability testing or speaking to users. They provide a foundation for improving the experience on the side of user testing, or before you go into a usability test.
Just because you find issues does not necessarily mean you will get answers. Proper usability testing is essential to ensure you are building the correct solution.
Some challenges come with conducting heuristic evaluations, such as:
- Choosing the appropriate heuristic is extremely important—if the wrong heuristic is picked, some issues might be ignored
- Heuristic evaluations can be relatively time-consuming compared to quicker methods, such as a lightweight usability test. Understanding how to properly evaluate a product takes time to learn
- Problems identified by evaluators can often be false alarms and not priorities to work on
- Multiple evaluators are necessary to ensure consistency and lessen the likelihood of false alarms
- Heuristic evaluations may identify more minor issues and fewer significant issues
- If you are not hiring an expert evaluator, and the in-house UX team is evaluating the product, there may be a level of bias in the results
How do I get started?
When I first started learning how to conduct heuristic evaluations, the best thing I did was practice. I strongly recommend using three to five evaluators. Having more than one evaluator helps avoid false alarms and gives priority to the issues found. If all the evaluators rate something as critical, that issue should go to the top. Here are the steps I use (and share with others) to conduct a heuristic evaluation:
✔ Define what you will evaluate
The first step of any research project is to understand the scope and objectives of the research. Will you be evaluating the entire product? If you look at the whole product, you must assess every page and interaction. You can also break the heuristic evaluation into smaller parts, such as focusing on the registration flow, checkout, or navigation.
When evaluating a previous product, I focused on one section at a time, such as the checkout funnel. Once we assessed each piece separately, we went through the entire product to make sure it was consistent.
✔ Know your user's behaviors and motivations
It is imperative to understand your user's goals and motivations for using the product. If you don't operate from a user's perspective, the evaluation may not pick up on important issues that would improve the experience for your users.
I always have user personas present during the evaluation. We pick one persona and focus on that particular group of users when going through the assessment.
✔ Choose which heuristics you will use
As mentioned, there are a few different sets of heuristics. You can also create your own if you are an advanced evaluator. I always recommend the heuristics above, as they are widely used and validated. Other heuristics include:
- Jill Gerhardt-Powal's 10 Cognitive Engineering Principles
- Alan Cooper's About Face 2.0: The Essentials of Interaction Design
- Ben Shneiderman's Eight Golden Rules of Interface Design
✔ Set up the way you will identify issues
People may consider problems differently, and the severity of each problem could vary from one evaluator to the next. It's essential to sit in a room together and define the different severity ratings of each issue.
I have used the following severity ratings:
1. Cosmetic issue; does not impact task completion
2. Minor usability problem; the user can still complete the task
3. Major usability problem; the user struggles to complete the task
4. Critical usability problem; the user cannot complete the task
✔ Define the task(s)
I typically frame the evaluation with an overarching scenario the user is going through. Using a task makes it easier to get into the user's perspective and allows the evaluators to remember the user's goals.
✔ Conduct the evaluation
Now comes the most fun and complicated part. Sit alone (never evaluate together) and go step-by-step through each interaction in each section you have decided to assess. Interact with each element and see if it violates any of the heuristics. I keep a sheet of paper in front of me with the definitions (and examples) of each heuristic. Give yourself a few hours to properly evaluate.
If you are conducting a heuristic evaluation of a full product, it may take one or two days. There are a few ways you can record the heuristic evaluation. Regardless, I always include annotated screenshots that visually highlight the violations. Here are some ways I structure the review:
- A Google Sheet with each interaction I am evaluating and a list of heuristics per evaluation
- A Google Doc with each interaction and the violated heuristics
- A Google Doc with each heuristic and the interactions that violate that particular heuristic
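Whichever format you choose, each finding reduces to the same few fields: who found it, where, which heuristic it violates, and how severe it is. Here is a minimal sketch of that sheet as rows of data (the column names and example values are hypothetical, not a prescribed template):

```python
import csv
import io

# One row per violation an evaluator records during the walkthrough.
# Severity uses the 1-4 scale defined earlier (1 = cosmetic, 4 = critical).
columns = ["evaluator", "interaction", "heuristic_violated", "severity", "screenshot"]
rows = [
    ["A", "checkout: apply coupon", "Visibility of system status", 3, "coupon-no-feedback.png"],
    ["B", "checkout: apply coupon", "Visibility of system status", 4, "coupon-spinner.png"],
]

# Write the findings as CSV, which imports cleanly into a Google Sheet.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(columns)
writer.writerows(rows)
sheet = buf.getvalue()
print(sheet)
```

Keeping every evaluator's notes in one consistent shape like this makes the final aggregation step mechanical instead of a copy-paste exercise.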
✔ Analyze and summarize the results
Bring together all of the different evaluators and their findings. Add up the number of times an issue occurred across evaluators and the average severity of each violation. The more frequent a problem and the higher its severity, the higher its priority. For instance, if every evaluator encountered a problem with the search field and rated it a major violation, that issue should get a higher priority than a cosmetic or minor issue.
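The prioritization above is simple arithmetic, so it can be sketched in a few lines. This is one illustrative way to rank pooled findings (the issue names and severities are hypothetical; severity uses the 1-4 scale defined earlier, where 1 = cosmetic and 4 = critical):

```python
from collections import defaultdict

# Pooled findings from three evaluators: (issue, severity on the 1-4 scale).
findings = [
    ("search field gives no feedback", 3),
    ("search field gives no feedback", 3),
    ("search field gives no feedback", 4),
    ("low-contrast footer links", 1),
]

# Group severities by issue.
stats = defaultdict(list)
for issue, severity in findings:
    stats[issue].append(severity)

# Rank by how many evaluators hit the issue, then by average severity.
ranked = sorted(
    stats.items(),
    key=lambda kv: (len(kv[1]), sum(kv[1]) / len(kv[1])),
    reverse=True,
)

for issue, sevs in ranked:
    avg = sum(sevs) / len(sevs)
    print(f"{issue}: {len(sevs)} evaluator(s), avg severity {avg:.1f}")
```

In this sketch the search-field problem rises to the top because three evaluators hit it at an average severity of about 3.3, while the cosmetic issue was seen once at severity 1.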
Overall, what you want out of a heuristic evaluation is a clear list of usability problems, which heuristics they violate, and how severely they impact the user. With this information, designers can make quick and informed changes to improve the experience, especially when your team doesn't have the resources to conduct more in-depth studies.
Nikki Anderson-Stanier is the founder of User Research Academy and a qualitative researcher with 9 years in the field. She loves solving human problems and petting all the dogs.
To get even more UXR nuggets, check out her user research membership, follow her on LinkedIn, or subscribe to her Substack.