October 9, 2023
Let's quickly see if people can "do stuff" on our product.
All you have to do is write some tasks. Easy peasy…right?
Usability testing and task writing can feel deceptively easy. Trust me: I've fallen into the trap of quickly writing usability tasks that confused participants and produced data the team couldn't use.
There is an art and science to crafting usability testing tasks that give your participants the context they need and your team the insights to make better decisions.
In this step-by-step guide, we’ll review all the components that go into writing an excellent usability testing task and how to do it.
Before diving into the how-tos, it's essential to cover something that is often overlooked in usability testing.
What is a usability testing task? We use the word as if we know exactly what it means. And, while task is a common word, what does it mean in the context of usability testing?
A usability testing task is a series of steps the user has to perform to accomplish a given goal.
This definition is hugely important to pay attention to, because one of the biggest mistakes I see in usability tests is letting a participant land on a page and explore it with no goal.
It’s rare for people to go to a product or service with no goal. How often have you visited a website homepage just to explore it, with nothing in particular in mind?
When our usability test tasks lack clear goals, even loosely defined ones, we end up with data that can be biased and unreliable. The feedback you get from participants might not reflect their real-world behavior.
So, our tasks have to have relevant goals.
Usability testing is about observing and sometimes measuring participants completing tasks on a product or service. To get the most relevant data, your tasks should be as close to real-world experience and usage as possible.
For example, if I love to read, am looking for a new book, and land on your product, I’m likely there to browse and purchase a book. If you instead ask me to do auxiliary tasks, like finding and signing up for your newsletter, browsing careers at your bookstore, or looking up your special bookstore credit card, you won't get the most essential data from me.
Sure, there might be a use case for those tasks, but depending on the persona (or group of users) you are optimizing your experience for, you must pick tasks carefully.
How do you pick the best tasks for your usability test?
The first step in picking the right usability tasks is knowing who your users are.
Like the example above, if I came to your website to find and purchase a book, you would want to ensure you tested my particular flow. However, if I were seeking a career at a bookstore, I would have a completely different use case.
If you have different personas or groups of users, pick who you’re targeting with your usability test, because it will allow you to create tasks tailored to their specific needs and goals.
Based on who you’re testing, list the potential tasks your chosen user group might perform while interacting with your product or service.
The best way to create this list is by triangulating data from:
Using the book website as an example, you might list out tasks such as:
As you can see, there are many tasks on the above list. Not all tasks are equally important, so defining who you’re talking to before writing your usability testing tasks is essential.
When picking tasks, I recommend choosing the critical tasks for users to complete that get them to achieve their primary goal.
So, if my main goal is to come on to your website and purchase a book, then the essential tasks for me would include:
Depending on if I knew which book I wanted to buy, you could also have me browse by genre or popularity.
In terms of how many tasks you should have in your usability test, it depends on how long you have with your user and how complex the tasks are. If you have 60-90 minutes with a user for a moderated test, aim for 10-15 tasks. If you have less time, you’ll have to cut the number of tasks.
Now that we understand how to pick the top tasks we want the user to go through, it's time to write them!
Before we get into writing the tasks, let's look at what we should avoid when writing our usability testing tasks.
I see a few recurring mistakes during many usability tests I have observed. I have to actively tell myself to avoid these mistakes (which is where practice comes in) as they are easy to slip into.
If you’re trying to get users to perform an action, don't include the words from your interface in the task. Using interface words makes tasks easier for participants because it leads them straight to the correct answer.
Instead, use synonyms of the words. If you’re trying to get someone to subscribe to your newsletter, ask them, "How would you get more information via email?"
I love fiction writing and am guilty of this mistake. Sometimes, I get carried away with my scenarios, and suddenly, the participant has been leading this unbelievable life that has brought them to this product.
The participant has to read through the scenario details to complete the task, so elaborate details that aren't conducive to the task can skew your data. For example, I once asked someone to demonstrate how they would purchase train tickets.
I came up with this scenario: "You wanted to go on holiday to Spain to sit on the beach because work and life were stressful, but you couldn't find the perfect connection, etc." Although most participants laughed at the scenario, which might have been relatable, it took us out of the real-world situation.
I have made this mistake several times. Avoid sensitive topics like religion, weight loss, politics, and health. Instead of making the scenario about the participant, make it about the subject.
Avoid scenarios of "blaming" the participants or putting them in an awkward situation. Also, consider that you don't know the participant's history, so try to avoid holidays such as Mother's Day or Father's Day, as these could also be triggering to participants.
In some usability tests I have observed, the moderator adds loaded language to the task, such as telling the participant to choose the option that gives them an "awesome discount." Keep your words and descriptions as neutral as possible.
As we've seen, an essential part of a usability testing task is having a goal, but what other components go into writing an effective usability testing task?
When I write and teach usability tasks, I use the following formula:
Action Verb + Object + Context + Goal + (Optional) Constraints + Endpoint
I use this formula for each task I identified and prioritized in the above list. So, let's build some examples based on the prioritized list:
You love reading horror fiction and just heard Stephen King recently released his new book, "Holly" (context). Using www.barnesandnoble.com (object), search for (action) his newest book.
You've decided to purchase "Holly" (context). Using this page (object), add the book to your cart (action).
You're ready to complete your purchase (context) of "Holly." From your cart (object), begin the checkout process (action), but please stop just before entering payment details (constraint).
With this formula, you can easily ensure you give the participant relevant context and that your usability tasks have a goal and intention.
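If you keep a library of tasks for reuse across studies, the formula also lends itself to a simple template. Here's a minimal sketch in Python; the function name and wording conventions are my own illustration, not part of any real tool.

```python
# A minimal sketch of the task formula as a reusable template.
# All names here are hypothetical illustrations, not a real library.

def build_task(context, obj, action, goal="", constraint=""):
    """Assemble a usability task from the formula's components:
    Action Verb + Object + Context + Goal + (Optional) Constraints."""
    parts = [context, f"Using {obj}, {action}"]
    if goal:
        parts.append(goal)
    task = " ".join(parts).rstrip(".") + "."
    if constraint:
        task += f" Please {constraint}."
    return task

task = build_task(
    context='You\'ve decided to purchase "Holly".',
    obj="this page",
    action="add the book to your cart",
    constraint="stop just before entering payment details",
)
print(task)
```

Writing tasks this way keeps each component explicit, so it's harder to accidentally ship a task with no goal or context.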
Let's look at a few more examples outside of Barnes & Noble.
Task: Using the website www.asos.com (object), imagine you are looking for a red summer dress for a wedding in a month (context). Find the most suitable dress for this occasion (goal) and add it to your cart (action), but do not make a purchase (constraint).
In this task, you could give additional context, such as size, price, or style, if you wanted to see the participant play with filters.
Task: You would like to receive notifications for new meditation exercises that come up (context). Locate (action) the area in the Headspace app (object) that will allow you to change your notification preferences (goal).
Task: You've just received an email with a Google Doc link (context). Click (action) on the link (object) and give your colleague the feedback that the date on page three needs to be changed to October 31st (goal).
Now that we've seen some suitable tasks, it's helpful to review bad tasks, understand why they fall short, and see how to rewrite them.
Task: "Please test our website at Airbnb.com and tell us your thoughts."
Why it's bad: This task is overly broad and lacks the most crucial part of a task: the goal. It doesn't give any context or direction and could lead to vague and unreliable data.
Improved task: Imagine you are planning a weekend trip to New York City with a friend from October 28 - October 31 (context). Use Airbnb.com (object) to find (action) an apartment to rent (goal) that is under $200 a night (constraint).
Why it's better: This task touches on the crucial components we covered above and provides the participant with a clear goal and context so they can complete the task.
Task: "Click around the app and see if you encounter any problems."
Why it's bad: This task lacks context and any sort of goal. Users don't come to your product or service to click around and find problems (most of the time, at least).
Improved task: You are creating a new dating profile on our app (context). Open the Bumble app (object) and complete the basic onboarding information, including uploading a profile picture (action).
Why it's better: The improved task provides a precise scenario and a specific user goal, ensuring the participants know what they are doing during the task.
Task: "Test the checkout process on our site."
Why it's bad: This task doesn't specify which actions the participant should take during checkout or give any context. Users don't go to products or services to test them!
Improved task: You've added a laptop and a mouse to your cart (context). From this screen (object), complete your purchase (action). Please stop right before confirming the payment (constraint).
Why it's better: The new task provides a clear context, a defined goal (completing a purchase), and specific instructions on what the participant should focus on—resulting in more actionable feedback.
There are a few key differences between qualitative and quantitative tasks, and they mostly come up in the structure and flow of the tasks.
With qualitative usability testing, you can:
Example of a qualitative usability testing script:
Introduction: Hi, I'm Nikki! Thank you for participating in this research session for BestBuy.com. We want to understand your experience while interacting with certain parts of our website. I will give you a few activities to do, and while you are doing them, please think out loud so I can understand your entire process! Remember that this isn't a test; we genuinely want to understand your personal experience. Let me know if you have any questions.
Task 1: Finding a 55-inch smart TV
Scenario: Imagine you are in the market for a 55-inch smart TV. Your budget is $600. Using BestBuy.com, find a 55-inch smart TV within this budget.
Follow-up questions:
Task 2: Checking the return policy for TVs
Scenario: Before you decide to purchase the TV, you want to check the return policy of the TV you're interested in. Using BestBuy.com, find the return policy for TVs.
Follow-up questions:
End of session questions:
As you can see, in qualitative usability testing, you look for the participant's constant feedback as they think out loud and explain their thoughts. You can probe during the task, asking why they performed a particular action or clicked on a specific area, and after each task, you ask follow-up questions to understand their experience better.
On the other hand, with quantitative usability testing, you want to:
Here is that same script geared for quantitative usability testing:
Introduction: Hi, I'm Nikki! Thank you for participating in this research session for BestBuy.com. We want to understand your experience while interacting with certain parts of our website. For the next 60 minutes, I will ask you to perform five different activities. I will give you all the relevant information you need for the activities. You can tell me when you're done with the activity, and I will ask you to rank your experience. Remember that this isn't a test, so there is no one right way to do anything. Let me know if you have any questions.
Task 1: Finding a 55-inch smart TV
Scenario: Imagine you are in the market for a 55-inch smart TV. Your budget is $600. Using BestBuy.com, find a 55-inch smart TV within this budget.
SEQ: Overall, how difficult or easy was the task to complete?
Task 2: Checking the return policy for TVs
Scenario: Before you decide to purchase the TV, you want to check the return policy of the TV you're interested in. Using BestBuy.com, find the return policy for TVs.
SEQ: Overall, how difficult or easy was the task to complete?
End of session UMUX survey:
Follow-up questions:
In the quantitative usability test, you keep the participant focused on the task so you can reliably measure the metrics you defined, such as time on task or task success. If you have follow-up questions, ask them at the end of the session!
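Once the sessions are done, metrics like task success rate and average SEQ score are straightforward to summarize. Here's a hedged sketch of what that analysis might look like; the participant data below is invented for illustration, not from a real study.

```python
# Illustrative summary of quantitative usability metrics.
# The rows below are made-up example data, not real study results.

results = [
    # (participant, task, success, time_on_task_sec, seq_1_to_7)
    ("P1", "Find TV", True, 95, 6),
    ("P2", "Find TV", True, 140, 5),
    ("P3", "Find TV", False, 210, 2),
    ("P1", "Return policy", True, 60, 7),
    ("P2", "Return policy", False, 180, 3),
    ("P3", "Return policy", True, 75, 6),
]

def summarize(rows, task):
    """Return (success rate, mean time on task, mean SEQ) for one task."""
    sub = [r for r in rows if r[1] == task]
    n = len(sub)
    success_rate = sum(r[2] for r in sub) / n
    mean_time = sum(r[3] for r in sub) / n
    mean_seq = sum(r[4] for r in sub) / n
    return success_rate, mean_time, mean_seq

rate, mean_time, mean_seq = summarize(results, "Find TV")
print(f"Find TV: {rate:.0%} success, {mean_time:.0f}s avg, SEQ {mean_seq:.1f}")
```

With per-task numbers like these, you can compare flows against each other or against a benchmark from a previous round of testing.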
One question I'm often asked is whether users should pick their own tasks to make the scenarios as realistic as possible. If realism is your goal, I recommend conducting a walk-the-store interview to see how participants have used a product or service based on a real-life scenario.
This approach will enable you to see how they use a system "in the real world" and watch the participant work more "naturally" compared to the scenarios and tasks of usability testing.
However, you can still capture that evaluative component in walk-the-store interviews. You watch participants complete their tasks and note where they run into pain points. When you see a pain point, you dig in to understand it better, as you might in a qualitative usability test.
The best thing you can do is practice writing usability tasks with the formula above and conduct dry runs with your colleagues. That is the number one way I built my confidence as a usability tester, and how I teach others usability testing! It takes time to master the science of the usability task, but with time, you can become an expert.
Interested in seeing how dscout can help you get started with usability testing? Check out our new features on dscout Express.
Nikki Anderson-Stanier is the founder of User Research Academy and a qualitative researcher with 9 years in the field. She loves solving human problems and petting all the dogs.
To get even more UXR nuggets, check out her user research membership, follow her on LinkedIn, or subscribe to her Substack.