April 29, 2021
I am guilty of saying how much easier usability testing is than other user research methods. We talk about usability testing as an essential, foundational skill, one of the first concepts you should learn as a researcher. Compared to generative research and ethnography, usability testing is a breeze.
That's true only to a certain extent. While it's easier to read from a list of tasks and pointed questions than it is to work from an open script (as in generative research), that doesn't capture the entirety of usability testing. Usability tests are simpler to run, but that doesn't mean they are easy.
Writing usability testing tasks is still one of the most complex parts of the research process for me. Getting these tasks right is imperative as they color the rest of the study and dictate how good your data will be.
I used to write usability tests quickly, without giving much thought to the content. I would drop participants into a prototype without much context (I thought this was the right way to do things) and give little to no instruction.
The participant would be trying to answer my questions based on little knowledge or understanding, which does not reflect real-world usage. These sessions would sometimes end with the team more confused than the participant and leave us without much insight.
I knew I had to change something, but I didn't know what to do or how to do it in a way that still produced valid and reliable data. Eventually, I got my bearings and created a process for writing solid usability testing tasks.
Before I foray into the world of constructing usability tasks, it is crucial to understand what usability testing tasks are and when it is helpful to use them.
With usability testing, we aim to understand how someone uses an interface by measuring effectiveness, efficiency, and satisfaction. We can gauge these metrics by having people attempt realistic tasks on an interface, whether that is a prototype or a live product.
You ask participants to complete these tasks and observe what happens when they try. Usability testing in this way can help us understand if people can complete a task (effectiveness), how long it takes them (efficiency), how many errors they encounter (effectiveness, efficiency), and how they feel after using the product (satisfaction).
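If it helps to see what those measurements can look like once a study wraps, here is a minimal Python sketch, not taken from the article, for rolling per-participant observations up into task-level numbers; the Session fields and the 1-5 satisfaction scale are assumptions you would adapt to however you actually log sessions.

```python
# Minimal sketch (not from the article) of rolling per-session observations
# up into task-level metrics. The Session fields and the 1-5 satisfaction
# rating are assumptions; adapt them to however you actually log sessions.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Session:
    completed: bool         # did the participant finish the task? (effectiveness)
    seconds_on_task: float  # how long did it take? (efficiency)
    errors: int             # wrong turns, dead ends, misclicks (effectiveness/efficiency)
    satisfaction: int       # post-task rating on an assumed 1-5 scale (satisfaction)

def summarize(sessions: list[Session]) -> dict:
    """Summarize one task across all participants who attempted it."""
    return {
        "completion_rate": sum(s.completed for s in sessions) / len(sessions),
        "avg_seconds_on_task": mean(s.seconds_on_task for s in sessions),
        "avg_errors": mean(s.errors for s in sessions),
        "avg_satisfaction": mean(s.satisfaction for s in sessions),
    }

if __name__ == "__main__":
    observed = [
        Session(True, 95, 1, 4),
        Session(False, 240, 3, 2),
        Session(True, 130, 0, 5),
    ]
    print(summarize(observed))
```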
Not every single usability test needs to be like this, so it is important to decide if you need to go through this process. I have had sessions where I wanted feedback on concepts or wanted the participant to explore the prototype to see what they would do without direction. For those instances, I didn't use usability testing tasks.
However, I have started using these tasks more frequently as they help me find problems, quantify them with metrics, and understand how big each problem is. To get these measurements and valuable insights, we need to be mindful of how we run the study and the types of questions we ask. We need to ensure we are writing tasks that don't bias the participant or skew the data.
Throughout the years, I have honed this skill through a lot of practice. If you are looking to write great usability tasks, that is my first piece of advice: practice, practice, practice. These are the steps I go through when constructing my usability testing tasks:
*A common question I get is how many tasks should be in one usability test. It depends on the complexity of tasks and how much time you have with the participant. For a 45-60 minute session, I generally include five to seven tasks. By including a dry run in your process, you can get a good idea of how many tasks you can fit into the session.
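To make that arithmetic concrete, here is a tiny, purely illustrative Python sketch; the ten-minute intro/wrap-up overhead and seven-minute per-task estimate are assumptions, so swap in the timings from your own dry run.

```python
# Hypothetical back-of-the-envelope helper (not from the article) for deciding
# how many tasks fit in a session. The 10-minute intro/wrap-up overhead and
# 7-minute per-task estimate are assumptions; replace them with timings from
# your dry run.
def tasks_that_fit(session_minutes: int,
                   intro_and_wrapup_minutes: int = 10,
                   minutes_per_task: float = 7.0) -> int:
    usable = session_minutes - intro_and_wrapup_minutes
    return max(0, int(usable // minutes_per_task))

print(tasks_that_fit(60))  # 7 tasks in a 60-minute session
print(tasks_that_fit(45))  # 5 tasks in a 45-minute session
```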
I see a few recurring mistakes that pop up during many usability tests I observe. I have to actively tell myself to avoid these mistakes (which is where practice comes in) as they are easy to slip into. Keep these in mind when writing your script and during your practice run:
All of this theory can seem straightforward until you sit down to write tasks. I seemed to hit writer's block constantly when trying to combine all these steps. If you are hitting that same wall, here are some examples to help you break through.
For these examples, imagine I work at my favorite company, Dog Wishes, which helps rehome dogs from shelters by offering starter kits, training sessions, and subscriptions that help new dog parents adjust and excel.
Task goal: Find a training session to help with puppy separation anxiety
Task scenario: Your puppy is displaying separation anxiety symptoms (chewing, barking). You want to find a trainer who is available in New York City on May 7th, 2021, who specializes in separation anxiety and charges less than $75 per hour.
Task goal: Purchase a subscription for puppy food
Task scenario: You just got a puppy, and you are looking to buy dog food. You want to find and buy a monthly subscription to Fresh Dog Food for puppies.
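If you run tasks like these regularly, it can help to keep each goal and its scenario paired in the discussion guide so moderators and note-takers work from the same script. Here is a purely illustrative Python sketch using the examples above; the structure is my own assumption, and a spreadsheet works just as well.

```python
# Purely illustrative (not from the article): keeping each task's goal and
# scenario paired in one place, using the Dog Wishes examples above. The
# structure and field names are assumptions; a spreadsheet works just as well.
tasks = [
    {
        "goal": "Find a training session to help with puppy separation anxiety",
        "scenario": (
            "Your puppy is displaying separation anxiety symptoms (chewing, barking). "
            "You want to find a trainer who is available in New York City on May 7th, "
            "2021, who specializes in separation anxiety and charges less than $75 per hour."
        ),
    },
    {
        "goal": "Purchase a subscription for puppy food",
        "scenario": (
            "You just got a puppy, and you are looking to buy dog food. You want to "
            "find and buy a monthly subscription to Fresh Dog Food for puppies."
        ),
    },
]

# Read only the scenario aloud; the goal stays with the note-taker so everyone
# records success against the same outcome.
for task in tasks:
    print(task["scenario"], "\n")
```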
Remember your participants and audience! Realistic tasks alone won't get you all the way there; you also need to recruit the right participants for the study to get the best data and insights!
Nikki Anderson-Stanier is the founder of User Research Academy and a qualitative researcher with 9 years in the field. She loves solving human problems and petting all the dogs.
To get even more UXR nuggets, check out her user research membership, follow her on LinkedIn, or subscribe to her Substack.