If you're in a user research role, my workload last month might sound familiar.
I was leading a Jobs to be Done initiative while running two (vastly different) usability tests. I pulled together 10 one-hour sessions within a week, and had 10 days to wrap the Jobs study—from first interview to final report.
In the end, I conducted 26 sessions within 12 business days. And I needed reports ready to go within that same timespan. Teams were relying on my input to make decisions, and they needed write-ups fast.
While this sounds like a UXR's dream—having your company rely on your research to make decisions—it can quickly turn into a bit of a nightmare, especially if you're a research team of one.
As the role of user research grows within an organization, researchers find their time increasingly in demand. Not long ago, we were left to argue the value of user research—working to convince colleagues of its worth, and to show them how simple and informative it could be to include research in the product and tech process.
Now, our organizations see the value. They want user research on their side. We’ve succeeded in convincing others of the importance of research and we’ve run into a new set of workplace hurdles to hop.
If you’re like me, you’re learning to juggle research at scale (and battling cognitive dissonance and imposter syndrome while you’re at it). Here’s a case for slowing down, and a few techniques you can use to convey that case to your stakeholders.
The problem with research at scale
The biggest problem with user research at scale is that user research isn’t really designed to be done at scale. Like any other type of scientific research, it's a process. You certainly don’t see lab scientists scrambling to get reports out to meet a decision-making deadline. In fact, rushing lab science would probably lead to catastrophic consequences.
I know we all have a time-imposed limit to our work. That's part of every industry. But the onus is on us to advocate for a user-research process that’s fair to what is actually required, as opposed to what the business expects.
There is enormous friction between the craft of user research and what business wants out of it.
Researchers want to move fast and make decisions quickly, but we also want to help colleagues internalize what users are saying. We want them to see the product through the lens of a unique person. That takes time.
Sometimes, explaining this to stakeholders is a numbers game. I’ve audited my research processes and realized the following:
Each one-hour interview takes me about two hours to analyze. In the example above, those 26 research sessions took me 52 hours to examine. Then, I needed another five to ten hours to bring together all the information into a cohesive (but not too lengthy) report.
In total, that's 88 hours of pure research and reporting over two and a half weeks. This doesn’t include any other work or meetings I am pulled into, which generally adds up to ten hours a week, at a minimum.
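The arithmetic above can be sketched as a quick back-of-the-envelope calculation. The figures are illustrative, pulled from my own audit; your ratios will differ, but the exercise of writing them down is the point:

```python
# Back-of-the-envelope time budget for a research sprint.
# Numbers taken from the audit described above.
sessions = 26
session_hours = 1          # each interview runs about one hour
analysis_ratio = 2         # ~2 hours of analysis per hour of interview
reporting_hours = 10       # upper end of the 5-10 hour reporting estimate

interview_hours = sessions * session_hours            # 26 hours in sessions
analysis_hours = interview_hours * analysis_ratio     # 52 hours of analysis
total_hours = interview_hours + analysis_hours + reporting_hours

print(total_hours)  # 88 hours of pure research and reporting
```

Putting a concrete number like this in front of stakeholders turns a vague "research takes time" into a line item they can plan around.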
When you’re already juggling a lot—being pulled between different projects, constantly switching contexts, and feeling the pressure of looming deadlines—“minimal reporting” becomes common.
Delivering insights that are concise, and that help your team make fast decisions, isn’t necessarily a bad thing. But the majority of user research, like all scientific research, does require a significant amount of time to be understood.
As researchers field more and more demands for projects and reports, we have to ask: how are we supposed to meet these requirements? And how are the teams that request these projects supposed to truly internalize the results we generate on their behalf?
Can we do “lean” user research?
When we ask ourselves “why has the tempo of research accelerated?” we need to ask first, “how have our organizations changed?”
Lean and agile frameworks have become commonplace across tech companies, and user researchers have become increasingly embedded within lean and agile teams.
And while there are some user research methodologies that can be accelerated to fit a two-week sprint, much of what we do (or what we should be doing) doesn’t fit into that box. Moving on a fast cycle is excellent for engineers, but difficult for those trying to genuinely understand users.
Teams often think you go into a session and emerge afterwards with insights. They don’t realize the careful craft of deep listening, comparing conversations, looking for trends, evaluating your biases, and writing an impactful report. Without us taking time to adjust expectations, our stakeholders grow to believe we have a sort of “clarity gleaming” super power. But when we’re asked to synthesize at the speed of light, user research becomes a way for teams to take a shortcut—to invent assumptions based on quickly made correlations, opinions, and quotes.
As user researchers, we can be slightly guilty of creating these lofty expectations. I know I’ve pulled all-nighters. I've done minimal reporting to get insights out the door. After having put in so much effort to prove the value of user research, the last thing I want is to disappoint my team. And it gets easy to start believing “some research is better than no research at all.”
But as expectations continue to build, pushing against these patterns and processes will be necessary. Not just for us as researchers, but for the industry as a whole.
How can we set expectations straight?
As much as we might want to, we can’t force user research to be something it isn’t, and we sacrifice the integrity of our work when we coerce it into a framework it’s not made for.
It frustrates me most when I see how much many of us take on as researchers and how quickly we can create (or exaggerate) our imposter syndrome by doing so. We wonder if others can tackle the workload better, or can turn projects around faster. We convince ourselves that maybe we can deliver more, if we only change processes, or get better at our jobs.
In reality, we simply need time for our craft, just like any other role deserves.
Here are my next steps for realigning expectations with myself and others:
- Make a list of what you need to do in order to perform well at your job, and estimate how much time each of these tasks takes, including all of the responsibilities outside of your direct role.
- Know how long it takes you to turn your interviews into a deliverable and put these numbers forward for each project. Explicitly list the number of hours for each project, versus the amount of time you have in a given timeframe.
- When receiving a last-minute, rushed project, be clear with what won’t get done if that project takes priority.
- Ask for more tools (or more of a budget) to aid in the more time-consuming tasks, such as recruiting and coding interviews.
- If colleagues don’t have the time to synthesize and help with the reporting process, have them attend research sessions to take additional notes. That way, you always get another perspective.
- Write up the consequences of when you can’t do your best work on a project. What are the implications for the business?
- Book personal meetings for deep thinking and analysis: time blocks that automatically decline any invites that try to book over them.
- Most importantly, learn how to say “no.” As researchers, we often wear many hats and, sometimes, we need to take them off to do our job effectively.
It can be daunting to go against the grain and stand up for what we need when we have tried so hard to simplify user research. However, we must respect our job for what it really is, not for how we wish it would neatly fit into an external process. It is okay for us to take our time to conduct and analyze our research. You have permission to do good science, and to use your incredible skill set to move your teams in the best direction forward.
Nikki Anderson is the founder of User Research Academy and a qualitative researcher with 8 years in the field. She loves solving human problems and petting all the dogs. Explore her research courses here or read more of her work on Medium.