The requests aren’t slowing down anytime soon. There’s more research that needs to be done. There aren’t more researchers being hired to do it.
It’s the challenge that UX teams face at orgs of all sizes. And it’s the challenge that Cordelia Hyland faced when she joined Thumbtack.
There, she and a team of three taught 200+ employees research strategies applicable to their own work and goals. In the process, they cleared up bottlenecks and created an effective, efficient research process across the company.
We talked with Cordelia about the Thumbtack team’s tactics for training their organization, and why we should pause before we advocate for “research democratization.”
dscout: For a small team, user research at Thumbtack has covered a lot of ground—both geographically and methodologically. What do your processes look like?
Cordelia: On Thumbtack, you can find and hire local pros for pretty much anything: plumbers, wedding photographers, house cleaners, landscapers... Within each of these categories, pros have specific criteria for the jobs they’re interested in and able to do.
Because our service is nationwide, a fair amount of our work is remote research. We're a two-sided marketplace of customers and professionals. And we’re based in the Bay Area, so for us it’s really important to get out of the Bay Area bubble.
For field research, we rely on a lot of co-creation exercises. They're built around the conditional cases of “the jobs you want, the jobs you sometimes want, and the jobs you never want."
Our big recurring question is, “How do we design a scalable system that understands those conditional cases? And how do we understand that when we have so many different service categories—from home remodeling to event planning—each with different information needs?”
Before Thumbtack, I worked as a team of one; I had to think carefully about the way I resourced my organization, so I generally worked on a consultative model.
And then when I came to Thumbtack—as one of three researchers who’d joined the company within the span of a few months—we realized that we had an opportunity to really rethink what and how we researched. So we took on a bunch of different activities to define the practice.
At the time, we didn't have a dedicated research manager. So it was the group of us asking the question, “How can we a) develop a point of view on the kinds of research we do and don't do and b) streamline and standardize the process of working with research for our stakeholders?"
You were trying to democratize research.
To an extent. I hesitate with that term because democratization of research can look like giving everyone the basic toolkit and letting them run with it. I think that's the way a lot of dubious research happens. Leisa Reichelt over at Atlassian shared a great perspective on this on the Dollars to Donuts podcast.
A lot of what we thought through was: What tools and skill sets do we want to share with non-researchers? How do we help our stakeholders understand what good research looks like so that they can go out and do some things on their own? What's research that they should partner with us on? When do we suggest that a project is better led by the research team?
First, we created templates for study planning, participant recruiting, and sharing out flash findings, to standardize the quality bar both for ourselves and others. We also overhauled our research ops processes like participant recruiting and incentive administration.
Then, we started doing research training.
We’d teach what kinds of questions we felt teams could tackle on their own. We’d give examples of questions that we should partner on. And we’d ask people to think carefully about whether working with us was the best way to get the answer.
Initially, we found that a lot of the research requested was pretty late-stage evaluative research. A team would be building something and ask us to come in and do a usability evaluation of it—which didn’t feel like the highest impact way for us to use our research skills. A good example of where it would be appropriate for a researcher to lead research would be when my colleague, Jordan Berry, researched the question, “As a professional, what is the information needed from a customer to arrive at providing a quote or estimate?”
That’s a big and squishy question with a lot involved. So she led field research at different sites around the country, and worked with professionals in various industries to understand the answer to that question and design a system that could help.
Over half of Thumbtack research was led by “non-researchers” last year. That must’ve been a pretty big scaling undertaking for just the three of you. What was your biggest hurdle?
Identifying who our different audiences were and tailoring the training to them.
For example, we knew our designers were partners who were well positioned to actually lead tactical evaluative research. They were already familiar with research and were our closest collaborators. So we focused their training on things like how to run a study.
For different audiences like, say, product marketers, we took a step back and asked them to look at what research is. How is it different from other ways of gathering feedback? What does a good question for research look like and how can the team help you?
How do you empower a team to become more invested in research?
It’s worth stressing that our story is the story of what’s been working for us as an organization of our size and stage of research maturity. But I think most researchers know that they need to pull in their stakeholders.
The way we do it isn’t anything new. We’ve scaled ourselves by scaling our process.
It’s partnering with stakeholders to frame and reframe the questions they have into ones that are appropriately scoped for qualitative research. It’s inviting them to partner with us on study planning, observing, and note-taking. It’s easy things like bringing them into the synthesis process and sharing credit with them on reports so that they own those insights as well.
Your team conducted a workshop at UXPA 2019 on scaling research. What happens when a company or organization doesn’t appropriately scale their research efforts?
The danger is that research gets pigeonholed as only being able to provide value for really limited evaluative work. And when research is brought in late in product development, it leaves a lot of potential impact on the table.
For example, the product team has decided that there’s an issue or that there's an opportunity. Then the designers go and design the whole thing. And just when we’re ready to launch it, then we bring in research. That's a risk. That limits the impact that our research team can have.
Because it’s far too late in the process for research to get involved.
Yes, it's too late and it's too limited. And oftentimes what you end up providing feedback on is whether or not users could navigate or understand the prescribed solution.
But it's not starting with the user, who they are, and what needs they have. Research is most impactful when it's right at the beginning, helping determine what direction teams should explore.
Of course, it’s also integral throughout the product cycle—beginning, middle, and end.
What would be the biggest piece of advice for an organization approaching the process of scaling?
There's no one-size-fits-all solution. But it starts with understanding where the organization sees research fitting in and seeing how you can fulfill that need better.
Also, it’s about showing the value of research so the research team can determine where they can best provide support. For us, the teams really wanted evaluative research. So we don’t say, “We’re not doing evaluative research,” because that’s not partnership. There is a need there, and fulfilling it is valuable.
Instead, we ask how we can speed up and operationalize evaluative research practices so that the work happens as efficiently as possible, and how we can train others to do that kind of tightly scoped, tactical work while we focus on bigger, more foundational, strategic questions that require more complex methods or analyses. Questions other teams didn’t know research could answer.
One big lesson that came out of designing and running that research training program was helping others understand the expertise that we can provide. A surprise hit of the program was a deep dive into cognitive biases and how they impact our research participants and us as researchers.
People loved that because it helped them reflect on where that issue might have come up in past research that they participated in or led. It let our stakeholders know they can rely on us for rigorous study design.
How did you ensure that people left the training confident they could implement what you taught them?
We tried to give people something really actionable to take away. At first we went in and would say, “We’re going to teach you all to do research.” And then we realized that depending on our audience, they might never conduct their own studies.
For our customer support agents for example, we don’t talk about study design. Instead, we talk about cognitive biases, laddering questions, and principles of digging to get at underlying reasons. Then we show examples of how that can provide really actionable feedback.
So rather than saying, “A customer said this feature was hard to use,” they can focus on what the customer was doing and the problem they encountered. We think about how our researcher skills translate to a support conversation so that our agents can provide specific, actionable feedback to product teams.
It was a matter of making sure that we were giving each workshop/training audience a gift that was actually usable for them.
How did your team go about finding out what each person should walk away with?
How we started—and how we eventually learned we needed to readjust—was with our product marketers. These are people hungry for insights, but the insights they look for are pretty different from our product designers.
Marketers think about the value prop of a product, what the user need is, and how the user talks about it in their own words. They need that information so they can write about the features in a compelling way and connect to users.
That’s pretty different from typical designer questions like, “Can the user successfully navigate this flow?”
So we went into our very first training session with three methodologies that we’d pulled together specifically for the product marketers, all about testing value props. And nobody came up to me afterwards and asked, “Hey, can you please show me that emotional map for understanding our users’ competitive landscape?”
What they wanted was answers to their original questions: how can we understand the research you’re doing and how can we make sure you’re asking these questions for us?
We realized we were giving the wrong gift. What the gift needed to be was answering the question, “What are the limitations and benefits of qualitative research for informing product marketing?”
How have the results been so far?
We’re so happy with our results. The biggest proof point for us is that when I joined, we were a team of three researchers figuring out how to build a practice on our own. Now we have an incredible research manager, Rannie Teodoro. We have a full-time research coordinator who's allowed us to go so much faster by handling all of the research operations. And we're hiring additional researchers and interns. So since we showed how we’ve scaled research capacity and impact, we’re seeing a lot of leadership buy-in for the team.
Since we started leading these workshops and providing support to non-researchers, we’ve seen a big jump in terms of research happening. Within the last year, nearly a hundred studies have been conducted, about half of which have been led by non-researchers with research advisement.
That means that half of the studies happening within Thumbtack are led by someone for whom research wasn't their primary job—which is really incredible.
So we really were able to give a research lens to a lot of people. And through the trainings we've helped people understand how researchers do what they do and how non-researchers can answer questions on their own.
We think about the impact as growing our organization's capacity for conducting and consuming research.
Tony Ho Tran is a freelance journalist based in Chicago. His articles have appeared in Huff Post, Business Insider, Growthlab, and wherever else fine writing is published.