Conversations

The Meta-Discursive UX Practice

Taylor Klassman on the reflexivity of researching researchers and the trends she's seeing across an in-demand and maturing field.

Words by Ben Wiedmaier, Visuals by Thumy Phan


UX researchers often have personal intersections with the products and experiences they work on: they're users of the platforms, fans of the brands, and consumers of the services. Taylor Klassman is the in-house UX researcher for dscout, a remote experience research platform.

This role gives her a unique vantage point: a researcher conducting research with researchers to improve the experience of a research platform (exhale). We sat down with Taylor to unpack the nuances, opportunities, and ongoing learnings she's gleaned from this meta-role, and what it might mean for the practice broadly.

dscout: As a team of one, you cover a lot of ground. What kinds of work and questions have you supported to date?

Taylor: Since I'm a shared resource across all of our product teams at dscout, my time is split amongst them and I do a handful of projects for each dscout product every quarter. The projects range in depth and time commitment depending on the nature of the studies.

I work on sprints that align with our development team, and it's a combination of generative (what's the problem space?) and evaluative (how might we best solve for it?) work. The topics run the gamut. I like doing generative research more, as I'm sure many researchers would say, because it gives me a chance to really understand a particular space, usually a very small part of the research journey, in an in-depth way.

Because I'm a team of one, I've been putting more time into democratization across my stakeholders: product, design, customer success, and engineering. A lot of my time has been focused on creating the groundwork to make that successful and beginning to train folks on research best practices. Honestly, much of my time lately has gone into setting up the scaffolding to scale the practice here, both for the work I'm already doing and for projects I can't yet get to because of bandwidth.

Informally, I'm also a resource for teams like sales and marketing who might want to check language and resonance. They might check with me to ensure their talk tracks, collateral, and other communications are accurately representing the research capabilities and in a frame that will make sense to our audience.

What does that democratization practice look like today at dscout?

I would say the research practice is very democratized within the product team. The designers now own probably 90% of the evaluative research that we do, partly because that work is clearer in structure and gives them a chance to evaluate their own work. It's important to collaborate, certainly, but they're leading the charge with setup and question/task development; it's more plug-and-play with the templates I'm creating with and for the design team.

The work that I'm holding onto is generative research and really early product strategy, the more formative work. The goal of democratizing evaluative research is that I then have more time for that foundational, product-strategy work.

I'm really excited to do that, which is why I feel OK relinquishing some of the evaluative work...but it is sad. It is sometimes hard for me to keep my mouth shut, too. I'm not completely removed yet because I'll find myself saying, "I don't think you should do that." I think there's still going to be some consultation that I'll do with the designers. The upskilling and mentorship is ongoing.

There's also all of this lowercase "r" research that we do at dscout that I think is pretty unique. Our sales and customer success teams talk to our clients every day and the feedback that they share with us becomes an important input to the work that I do. So it's research in the sense that I do secondary research on what they've brought to us, but it doesn't have the label of “research” on it in the same way that the work that I do does.

With our success team, the researchers and account managers lead all of the debrief sessions after a project is done. I think in a world somewhere, that could be work that's led by a UXR and I'm sure there are plenty of organizations that do that. But I think because dscout is so research-minded, it makes sense for those account-facing teams to run those sessions. We trust that they're going to bring the feedback back to our team for us to use as an input.

The data from sales and customer success gauges how strong the wind is and what direction it's coming from for a potential project. We then use that to set boundaries and a path forward, and we use the frequency of the feedback as a bellwether for movement as well.

We inventory all user feedback collected by sales and success, but on a scrappy team we need to prioritize and act shrewdly. So if something is being brought up a lot, I add it to my discovery roadmap and start conducting secondary research on it. Then, if it aligns with internal goals and priorities, I might run some primary research on the topic. The input from customer-facing teams is invaluable for keeping that roadmap fresh.

Customer-facing stakeholders are critical to me when I do decide to conduct research. They're who I bounce protocol ideas off of and gut-check directions with because again, it's just me and they're so much closer to the customers who I ultimately want to solve for and help with my work. They have a great sense of who my participants should be for any given study.

Democratization has helped me feel like I'm on a bigger team, even though it's still only a team of one. The involvement of other teams also creates an incentive to continue helping, because they see me working on the issues their customers are reporting and then they get to report on the good news (when we squash a bug or create a new workflow), which strengthens the relationship between the clients and us.

You recently conducted research on analysis in dscout. Here we get to the nuances of your role: investigating a critical aspect of research for a research platform. What was that like?

I really liked the generative work that I did that led to the tagging updates, for two reasons. One, analysis is such a big part of my job, but it's not an area of focus for dscout. I think we're getting there, but we are still developing robust analysis tools for our qual data inputs. Our platform is fun to use for collecting data, but there can be a moment where customers (who are researchers, too) look wide-eyed at the data and feel overwhelmed about where to start. So it was really fun to be given the space to say, "No, this is a huge part of the researcher user journey."

Even as a researcher, I have plenty to learn about best practices...it's a little bit like the wild west. There are tools you can use to help structure how you analyze, but that's where researchers' personalities really come in.

Personalities can also come into contact with organizational and stakeholder preferences for speed, outputs, and so on. Some companies and teams have strict, pre-set, deductive tagging frameworks, while others are free to organize and make sense of the data however they see fit, more of an inductive approach. The tagging functionality has to support both of those use cases. I feel like that tagging work was the first time where I was like, "Whoa, researchers have diverse personalities with their work, and this is a place where we clearly saw that." And that's a really hard thing to solve for in the tool because of that nuance and diversity in approach.

...and do you ever lift or borrow ideas from your participants because—again—they're researchers like you?

In the analysis work, I've definitely tried out what I’ve learned and incorporated a couple of things—this is super relevant to how I use dscout. I've talked with over a hundred researchers at this point and the way that they make dscout work for them is so cool. I've definitely co-opted what they've done and what I’ve learned from them for my own work.

For example, even though photos and videos are hallmarks of our platform, I don't typically leverage those kinds of questions. I have definitely learned from internal folks and our clients about the power of using a video question. It doesn't just have to be something that could have been an open-end; you can do a lot of really interesting things with video questions beyond simply sparing participants from typing out an answer.

I'm still learning a lot about the versatility of photo/video prompts. I was initially concerned about participants' staging, filters, and other elements in their responses to a visual question. It can be difficult to gauge whether or not they feel comfortable sharing and when they're most likely to give a longer vs. shorter answer. But a video question opens the space for the participant to influence their data in a really powerful way that I don't think I was fully aware of before now.

I think the biggest output benefit is the highlight reel that you can use with video questions. Research participants can be so authentic with their responses that even the best writers aren't going to be able to describe exactly how they're feeling the way that their face can.

You're able to analyze their words, body movements, and expressions. You almost have three pieces of data that are concurrently coming at you to make sense of, but that complexity makes for richer data, so it's worth the effort. I'm starting to really lean into video for those reasons.


How does the environment, the context of a research company, affect your work? Is the literacy around "research" among your stakeholders something that affects your work in any way?

We definitely do have that literacy, and it is a bit of a double-edged sword. People have a cursory or underlying understanding of research as a concept and process because of their interactions with customers, but maybe not a ton of practice actually doing research.

They understand a lot of the vocabulary, and that's really helpful in the trainings I've done; some things I can just gloss over, like types of research and stages of research, because those concepts are gleaned from working with our customers. Folks at dscout who are interested in doing their own research are eager to learn how to do it "right," and that can lead to a feeling of imposter syndrome, especially if "research" isn't in their title.

In those moments of uncertainty, I can really educate on the odds and ends that only a front-line practitioner running sessions and designing studies would experience: forms of bias to be on the lookout for, the role of pauses and silence in encouraging participant responses, or trusting but verifying responses using other prompts. These things that most practicing UXRs know are where I'm educating most often.

This is the thing about qualitative research: you are asking humans to report back to you about something. Anytime you're asking somebody to report back to you, whether it's about themselves or how they interact with the world or whatever it is, it's like looking at Instagram; they get that second to consider their response, and that's something to expect.

I think that's a hard truth to learn as a researcher because you're trained that it's on you to build the trust, that you're the one who's untrustworthy. But qual is a two-way street and that mutual trust and openness can be very hard to establish, especially in a short amount of time.


What are some UX discipline-wide trends you're seeing?

The trend is really the diversity in approach and how folks' personalities come through across parts of the process. UX research draws from so many different places, and that diversity is evident in the way two researchers might approach the same question. This comes back to the idea of researcher personalities—not personas for dscout, but proclivities for the work they do regardless of the platform or tools they use. It's in those moments when the process isn't standardized that personality shines through.

Rigid, academic research is absolutely important, and we need it in a lot of the spaces where it exists. But why not? When you're evaluating a UI where nobody's life is at stake, try something crazy and see what kind of data you get back and what you can do with that data.

For example, there's a subset of folks who are very committed to the "by the book" ways of conducting research. It's not that they're not creative, but they like the rules as they were explained to them and enjoy the systematic process of repeating them across questions. I'm not that way at all. I'm a lone-wolf, scrappier researcher who stretches those "rules" even when doing so might not be methodologically sound. And I think that's a reflection of the diversity in backgrounds. You might have someone with a PhD and someone who used to work at a coffee shop on the same team, and how they approach a question will likely differ.

I think that's the appeal of UX: it's still evolving and maturing, so the creative license and ability to play is still there. We're not in an academic lab (most of us) and so we can be flexible. Some UXRs are also creating enjoyable experiences where the outcome is fun or happiness (along with session usage), and the stakes feel different for that kind of work.

The personality might come through in the toolkit they "carry," the approaches they're used to leveraging. For example, I know some researchers who would never do a card sorting activity, not because they don't think the method is sound or because they aren't able to do it; it's just that it isn't their style. I think there are definitely hard-and-fast rules about when to employ certain methods, but it's outside of those hard-and-fast rules that the style shows.

It's still up to the researcher to say, "No, I think it would be really cool if we did a moderated session where the participant is co-creating the prototype with us." Lots of researchers will be like, "Hell no, I don't want to do that. It's hard to organize and the data that comes back is weird." But some researchers, who maybe are more design oriented, would gravitate towards something like that. I definitely think how researchers choose their methods and then what kind of spin they put on that method totally shows that researcher’s personality.

There are times when I wonder about the future of UX as design and product departments start to take on more and more research responsibilities. There are designers who are very well-trained in evaluative research, and then there are strategic researchers focused on workflows and higher-level questions. The latter group might be folks with a market research background, who know how to "sell" things internally to and with stakeholders, and academically trained researchers, who have the quant rigor needed to execute strategic work requiring larger sample sizes. Again, that's my reflection on today's UXR, who might be thinking strategically but might not have the bandwidth or resources to sample large numbers of people.

I don't know exactly what the changing industry means for the future of UX, but I think in some ways it's going to get more rigorous, though not necessarily because it's becoming more academic. I think it will happen in the same way design has gotten more rigorous, with more design ops fueling how each individual designer works. These evolutions might create some friction, but that's the hallmark of a maturing field, and I'm excited to be part of it as it grows.

Ben is the product evangelist at dscout, where he spreads the “good news” of contextual research, helps customers understand how to get the most from dscout, and impersonates everyone in the office. He has a doctorate in communication studies from Arizona State University, studying “nonverbal courtship signals”, a.k.a. flirting. No, he doesn’t have dating advice for you.
