
GenAI Adds Tension to UXR—But Can Also Bring Opportunity

The researchers we spoke to who have experience in the field of AI sense the enormous potential and pitfalls ahead. Read some of their most astute insights.

Words by Karen Eisenhauer, Visuals by Thumy Phan

AI feels like a sudden storm. For many of us researchers, it seems like AI went from a niche technology that few of us dealt with to being everywhere overnight. And we—just like the rest of the world—are struggling to make sense of the change.

In this moment of confusion, People Nerds turned to its network to get some expert perspective. We spoke to 11 researchers with experience in the field of AI about how they’re seeing the research industry change thanks to this year’s sudden tech boom.

We discovered that this is a tense time for user researchers. They feel caught between their natural sense of curiosity and responsibility, and their organizations’ demands for fast production.

But we also discovered that this is a pivotal moment of opportunity for UX. We have the chance to make an unprecedented impact on the development of new technology, and to cement our value to our organizations in doing so.

In this article, we’ll talk about three key tensions, and how experts are turning them into opportunities:

  1. Pace – Keeping up with tightening development cycles

  2. Focus – Being torn between usability and exploratory research

  3. Values – Being torn between growth and ethics



Upcoming event:

Discuss how to navigate and leverage GenAI with research leaders. Join us for Co-Lab Continued to learn how other research teams are using this technology, areas of opportunity, and things to watch out for when building new tools.


Context: The tension of new technology

Before diving into these three tensions, let’s look at the context for why any of them exist at all. To do so, let’s define some terms:

✔ What is traditional AI?

AI programs that make predictions and decisions based on patterns in existing data.

Examples: Spam filters, recommendation algorithms, and image recognition.

✔ What is Generative AI (or GenAI)?

A kind of AI that generates new content and translates data between formats.

Examples: Image generation, chatbots, music composition, and deep fakes.

✔ What are Large Language Models (LLMs)?

Text-based GenAI algorithms that are trained on large text datasets from the internet.

Examples: Content generation, translation, code writing, and tools like ChatGPT.


While it feels like AI has come upon us all of a sudden, that’s not totally true. Traditional AI has been an integral part of our lives for over 10 years now—more so than many of us realize. In 2017, a PEGA survey found that 34% of respondents thought they directly experienced AI on a daily basis…but in reality, 84% did.

“I think over the years, there's always been some inherent level of AI in everything...When you listen to Pandora, you don't think to yourself, ‘Hey, I'm listening to an AI generated playlist.’ I'm listening to a smart playlist that Pandora is catering to my listening habits, right? That is inherently an AI-based model…That kind of machine learning has been present for years. You take it for granted.”

Todd Hausman, Roblox

GenAI, however, had been purely the realm of science fiction until very recently.

The real breakthrough came in 2022, when the tech startup OpenAI released its LLM, GPT-3.5, to the public via ChatGPT. On a similar timeline, it launched plugins and a developer API that gave organizations access to build their own tools.

Suddenly, advanced GenAI technology was a reality to everyone—users, builders, and researchers—all at the same time.

This doesn’t happen often. Everyone has a brand new technology, and we’re all scrambling to understand what to do with it.


The issue is, researchers and organizations approach “new” with very different mindsets.

We researchers see GenAI as a series of unknowns, curiosities, and concerns. This is a brand new material, and we don’t know nearly enough about it yet, let alone about the implications of releasing it into the world. We want to explore these questions thoroughly before feeling confident building something out of it.

Organizations don’t see it that way. What they see is OpenAI becoming one of the fastest-growing startups in history, challenging some of the biggest giants in the industry within a matter of months. They see a powerful new tool for growth—and a serious threat of being left behind.

These two different mindsets are creating some serious tensions for researchers as they try to adapt to a new research world. The question is, how do we advocate for our curiosities and concerns, while still meeting the organizational desire for action?

In other words, how do we navigate through this storm? And what do we navigate towards?


Pace: Dealing with speed

Organizations are racing to be the first to market with new GenAI tools. The stakes feel too high to lag behind.

“I think it's like we just have this existential threat that if we don't do it, somebody else will. I mean, it's out there now. Anybody can use it…it's just this fear that if you're not the mover, somebody else is going to be the mover, and that's bad for business.”

Anonymous UXR, Duolingo

Timelines are being dictated by how quickly product teams can start experimenting with functional prototypes, as well as how quickly other companies are producing new tools. Innovation is happening faster than anyone—researcher or builder—can keep up with.

“With generative AI, what we're seeing is people want to move more towards that iterative approach to pushing things out to customers faster…There isn't as much lead-up time for things. We have to be able to help meet their needs and come up with different quick turnaround studies.”

Jessa Anderson, ServiceNow

GenAI’s newness has also created a knowledge vacuum. People are rushing to understand not only the user experience, but the technology itself. Researchers are being asked about how GenAI works, and a lot of us just don’t know the answers.

"We've taught people at our company to look to us for answers, and gosh we don't have answers."

Kaari Peterson, Design & Research Leader

There is more to discover than ever before, and less time to do it. Researchers are worried about finding new methods to effectively test GenAI while keeping their practice rigorous and ethical—all while being asked to move faster and faster. Our fear is that if we don’t keep up, we’ll be left behind altogether.

Navigating towards opportunity

“This is a trend that's going to happen. Either we can play a role in helping to guide and shape it and answer teams’ questions, or they're gonna get their answers in other ways.”

Michael Winnick, dscout

The bad news is that we are unlikely to be able to throttle down these fast timelines. The GenAI horse race is in full swing, and trying to step in front of it or slow it down might be as useful as stepping in front of an actual horse race.

The good news is that, if we look past the stress, lots of demand is actually a great problem to have. We’ve been laying groundwork for years to get product and leadership to turn to research for answers, and this is an opportunity to deliver on that groundwork in a highly impactful moment.

More than that, as builders work to figure out how their new material behaves, we have the opportunity to partner directly with them in that discovery. If we make learning about AI a joint endeavor, we can build closer relationships with builders than ever before.

All this hinges on being able to move quickly and authoritatively. We need to become authorities fast, and prove our value up front before anyone writes us off as irrelevant.

How to stay quick

✔ Adapt your methods to be more iterative

Break research down into bite-sized pieces as much as possible, and deliver as often as possible.

✔ Use AI tools to scale your practice*

Researchers are torn on this as a tool, but some of our experts have been using ChatGPT to help with some of the more tedious elements of the research process (a minimal code sketch follows this list):

  • Transcription

  • Generating SPSS syntax or Excel formulas to speed up quantitative analysis

  • Brainstorming question ideas

  • Summarizing interview data

  • First drafts of survey or interview questions

*When implementing AI in your practice, please be careful with your company’s data and user PII. Read up on your company’s policy about data sharing and never give PII to a non-secure LLM.
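
To make the summarization idea concrete, here is a minimal sketch of what an LLM-assisted step might look like. It assumes the official openai Python package and an OPENAI_API_KEY in your environment; the model name and prompts are illustrative, not a recommendation.

```python
# Hypothetical sketch: drafting a thematic summary of interview notes
# with an LLM. Assumes the official `openai` package (pip install openai)
# and an OPENAI_API_KEY environment variable; model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_notes(notes: str) -> str:
    """Return a short thematic summary of anonymized interview notes.

    IMPORTANT: redact names, emails, and other PII from `notes` before
    calling this, per your company's data-sharing policy.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you have access to
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a UX research assistant. Summarize the interview "
                    "notes into 3-5 key themes, each with one supporting quote."
                ),
            },
            {"role": "user", "content": notes},
        ],
        temperature=0.2,  # low temperature keeps summaries more consistent
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarize_notes("P1 found onboarding confusing; P2 wanted clearer pricing..."))
```

Treat output like this as a first draft to verify against your raw data, not as finished analysis.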

How to stay authoritative

  • Watch the AI news in your field closely and keep teams updated on what you discover.

  • Take workshops or watch YouTube videos on how LLMs work.

  • Experiment with ChatGPT to see how it works, and see what it’s good at and bad at in your personal experience.

  • Experiment with new GenAI tools in your field to understand what UI choices are being made by others.

  • Attend engineering meetings and stand-ups to get a sense of how the technology works and what open questions they still have.

  • Be aware of the potential risks and ethical concerns behind building and using GenAI models, and bring them up when relevant. Read more on this below.

  • Read up on other organizations’ previous research on human-centered AI to get ideas on what to study and how. Send these resources to your designers and product planners, too! Consider resources like Google’s People + AI Guidebook and Lennert Ziberski’s “UX of AI”.


Focus: Staying strategic

The next tense question is, what do we research when it comes to GenAI?

Builders’ focus is likely to be on granular issues of performance and usability. They need to ship fast, and they want to make sure what they’re building is functional and legible from a UI perspective.

They are also grappling with new questions of accuracy. GenAI is probabilistic technology—its output isn’t very predictable. So builders need to figure out how to measure and improve accuracy.

“How do you truly validate these massive models?...We don't have a sophisticated way right now of analyzing that as a whole. They're creating these massive generative AI models that are supposed to be applied to everything, but we can't evaluate it on everything…We need a lot of humans to figure out how well it's doing.”

Kevin Johnson, dscout
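
One low-lift way to see that unpredictability for yourself is to run the same prompt many times and score the answers against a known reference. The sketch below is a hypothetical starting point, not an established evaluation method; it again assumes the openai package, and every name in it is illustrative.

```python
# Hypothetical sketch: sampling one prompt repeatedly to measure how
# accurate and how consistent a GenAI model is. Assumes the official
# `openai` package and an OPENAI_API_KEY environment variable.
from collections import Counter

from openai import OpenAI

client = OpenAI()


def ask_model(prompt: str) -> str:
    """Get one sampled answer; nonzero temperature means answers can vary."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,
    )
    return response.choices[0].message.content.strip().lower()


def evaluate(prompt: str, reference: str, n_samples: int = 10) -> dict:
    """Run the prompt n_samples times; report accuracy and answer spread."""
    answers = [ask_model(prompt) for _ in range(n_samples)]
    counts = Counter(answers)
    return {
        "accuracy": sum(a == reference.lower() for a in answers) / n_samples,
        "distinct_answers": len(counts),  # >1 means output varied run to run
        "most_common": counts.most_common(1),
    }


if __name__ == "__main__":
    # Keep evaluation prompts free of user PII.
    print(evaluate("In what year was the transistor invented? Reply with only the year.", "1947"))
```

Even a toy harness like this makes the point: accuracy is a distribution, not a single number, which is why evaluating these models takes sustained human judgment.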

But researchers we spoke to are thinking bigger than that. They’re curious about the implications of human-AI interaction, the strategic benefits and risks, and how their organizations can fit into that puzzle.

“We’re on the brink of this new era. How do we figure out how to insert the human into the AI stuff? How's that going to work?...We've started talking more about human behavior in general.”

Kaari Peterson, Design & Research Leader

“I think, especially in the valley, a lot of times you have technology before you have a problem. And so I think my job is not just to find a problem. My job is to make sure that that problem is solved in the most elegant way possible.”

Todd Hausman, Roblox

This is an important question to answer, especially amidst the GenAI hype. It’s easy to fall into the belief that GenAI models are a “solve all” that can be broadly applied. But in reality, they are targeted tools that—like other tools—can succeed at some things and fail spectacularly at others.

Surveys show that the public also has nuanced opinions about where in their lives they’d welcome AI. Finding a home within this public opinion landscape requires a researcher’s point of view.

But not all leadership teams are on board with the researcher point of view. Some are more interested in getting any GenAI product to their customers, whether or not it fills a user need.

“[Businesses are] having this technocentric view of, ‘This is just an AI, just use it.’ This is what it is. Instead of really saying okay, what do people want from this? Is this what people want or is this just what an engineer wants?”

Mohammad Tahaei, Responsible AI Researcher

Researchers are concerned that, in the short term, organizations will forget or ignore the value of foundational research—either pushing research into a purely evaluative realm or ignoring it entirely.

“I think we are finding that it’s not a given at all that we are an integral part of the process…[Right now] is a weird, acute version of that where people are like, ‘Wait, why do you need to be here again?’”

Anonymous UXR

Navigating towards opportunity

“When a new technology is introduced, there's this really rapid period of growth and change and we're right at the start of that curve for a little while. And the unknowns are very high at this time. Research can provide a lot of outsized value…where things are wide open and you can be wildly off or wildly on.”

Michael Winnick, dscout

Fortunately, history is on our side with this one. We know that when organizations skip foundational questions of user need, they end up with weak adoption and expensive technological bloat.

That means that researchers can actually provide a very high level of value. We can be strategic advisors to organizations and ultimately have ground-level influence on a nascent technology.

“I feel like as UX researchers, we have a really cool opportunity to be a part of creating interactions and experiences that will become pivotal to AI…Like whoever was the person to integrate search into the address bar who had that amazing idea. I feel like you're not gonna go to any sort of Web browser without expecting that ability to be there. And I think we have the opportunity to be a part of creating things like that for AI.”

Jasmine Brown, UX Researcher

Some researchers’ organizations are acknowledging the power of research. For those research teams, the need for strategic-level insight is gaining them more organizational influence than ever before.

“[We are] in spaces that we have not previously been invited to or needed to be at…It's really elevated our role as UX researchers into something that's much more strategic, mindful, and forward-thinking—at a company level versus a product level or within our own research community.”

Katie Schmidt, ServiceNow

To get there, we need to be persuasive about the power of foundational research. And in the meantime, we need to be efficient with our evaluative work, which still needs to happen to maintain our relationships with our product teams.

How to persuade on the importance of research

✔ Put yourself in the room with decision-makers

Be courageous (or even pushy) about being in GenAI strategy meetings. Don’t wait for a polite invitation—ask to be there, and believe you will prove your worth once you’re in the room.

✔ Offer thought-provoking user stories and hypotheticals

Product developers might be overly focused on their “ideal” user experience and the benefits of their project, while overlooking strategic risks. Use your knowledge of your user base to ask, “Have you thought about this?” as often as possible.

✔ Resurface existing foundational research

Go back through your repositories to find old insights that might be relevant to the GenAI push. Bring them to your meetings with builders and leadership.

✔ Run low-lift exploratory research

Do this to get interesting tidbits and pique interest for bigger exploratory projects.

✔ Conduct secondary research

Don’t settle for your primary research alone. Look into published work on human behavior and GenAI as well.

How to get efficient with evaluation

  • Develop new templates for testing the usability and accuracy of GenAI products to streamline your process.

  • Democratize usability processes in your organization.


Values: Being the “voice of the human”

In the face of GenAI as a revenue driver, researchers are feeling the heat turn up on a final historical tension: the conflicting priorities of fast growth versus ethical design.

UXRs have always represented the user, but now it feels like it’s their job to represent humanity itself.

"We were the people in the room who were representing the user, and now we're the people in the room representing the HUMAN, representing humanity. It's sort of elevated who we are representing now."

Kaari Peterson, Design & Research Leader

Ethical concerns run a huge range, from user-focused issues to back-end problems of data sourcing and training.

Concerns raised by UXRs include but aren’t limited to…

  • Transparency – When do users need to be informed of interaction with a machine?

  • Data collection – What guardrails do we put on how data is collected and stored for LLMs? How do we communicate to users what their data is being used for?

  • Training ethics – How was the model trained? Whose labor was used?

  • Privacy – How are we keeping user data stored and safe? How long are we keeping it for?

  • Fairness and bias – Is this model biased? Is every user getting the same experience?

  • Human jobs – How will this automation affect people’s livelihoods, both out in the world and in our company?

And these are the known issues. It’s likely that we haven’t yet uncovered all the ethical consequences of this new tech.

“Right now when it is still nascent and still frankly dangerous technology in that we don't understand it—it is a black box…I don't necessarily think it's wise for us to proceed as if we have all the answers.”

Katie Johnson, Yohana

Doing the right thing feels urgent. But what is the “right thing”? The moral landscape is so complicated that researchers are having a hard time discerning exactly what their responsibilities are, and how to operationalize them in their organizations.

On top of that, product development is almost manically fast right now. Urging caution, or delivering negative feedback, puts researchers in danger of being cast as a “wet blanket”. It can get us ignored, hurt institutional relationships, and even threaten our roles.

“I feel like UXRs are constantly like, ‘You’re the naysayer, you’re the wet blanket,’ and you don’t want to be seen that way around this when there’s so much energy. But at the same time…there are ethical and real human costs to maximizing our efficiency and only thinking in terms of business profit in the short term.”

Meredith McDermott, Duolingo

Navigating towards opportunity

“There's just so much right now that we should feel responsible for and should be able to adequately speak about in order to drive those ethical but excellent user experiences. So that's the biggest shift that I see and it's a very serious one.”

Katie Schmidt, ServiceNow

If you’re at a loss for how to advocate for ethical design in your organization, you’re not alone. When developing AI, it’s important that issues of responsible design, bias, unintended consequences, data privacy, transparency, and other considerations are part of the process—whether UXR takes the lead on that or not.

How to get clear on your values

✔ Establish what “ethical” means to you and your organization

Create guidelines that you can advocate for, rather than just telling people “no.” Other companies are starting to brainstorm guidelines, which you can use as a starting point.

✔ Operationalize your guidelines and socialize them aggressively

What does success look like from an ethical point of view? What do products need to have before they can ship? What can’t they have? Make sure your organization knows the answers to these questions.

How to advocate for ethical use of AI

✔ Make a business case for ethical design

Remind your leadership that a lack of ethical consideration can lead to issues with brand integrity, and ultimately to losing user trust.

✔ Make a user case for ethical design

Let the users speak for themselves. Run research on perceptions, expectations, fears, and impacts, and socialize the findings in your organization. Qualitative data is one of your most powerful tools. Use dscout to source quotes and impactful video reels about the potential human impacts of AI.

✔ Find allies across the organization

There may be others across your org who are also feeling nervous about speaking out. Build a support network, even if it's informal, and keep each other in the loop about GenAI practices across teams.

✔ Get legal on your side

Nobody has more practice in being a wet blanket! GenAI has implications for data privacy and liability that make its practice and development a potential legal issue. Partner with your legal team to get extra firepower when you have to say “no” to something.

✔ Consider your involvement with ethics

Very few of us are trained ethicists, though our community does have plenty of people who are (Google PAIR, UX of AI, and HmntyCntrd, to name a few). Before stepping into this role, be intentional about how much your company needs you to play it, and how comfortable you are doing so.

If you do want to play that role and don’t feel prepared, use stipends or programs at your work to train your team on showing up as ethicists. If you don’t, identify the people who will (engineering, legal, marketing, product) or advocate for a formal ethics role to be added to your organization.


Conclusion: Navigating towards something new

Pace, focus, and values: these are all friction points that have existed in our field before. But this new technology is making those friction points heat up. If you’re feeling pulled to move fast, keep your head down, or even to compromise some of your values, you are not alone.

However, if there’s another takeaway here, it’s that this tension is not only a bad thing. Our instinct around “new”—curiosity, strategy, and some skepticism—is at odds with our organizations’ instincts, but that’s a good thing. I’m probably biased, but I think our point of view is desperately needed in this pivotal moment of technological innovation, even if it causes some agitation for both us and our organizations.

There is also an opportunity for us as a field, not just to survive through this storm, but to thrive. If we can successfully advocate for our sense of “new”, it will positively impact our products and our users. It will also give us the chance to establish ourselves as experts, strategic forces, and ethical thought leaders. We can prove our value and elevate our practice. And we can be ground-level influences on technologies that will be with us for a long time.

To do this, we must have confidence in our “new.” If we focus only on the tensions and difficulties, we put ourselves in a reactive, defensive position. We may navigate through or away from stressors, but we will have no direction of our own.

But if we have confidence in our vision on how GenAI products should be developed, we have something to navigate towards. We can advocate for a point of view where research works closely with builders, where we are included in product strategy, and where the user is honored through a sense of ethics in addition to usability.

If you’re feeling tension in this new AI storm, you’re not alone—but you’re also not alone in building new solutions. Use these resources and thought starters. Let us know on the People Nerds Slack channel what challenges and opportunities you think GenAI offers UXR.

Together, we can all figure out how to navigate towards a “new” that feels right.


See how leading human-centric organizations are approaching AI

Join us for Co-Lab Continued, four days of UXR’s most pressing conversations.

Day three will feature six lightning talks presented by leaders from Amazon Web Services, Duolingo, Yohana, and more. The group will cover topics including:

  • GenAI and ML basics for UX and insights

  • Reflections on designing relationship-first products with and for users

  • Why GPT needs UXR

Plus, three other days of tying research to ROI, building a flexible team, and so much more!

Learn more about the event and secure your spot.

Karen is a researcher at dscout. She has a master’s degree in linguistics and loves learning about how people communicate with each other. Her specialty is in gender representation in children’s media, and she’ll talk your ear off about Disney Princesses if given half the chance.
