How Will Developments in AI Impact UXR? Researchers Weigh In

New tools like ChatGPT will inevitably have an impact on user research. But what that will look like is up for debate.

Words by Kathleen Asjes, Visuals by Allison Corr

When I first started seeing reports on “amazing advancements” in generative AI, I dismissed it as a fad. Not long after, I grew tired of the avalanche of ChatGPT-related posts on LinkedIn and ignored it altogether. But it soon became clear this fad was not going away. And when someone from my co-working group claimed that AI would replace his job in the future, I knew I had to explore it further.

I wondered what impact this technology could have on my research practice. Should I believe the pessimists who think we will all be out of work in a couple of years? Or is AI the ideal research partner we have all been waiting for?

That is why I reached out to my network and asked: what are the use cases for generative AI in user research?

Before I share researchers' thoughts, let's set some definitions.

Artificial intelligence (AI) is intelligence—perceiving, synthesizing, and inferring information—demonstrated by machines, as opposed to the intelligence displayed by humans or other animals.

Generative AI (GenAI) is a type of Artificial Intelligence that can create a wide variety of data, such as images, videos, audio, text, and 3D models. Well-known GenAI examples are ChatGPT, DALL-E and MidJourney.

A large language model (LLM) is a type of AI algorithm that uses deep learning techniques and massive data sets to understand, summarize, generate, and predict new content. The best-known examples are the GPT models that power ChatGPT.

First, what do we think about AI?

Throughout April, I asked every user researcher I encountered whether they were experimenting with generative AI and where they saw potential benefits.

I talked with former colleagues, clients, friends, and senior researchers and research leaders at the Learners UXR leadership summit in Toronto where this topic was widely discussed. After 30+ conversations I noticed a couple of recurring themes:

Our hopes and beliefs

  • Partnering with generative AI will speed up our work, allowing us to spend more time ensuring that our findings and insights have impact.
  • Those who do not embrace this technology as a valuable research resource and partner will be outpaced in productivity by those who do.
  • With our knowledge about the needs of human beings as well as ethics and privacy, we have a big role to play in this development.

Our fears

  • How well equipped are we to detect—and deal with—bias in generative AI?
  • What can we do to safeguard privacy and ethics?
  • What does this mean for the future of user research? Which parts of our processes become obsolete, and what are the potential implications for our careers?

I know it might sound a bit silly to be afraid of AI, but I think we have every right to be at least a bit concerned. Making sense of human emotions is our bread and butter! And we—user researchers—all know that synthetic users are no substitute for interviewing real users.

But do the people who usually hire us know this as well?

How does AI fit into our research practice?

In my discussions I noticed that, depending on risk appetite and personal situation, researchers take different approaches to incorporating generative AI.

Let’s go over three different use cases, and the thinking behind them.

Use case 1: Untempted and unconcerned

Some researchers I spoke with were neither tempted by nor concerned about AI. Sarah, the principal researcher at a small consulting firm, said she was certain they would continue working as before.

“I am sure this might enable some companies to do fast and easy research themselves, but the companies who find this attractive are not the clients I seek to work with.”

I found it quite refreshing to hear someone so confident about this and wanted to know more. Did she not see this as a way to speed up their work or increase their capacity to serve more clients? But no, at this point in time she saw no reason to start incorporating AI into their practice.

“The types of challenges we work on are complicated and with this human complexity in mind I am confident our abilities outperform AI tools. Our clients seek to understand humans. Machines won't do this for us.”

Use case 2: Explorative experiments

The majority of researchers I spoke with were exploring the benefits AI could bring to their practice. The reasoning behind their choices was intriguing. Several non-native English speakers shared that they use ChatGPT to improve their writing.

It helped them to compose screeners and research summaries with ease, and made them more confident about their ability to communicate with stakeholders. Less time spent on crafting their messages meant more time for the actual research.

Others were incorporating generative AI as a type of sparring partner. As Lauren, a senior user researcher in the UK health sector shared with me:

“I very recently started using it (generative AI) for scoping and preparation. For example, I have asked for summaries of key product features across several competitors—a manual and time consuming task for me. I have also drafted surveys and interview scripts, then asked ChatGPT to suggest questions based on my research objectives, which I compare against what I already have, and pinch a good question I might have missed.”
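Lauren’s workflow is easy to sketch in code. Below is a minimal example of asking an LLM to suggest interview questions from a set of research objectives, assuming the OpenAI Python SDK (v1.x); the model name, prompt wording, and objectives are illustrative, not her actual setup.

```python
# Minimal sketch: ask an LLM to propose interview questions from research
# objectives. Assumes the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY
# environment variable; objectives and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()

objectives = """
- Understand how small clinics schedule follow-up appointments today
- Identify pain points in the current reminder workflow
"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; use whichever model you have access to
    messages=[
        {"role": "system", "content": "You are an experienced UX researcher."},
        {
            "role": "user",
            "content": "Suggest eight open-ended interview questions for these "
                       "research objectives:\n" + objectives,
        },
    ],
)

print(response.choices[0].message.content)
```

The value is less in the specific prompt and more in the step Lauren describes afterwards: comparing the suggestions against your own draft and keeping only the questions that genuinely add something.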

In this specific case, Lauren is a UXR team of one. She doesn’t have research colleagues to bounce ideas off of or review things with. ChatGPT somewhat fills that void for her. But, when I asked her what else she wanted to explore, she was adamant she wouldn't extend her use of ChatGPT beyond the preparation phase of research.

You might assume this was related to concerns about privacy or bias, but no.

“I personally get a lot of value out of sitting with the data and analyzing it myself. I can draw connections or spot trends that might play into prior insight in a way that ChatGPT won't be able to do.”

For her, the period of reflection is where insights are born.

This really resonated with me. I also enjoy the phase of analysis and synthesis; I would almost say it’s the best part of my job. Going through the data makes me aware of its richness, and taking in all the information—even what might not be relevant right now—lets me draw on it with stakeholders when working on whatever comes next.

Using a tool that makes me more efficient and impactful is of course amazing, but what I don’t want is for the critical thinking and creativity this work requires to fade away. I love digging through data, doing the analysis, and sitting with it until the deeper insights emerge.

Use case 3: Uncurbed enthusiasm

Only a couple of researchers I spoke with shared that they were consulting ChatGPT for every aspect of their research process—from desk research to recruitment prompts and even analysis. For Ben, an independent research consultant, the main argument to do so was speed.

“It just saves me so much time, it would be foolish not to use ChatGPT. Tasks that would usually take me a full day, like generating a good moderation guide, are now done in less than two hours. I still ‘work’ on it and don’t accept everything it feeds me, but it is so much easier to get started.”

As a fellow independent consultant, I totally understand the attraction of speed. Every new client requires you to get up-to-speed in a new domain. Why not include some artificial support to get there faster?

When I shared my concerns about data privacy and bias, these researchers were quick to state that the time they saved was spent on fine-tuning and checking their outcomes. If you collect the data yourself (through moderated interviews, for example), you may have enough of a grasp on the raw data to spot what might be missing from the outcomes. However, this doesn’t solve the privacy issue. It’s hard to foresee the consequences of feeding sensitive data to a tool like ChatGPT.

For me personally, doing analysis with AI tools feels like cutting corners. I am not sure that we do ourselves—or our practice—a favor by increasing the distance from the people we’re trying to understand in a deep and meaningful way.

The price of speed

The argument that kept coming up during these conversations was that generative AI allows us to work more efficiently. This is, of course, attractive: extra time could allow us to scale our research efforts and potentially even increase our impact.

Like any researcher out there, I find this music to my ears. Doing more with less? Yes, please! Speeding up our work is appealing, but let’s spend a minute reflecting on the potential harm.

Intelligence without understanding

Once you understand how large language models like ChatGPT are trained, you realize they’re not as intelligent as we think they are. LLMs are trained to answer questions based on probability. The conversational nature of the interaction makes the answer feel smart and meaningful, but the tool is not really “thinking” about the answer. This is well explained in Christopher Roosen’s write-up, which elaborates on concepts from a paper by Kyle Mahowald, Anna A. Ivanova, et al.

As he puts it, “They’re full of all the information in the world, without any sense of being in the world that lets them evaluate the quality of the information and their own production.”
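To make the “probability, not understanding” point concrete, here is a toy sketch of my own (nothing like ChatGPT’s real architecture or scale): a program that produces fluent-looking text purely by sampling the most likely next word from counts it has seen before.

```python
# Toy illustration of next-word prediction by probability alone. This is a
# drastically simplified stand-in for an LLM: it has no model of meaning,
# only counts of which word followed which in its tiny "training data."
from collections import Counter
import random

corpus = ("users want fast answers . users want simple tools . "
          "users hate slow tools .").split()

# Count which word follows which (bigram counts).
followers = {}
for prev, nxt in zip(corpus, corpus[1:]):
    followers.setdefault(prev, Counter())[nxt] += 1

def next_word(prev):
    counts = followers.get(prev)
    if not counts:
        return "."
    words, weights = zip(*counts.items())
    # Sample in proportion to how often each word followed `prev`.
    return random.choices(words, weights=weights)[0]

word, sentence = "users", ["users"]
for _ in range(6):
    word = next_word(word)
    sentence.append(word)

print(" ".join(sentence))  # e.g. "users want fast answers . users hate"
```

The output can read like a plausible statement about users, yet nothing in the program knows what a user is. That, at vastly greater scale and sophistication, is the gap Roosen is pointing to.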

It is also important to understand what type of data ChatGPT uses as the basis for the tasks you ask it to perform. This is something Jason Godesky describes in his article “ChatGPT cannot do user research”: “When you ask ChatGPT to write a persona, its response is not based on careful analysis of a rich dataset that you’ll never have access to; it’s based on forum posts and blogs.”

The posts and blogs scraped to train LLMs are not carefully curated and most likely not something you would have used if you were doing your own research. By asking a GenAI tool to perform research tasks, you basically invite garbage data into your process without knowing its actual source or validity.

Concerns of bias

Digging deeper into how LLMs are trained means that we have to understand the training data.

AI models and tools are based on historical data available online. I think Lisa Dance nails it when she states that Historical Data = Historical Problems on Blast. Using historical data to generate decisions or services means carrying everything we didn’t get right in the past into what we produce for the future.

Researchers, of all people, know very well that underrepresented voices and ideas are the ones we need to include in research. We are trained to spot bias and understand that it sneaks into our work easily unless we guard against it. Do we still have that capacity to spot bias if we outsource (some of) our work to tools built on potentially biased datasets?

For me it was one thing to “know” that something is not quite right, but another thing to “see” it. This particular map stuck with me while I was reading up on the training of LLMs.

Map: where the benchmark datasets used to train AI systems originate. Source: https://2022.internethealthreport.org/facts/

This map shows where the benchmark datasets used for training come from. More than 60% of all datasets come from the United States. For me, a researcher based in Europe, this is crucial to understand and take into account. Of course, some of that skew reflects training for English-language understanding, but what about all the cultural nuances we need to produce meaningful and unbiased work?

On the flip side, some people claim that generative AI will help us reduce human bias. The founder of Ween.ai (one of many new AI tools for the analysis of qualitative research) told me that the outcomes of analysis by their AI are more balanced than what researchers create themselves.

According to her, taking the “human” element out of analysis is one of the major benefits of using generative AI. She was genuinely surprised when I shared that I enjoy the analysis phase; until then she had only encountered beta users who were happy to shave days off their research cycles.

New tool or new partner?

Savina Hawkins, a researcher who has worked with AI for years, sees generative AI as a valuable research partner, not just because of increased efficiency, but also for the sake of having a smart and creative sparring partner to work with. And when you review all the different tools available, there seems to be a “partner” for almost everything we do in research. Are interviews the last frontier?

This made me think about what I would like my new, intelligent research partner to work on. I have too much experience to get excited about support in the preparation or analysis of my research. But the one thing I have not found a great solution for is research repositories. So far I have not come across any company with a fully functioning repository, unless they employed a full-time ResearchOps specialist or research librarian.

My dream is a solution for the following situation:

You start a new job, and during the first couple of weeks your new colleagues enthusiastically tell you about previously published research reports. One person shares a Google Drive full of content; the next points you towards a ton of interesting databases in a different space, as well as great insights shared via 2-3 Slack channels, plus an amazing (but scattered) insights library on the company intranet. You have basically inherited a mess, and have to spend a lot of time making sense of all this data to avoid reinventing the wheel.

What if you had a tool that could combine all this data and allow you to query all these sources with your research questions? A tool that would show you the way and uncover new insights that were not apparent before? This is the kind of research partner I would like to have access to!
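For what it’s worth, the bones of such a tool are not exotic. Here is a minimal sketch, assuming past research has already been exported as plain-text files from Drive, Slack, and the intranet into local folders (all folder names are hypothetical); a real product would layer embeddings or an LLM on top of an index like this rather than relying on plain keyword matching.

```python
# Minimal sketch of a unified "ask the repository" index. Folder names are
# hypothetical; it assumes each silo has been exported to plain-text files.
from pathlib import Path

SOURCES = {
    "drive": Path("exports/google_drive"),
    "slack": Path("exports/slack"),
    "intranet": Path("exports/intranet"),
}

def build_index():
    """Collect (source, filename, text) for every exported document."""
    index = []
    for source, folder in SOURCES.items():
        for path in folder.glob("**/*.txt"):
            index.append((source, path.name, path.read_text(errors="ignore")))
    return index

def query(index, keywords):
    """Return documents mentioning the keywords, ranked by number of matches."""
    hits = []
    for source, name, text in index:
        score = sum(text.lower().count(k.lower()) for k in keywords)
        if score:
            hits.append((score, source, name))
    return sorted(hits, reverse=True)

if __name__ == "__main__":
    index = build_index()
    for score, source, name in query(index, ["onboarding", "drop-off"])[:5]:
        print(f"{score:3d}  [{source}]  {name}")
```

In practice, most of the effort goes into getting clean exports out of each silo in the first place, which is exactly the mess described above.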

Where do we go from here?

Whether you use generative AI in your research practice or not, it remains crucial that we gain knowledge about this technology. Not just for “staying up-to-date” with trends or ensuring that we grasp how the lines between humanity and technology are being challenged. The reality is that many of the products and services we do user research on will incorporate this technology. If it’s not already in your product manager’s backlog, it will be soon.

What I have heard during my conversations with research leaders is that we’re not shying away from this. On the contrary, we welcome a role in shaping AI technology by contributing our knowledge about humans and ethics. I spoke with Ali Maaxa (research strategist at Maaxalabs and founder of the AI Governance Group) about this. She is firm about the need to include user research in the further development of generative AI.

“Now is the time for UX researchers to transpose the way we work from largely design-oriented research to partnering with engineering and product strategy.

“We can ensure that the immense risk that comes with creating highly technical tooling such as AI is measured with the questions of what makes us human, what makes various kinds of work ‘good,’ where to preserve the agency and imagination in this world of speed and scale that drive UX research.

“We need to work with engineers, educate them about what we bring to the table, and partner with them to balance invention with understanding. I think we have an opportunity to reunite (UX) research with R&D in industry right now.”

The sense of urgency in what Ali shared with me is well founded. This is not a time to wait and see what happens. It is time to lean in! We have what it takes to build more human, more helpful, and more ethical AI + ML systems.

After this exploration, it’s clear to me that there are benefits to including generative AI in our practice. The promise of working more efficiently is especially appealing. Having a knowledgeable “sparring partner” you can ask any question, or hand any task, without feeling like you are bothering anyone is also a big bonus.

Soon there will be a myriad of new tools at our disposal, all claiming to make our lives easier. Where do we even start with evaluating these options? We need to educate ourselves to be able to make the right choices.

At the same time, I have to acknowledge that most of what I learn comes from interacting with other humans. Talking to peers in the industry is what helps me grow as a professional and human being. This is why I started these conversations about AI with other researchers. I wanted to learn more, and now look forward to seeing how user research will play a role in shaping this technology.

Last but not least, I am still left with one burning question: what will my job look like five to ten years from now? Only time will tell how this pans out for us in user research, but to end on a positive note, I will share ChatGPT’s own recommendation on the topic:

It's important to note that while AI can be useful in several stages of the user research process, it should not replace human judgment and expertise entirely. Instead, AI should be used as a tool to complement and enhance human decision-making, enabling researchers to work more efficiently and effectively.

Kathleen is an independent research and insights leader, passionate about empowering growth for people and businesses. She enjoys building research capacity from scratch and has spent these last 15 years leading teams of researchers, designers and product managers on their journey towards working insight-informed. 


She is the founder of a new peer-to-peer coaching network for research leaders, called 'Grow & Connect'.
After growing up in the Netherlands she started to explore the world. Kathleen has lived in South Korea, Australia, Sweden and is currently residing in France. Outside work, Kathleen enjoys rowing on the Seine, running after her three gremlins and probably should be spending more time trying to learn French.
