Words by Michael Winnick, Visuals by Addie Burgess using Midjourney
My parents are the opposite of canaries in a coal mine. For a long time, my father didn’t know how to use an ATM—he would ask my mother to “get money out of the wall.” So when they ask me about a tech trend, I truly know that it has hit the zenith of the hype cycle.
A few months ago, they started asking me about AI.
This makes sense because over the spring and summer, AI was everywhere, all at once. It was on NPR. It was in the NY Times almost daily. It was a hot topic in Congress, on pulpits, and in classrooms. Gartner was calling it the "most hyped tech product ever." And that was before November's palace intrigue broke through a very busy news cycle with the news that Sam Altman was no longer the CEO of OpenAI (and now is again).
Generative AI feels different because, in a lot of ways, it is. It is an amazing technology—a magical material from the future. We can all sense that it will change the user experience and the world in profound ways.
But in our rush to the GenAI future, we seem to be ignoring some of the important lessons of our recent past. It feels as if a willful collective amnesia has set in, returning the industry to the "we know what they want" days of yore. Product teams are barely thinking. Instead, they are experimenting with little insight, little focus on end-user problems, and no broader context.
I was at an AI conference recently and a guy was wearing a t-shirt that said, “I hallucinate more than GPT.” I think he was speaking for the whole industry. I am loath to make predictions, but here’s one: billions of dollars are being wasted right now by companies blindly diving into GenAI.
GenAI is not tech as usual
“Why the hell would I use that?” I've found myself questioning GenAI's product-market fit time and time again. But I've realized that there's a deeper problem at hand—GenAI is funky technology.
It’s idiosyncratic and not terribly predictable. It's non-deterministic, which is what makes it unique and interesting. It doesn't follow the rules, and it'll be damned if it gives you the same output twice. GenAI would do great in a rap battle. But as a tool we can rely on, one that delivers consistency, GenAI has a lot of work to do.
The creators of GenAI know this all too well. They recognize that this new technology has a problem telling the truth. It likes to make things up that sound factually accurate. It can be biased and respond in harmful ways. They have to go through a painstaking process called alignment to try to prevent these issues from creeping into the user experience.
Even so, getting GenAI to consistently follow rules is like childproofing your home. No matter what gates you put up or corners you wrap in soft materials, your toddler will inevitably figure out a way to bump their head.
GenAI doesn't like following rules, but neither do the people who use it. They ask bots all kinds of questions that machine learning (ML) engineers and their outsourced teams haven't predicted, and they pose some seriously hard questions.
“Why do you think Google waited so long to launch GenAI?” As a former Google exec told me, the answer lies in the distinction between search engines and AI. If someone asks a search engine, “What were Stalin’s good qualities?” it will link you to offensive content made by others. But ask GenAI, and you could receive a conversational, concise answer in the “voice of the company.” Your user could then go on to say, “Google told me…”
UXR to the rescue
This scenario raises one of dozens of questions that naturally arise in the wake of a technology breakthrough like GenAI. These questions span from the profound to the mundane, from the thorny to the theoretical. If your organization isn’t asking these questions or spending time and resources to answer them, you really should be.
Luckily there is a group of trained experts that you can lean on to do this work for you: UXRs.
UXRs are trained to see the world through the end user's lens and to push teams to confront some of their core assumptions and challenges. These skills are most valuable in times of technology transition, when we are all on the steepest part of the learning curve.
Why? Because our collective understanding is still so shallow. If you were around when the iPhone launched, think about how we use it now, and how smartphones have disrupted major industries and completely altered the experience of being human. It's pretty mind-blowing, and we're just at the beginning of that curve with GenAI.
You may hear someone say, “We just don’t have time for research. We need to ship.” And that is where they are very wrong; even ChatGPT knows this:
"Building a complex GenAI without user research? That's like baking a cake without a recipe and hoping people will eat it. Good luck with that gamble! 🎲😉"
The thing about shipping real things on real ships is that the crew has to make sure the ship is ship-shape. You don’t just throw things on a raft into the great blue sea and hope for the best. Some organizations and researchers get this and are showing great leadership by making sure UXR is actively engaged in GenAI development, but they still seem like the exception rather than the rule.
Copilot is an incredibly challenging and highly successful GenAI product. The team at Microsoft used research extensively to make sure that the product would meet the needs and very high bar of developers.
Many mature organizations are committed to doing rigorous user research around their AI products. Products like Duolingo Max were shaped by feedback gathered on dscout and have been very successful. To see how Duolingo created a unique, fun (and non-awkward) tutoring experience with GenAI, check out Meredith McDermott’s session at Co-Lab Continued.
A common complaint about involving UXR in iterative tech development, especially with newer tech, is that it slows things down. This doesn't need to be the case; in fact, we have a prime opportunity to flip that script.
An example of how to approach GenAI research
Getting into the field to understand GenAI doesn’t need to take months of planning. With tools like dscout Express, you can explore hypotheses and get real-world insights in hours.
We recently ran an Express mission on how people use ChatGPT versus more traditional search products. In a little less than an hour, we had recordings and verbatims, and we were able to draw a pretty clear insight: ChatGPT shines as a thought partner, while Google Search acts as an information broker. They don’t really “compete” in most use cases.
In another study, it took us less than 24 hours to understand what types of questions people were asking in their first encounters with the most well-known GenAI services. The answers surprised us with their range: everything from recipes for roasted chicken to the fate of our political system.
Given the generalized nature of GenAI, tools like ChatGPT and Bard make incredible sandboxes for prototyping interactions. Almost any type of organization can use them to simulate more domain-specific use cases. Set up your own mission to capture the questions people ask and the answers they seek specific to your domain.
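As a minimal sketch of that prototyping idea: before building anything custom, you can wrap a domain persona around a general-purpose chat model and feed it the real questions captured in your research missions. The persona, question, and function names below are illustrative assumptions, not a prescribed setup; the resulting messages can be sent to any chat-completion API.

```python
# Sketch: simulate a domain-specific assistant with an off-the-shelf chat model.
# Persona and example question are hypothetical, for illustration only.

def build_prototype_messages(domain_persona: str, user_question: str) -> list[dict]:
    """Wrap a participant's question in a system prompt that simulates
    a domain-specific assistant, so researchers can test realistic
    interactions before any custom model work begins."""
    system_prompt = (
        f"You are {domain_persona}. "
        "Answer concisely, admit uncertainty rather than guessing, "
        "and decline questions outside your domain."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_question},
    ]

# Example: a banking-app persona paired with a question a participant
# might ask during a research mission.
messages = build_prototype_messages(
    "a helpful assistant for a retail banking app",
    "Why was my card declined at the grocery store?",
)
print(messages[0]["role"])  # system
print(len(messages))        # 2
```

Because the prompt-building step is just plain data, researchers can iterate on personas and guardrail language without touching any model infrastructure.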
UXR also needs GPT
While I’ve spent a lot of time in this piece writing (ranting?) about why GenAI badly needs UXR, I also think the inverse is true. If the line of logic above tracks, GPT will create a lot of questions about humans that UXRs can help answer. In other words, GPT will generate demand for UXR at a time when the field needs it.
The UXR field has grown in tandem with technology shifts. Go back to 1995 and the entire field could have fit in the ballroom of an airport Marriott. Fast forward to today, and we would fill a pretty good-sized arena (though maybe not pack it like the Swifties could).
Increased demand is welcome, but it’s merely the start. Our field has long faced the challenge of high expectations paired with limited capabilities and resources. UXRs have struggled to tie our work to business results and to keep up with the speed of product cycles. I firmly believe that GenAI is something we can harness to make our field faster, more relevant, and more successful through new methodologies and new approaches to analysis and delivery.
But we can only take advantage of this technology if we choose to jump in. I worry that the inherent skepticism of our field could leave many practitioners on the sidelines.
If you want to be in the game, the best things you can do are:
Engage your curiosity
Learn everything you can about the tech (don’t let yourself be satisfied that it’s some kind of magic; it’s not)
Anticipate questions your team will have and proactively go after them
Get hands-on with engineering
And call out the positive alongside the challenges
Help imagine and shape the future as a first-string special-teams player, not an armchair quarterback.
Instead of operating in fear that AI will replace us, we need to identify ways it can support us and remove some of the busy work to make us faster. Should AI be doing high-level analysis or synthesis for us? Absolutely not. Could it help save hours by summarizing and tagging open-ended text, voice, and video responses? Yes, and it's coming soon to a dscout near you.
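To make the "AI handles the busy work" idea concrete, here is a deliberately naive keyword-based tagger for open-ended verbatims. It is a toy stand-in for what an LLM-assisted pipeline would do with far more nuance; the tag names and keyword lists are assumptions for illustration, not a real research taxonomy.

```python
# Toy sketch: auto-tagging open-ended responses with keyword matching.
# A real pipeline would use an LLM; tags and keywords here are hypothetical.

TAG_KEYWORDS = {
    "pricing": ["price", "cost", "expensive", "cheap"],
    "trust": ["trust", "accurate", "hallucinate", "wrong"],
    "speed": ["fast", "slow", "quick", "instant"],
}

def tag_verbatim(verbatim: str) -> list[str]:
    """Return every tag whose keywords appear in an open-ended response."""
    text = verbatim.lower()
    return [
        tag for tag, words in TAG_KEYWORDS.items()
        if any(word in text for word in words)
    ]

print(tag_verbatim("It answers fast, but I don't trust it to be accurate."))
# ['trust', 'speed']
```

The point is not the matching logic but the division of labor: the machine does the repetitive first pass over hundreds of verbatims, and the researcher keeps the judgment calls of synthesis and interpretation.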
This technology is coming whether we like it or not, so why not have ideas when stakeholders ask, “What should we build with GenAI?” We can identify opportunities to leverage this tech in places where our users are open to interacting with it and set up guardrails to prevent harm and reduce risk—both to the end user and the organization itself.
With trust built from being highly engaged with users and across disciplines, we can find ourselves in a better position to influence when thornier questions of ethics or consequences arise.
The unpredictability of GenAI and the implications that come with it should convince leadership to take a step back and say, “Research is critical to this team.” Not because research needs to happen “first” or slow things down; rather, the tech simply can’t be built responsibly or profitably without us.
Jakob Nielsen puts it bluntly when he says that “you can either be the windshield or the bug” in reference to UX’s role in GenAI. In the case of UXR, I’d like to amend that statement. Let’s be the GPS. Let’s show organizations efficient paths forward that help them reach their destination with fewer accidents and minimal traffic, and in the process make amazing experiences that change people’s lives for the better.
In terms of how this tech can help the research process, the team at dscout is excited to be working at the forefront of making GenAI tools that work with you to reduce your burden and amplify your impact. If you have thoughts on this, I’d love to hear them.
Michael Winnick is the CEO and founder of dscout. Since the platform's founding, research leaders have tapped Michael’s personal passion for harnessing context-rich, human insight to drive innovation.
Michael previously served as gravitytank’s managing partner, steering the innovation consultancy through continuous growth for nearly a decade, prior to its acquisition by Salesforce. He’s led product development at Bay Area start-ups and media companies, including WIRED.