May 13, 2026


AI-generated research participants are having a moment. As synthetic users have become more sophisticated, the temptation to replace real humans with faster, cheaper, always-available stand-ins has grown.
And honestly? Sometimes that's perfectly reasonable.
But sometimes…it's a really bad idea, and the stakes are high enough that you should know the difference before you run that "study." While synthetic participants can serve as a gut check for lower-stakes projects, trusting AI-generated users for big decisions could jeopardize your products.
So let's break it down.
Synthetic participants are AI-generated personas or simulations designed to approximate how real people might respond to a research stimulus—whether that's a survey, an interview script, a prototype, or a concept test.
They're built from existing data: user interviews, behavioral analytics, demographic profiles, and prior research (ideally, high-quality and diverse versions of this data).
A well-constructed synthetic participant might…
The pitch for using them is undoubtedly compelling: faster cycles, lower cost, no scheduling nightmares, no recruiting lag. Run a study at 2 a.m. on a Saturday? Sure. Test 50 variations in a day? Why not? The flexibility is tough to beat.
Used cautiously, synthetic participants can genuinely speed up certain parts of the research process. A few that come to mind…
Before you recruit real people, consider running your screener, interview guide, or survey through a synthetic participant. This can surface confusing questions, broken logic, or glaring gaps. It's not a replacement for a real pilot, but it's faster than waiting for one.
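A pilot run like this can be as simple as wrapping your screener question in a role-play prompt for any chat-style LLM. The sketch below is purely illustrative — the persona text, the `build_pilot_prompt` helper, and the `FEEDBACK:` convention are assumptions, not a real product API — and the output should only ever be treated as a gut check on your wording, never as participant data.

```python
# Hypothetical sketch: piloting a screener question against a synthetic
# persona before recruiting real participants. Persona details and helper
# names are illustrative assumptions, not part of any real research tool.

def build_pilot_prompt(persona: str, question: str) -> list[dict]:
    """Build a chat-style message list that asks an LLM to answer a
    screener question in character, then critique the question's wording."""
    system = (
        "You are role-playing a research participant with this profile: "
        f"{persona}. Answer the screener question in character. Then add a "
        "line starting with 'FEEDBACK:' noting anything ambiguous, "
        "double-barreled, or leading about the question itself."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

messages = build_pilot_prompt(
    persona="62-year-old retired teacher, low tech confidence, rural broadband",
    question="How often do you use our app's sync feature on mobile or desktop?",
)
# Pass `messages` to any chat-completion client. The FEEDBACK line is where
# the value is: it can flag that this question is double-barreled (mobile
# vs. desktop) before a single real participant hits the confusion.
```

The point of the design: the synthetic participant's *answer* is disposable, but its critique of your question is a cheap first pass before a real pilot.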
Want to understand how your product might theoretically behave for an 85-year-old with low tech literacy in a rural area? Synthetics can help you generate hypotheses about underrepresented groups, as long as you treat those outputs as hypotheses—not findings.
If you need to draft example responses to test how moderators handle difficult participants, synthetics may be of use. They're a creative tool, not an evidence tool.
When you want to poke holes in a strategy or stress-test a concept before spending real research budget, synthetic participants can help you identify the questions worth asking (not answer them).
Here's the thing about synthetic participants: they are trained on existing data. That means they reflect patterns from the past—not the messy, surprising, contradictory reality of people in the present.
That's a serious limitation when conditions and people are changing this fast. You need to know how users actually feel right now, not how similar users felt when the training data was collected.
A few areas in particular where synthetic participants really fall short…
Real people pause, contradict themselves, get frustrated, and feel things. Synthetic participants don't. They produce plausible-sounding responses, but plausible isn't the same as true. And in user research, the unexpected emotional response is often where the most important signal lives.
If your target audience isn't well-represented in training data (niche communities, people with rare conditions, demographics historically excluded from mainstream research), synthetic participants will confidently make things up about them. And that's not just unhelpful; it can reinforce existing blind spots and biases.
Watching someone actually use your product in their real environment is irreplaceable. A synthetic participant can tell you what they think they'd do, but what people actually do is often completely different.
Even real users can't accurately predict what they'll do, which is why observing actual behavior in the moment is so critical.
Synthetics are pattern-matchers. If you're asking them to respond to something genuinely new—like a paradigm-shifting product or a concept that doesn't fit existing mental models—they'll map it to the closest thing they've seen. You'll get plausible garbage. You’ve probably come across this before with normal AI prompting!
As briefly mentioned above, if the research is informing a product launch, a major messaging pivot, a healthcare decision, or anything where being wrong has real consequences, you need real data from real humans. Full stop.
You don’t want to rely on what synthetic users are “assuming” for critical decisions.
Beyond quality, there are a few risks that don't get talked about enough.
This one's sneaky because synthetic research does look like research.
It has sample sizes, quotes, and themes, so it's easy to mistake the appearance of rigor for rigor itself. Teams have shipped products based on synthetic "insights" that had no grounding in reality. The harm isn't always visible until it's too late.
If the synthetic users’ training data was biased (and it was, because all training data is), they will reproduce those biases. But they'll reproduce them smoothly and confidently, without any signal that something's off.
Generating synthetic responses “as people” from specific communities without their input isn't neutral. It can produce misleading representations, and in sensitive contexts like health, identity, and lived experience, it can cause real harm.
Research builds credibility with stakeholders when it's grounded in real human experiences. If synthetic research is ever exposed (and it tends to be), it can damage trust in your entire research practice—not just that one study.
Before reaching for synthetics, take a moment to ask whether any of these approaches could serve you better. Many will get you results in the timeframe you need—without all the added risk.
Platforms like Dscout make it possible to run fast, lightweight research with real people in hours (or even minutes!). Five to eight participants doing a quick diary entry or a media survey can usually give you more signal than 50 synthetic responses and still be inexpensive.
Existing data like customer support tickets, app reviews, past interview transcripts, past research, and behavioral analytics often contain more untapped signal than teams realize. That's real human data you already have!
When you need a fast directional answer, consider talking to people inside your org who interact with customers daily (support, sales, CX). This can be just as quick and effective as synthetics—possibly even quicker—but still grounded in real human experience.
Dscout's diary and intercept methods are built for capturing feedback from people in the moment, without the overhead of traditional qualitative studies. If speed is your concern, these methods deliver it along with better-quality signal than synthetic shortcuts.
We’ve covered what synthetic participants are, where they shine, where they fall short, and the risks to weigh before using them.
So when does it actually make sense to use synthetic participants?
If you’re still not sure, run your project through the decision tree below. It won't make the call for you, but it will help you ask better questions before you commit to a method.

At Dscout, we believe research that doesn't connect to real human experience isn't really research—it's modeling. And models are only as good as the assumptions and data baked into them.
Synthetic tools have a role to play in the researcher's toolkit. But so does knowing when to put the tool down.