Synthetic Users and Digital Clones: A UX Researcher’s Honest Take

2 April 2026 - Emma Kirk


There’s a conversation happening across our industry right now that we think deserves a clear-eyed, honest response – not a dismissive one.

Synthetic users. AI personas. Digital clones. If you work in UX research, product development, or insight-led design, you’ll have encountered these terms by now. And if you’re in market research, you may well already be using them.

The question isn’t whether these tools exist or whether they have value. They do, on both counts. The real question – one we’ve been exploring carefully at User Vision – is what they can and cannot do, and where the line sits between useful augmentation and misplaced confidence.

First, let’s be clear about what we’re actually talking about

The terms “synthetic users” and “digital clones” are often used interchangeably, but they describe meaningfully different things.

Synthetic users are AI-generated personas built from large datasets of behavioural patterns, demographics, and interaction data. They aren’t modelled after any specific real person – they’re statistical composites designed to represent how a user type might respond. Think of them as interactive, queryable personas: you can ask them how they’d react to a feature, stress-test an interview guide against them, or explore a problem space before committing to primary research.
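To make the "statistical composite" idea concrete, here is a deliberately toy sketch — not any vendor's implementation, and every trait name and probability below is invented for illustration. A synthetic user is sampled from segment-level aggregate data, then "asked" how it would respond to a feature via a crude rule-based stand-in for the generative model a real tool would use.

```python
import random
from dataclasses import dataclass

# Invented aggregate data for illustration only: the share of a user
# segment exhibiting each behavioural trait.
SEGMENT_STATS = {
    "prefers_mobile": 0.7,
    "abandons_long_forms": 0.55,
    "uses_accessibility_settings": 0.12,
}

@dataclass
class SyntheticUser:
    """A statistical composite: traits sampled from segment-level data,
    not copied from any real individual."""
    traits: dict

    def react_to(self, feature: str) -> str:
        # Rule-based placeholder for the generative model a real
        # synthetic-user tool would use to produce a response.
        if feature == "long signup form" and self.traits["abandons_long_forms"]:
            return "likely to abandon"
        return "likely to complete"

def sample_user(stats: dict, rng: random.Random) -> SyntheticUser:
    # Each trait is an independent coin flip weighted by the segment rate.
    return SyntheticUser({trait: rng.random() < p for trait, p in stats.items()})

rng = random.Random(42)  # seeded for reproducibility
cohort = [sample_user(SEGMENT_STATS, rng) for _ in range(1000)]
abandon_rate = sum(u.react_to("long signup form") == "likely to abandon"
                   for u in cohort) / len(cohort)
print(f"Simulated abandonment on a long signup form: {abandon_rate:.0%}")
```

Even this toy version makes the article's later point visible: the composite can only reproduce the distributions it was built from. Any behaviour not encoded in `SEGMENT_STATS` simply does not exist for it.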

Digital clones (or digital twins) go a step further – they attempt to replicate the behaviour of a specific individual or closely defined segment, drawing on real-world data about that person or group. The ambition is a faithful, dynamic replica rather than a generalised simulation. As a result, they come with heavier governance requirements, greater data privacy considerations, and significant limitations when scenarios move beyond the original training data.

Understanding this distinction matters. The appropriate use cases, the fidelity of outputs, and the ethical considerations are quite different for each.

The UX community’s view: cautious, not dismissive

The debate within our professional community has been lively and, at times, pointed. Nielsen Norman Group put it plainly: UX without real-user research isn’t UX. The ACM’s Interactions Journal, writing as recently as January 2026, acknowledged these tools offer genuine advantages in speed, cost efficiency, and rapid iteration – but flagged real limitations in replicating the emotional complexity of genuine human behaviour.

The concern isn’t theoretical. In comparative studies, synthetic users have shown a troubling tendency toward what researchers call “people-pleasing” – validating concepts without the critical friction that real participants bring. When asked about online course completion in one study, synthetic users claimed perfect completion rates. Real participants told far messier, more instructive stories about competing priorities and shifting circumstances. That gap – between the tidy answer and the lived truth – is often exactly where the most valuable design insight lives.

The Interaction Design Foundation’s analysis found that critical nuances vanished when synthetic approaches were used for concept testing. The outputs felt plausible. They just weren’t accurate.

The market research world tells a different story – it’s worth listening to

Here’s where intellectual honesty requires us to hold two things in tension.

The UX community’s scepticism is well-founded. And yet, synthetic approaches are delivering real, demonstrable value in the broader research ecosystem.

At the MRS Annual Conference early in 2026, Ipsos presented work using AI personas to simulate undecided voters during the 2024 general election – revealing strategic insights that traditional methods struggled to surface in the available timeframe. C Space and Sage have shared early findings from synthetic respondent trials built on real community data, designed to scale empathy rather than replace it. Industry data from the GRIT report shows that research teams using synthetic data report high satisfaction with results.

In well-documented categories – established user behaviours, known segments, validated UX heuristics – synthetic approaches are proving nearly as accurate as real-participant methods, and dramatically faster. For hard-to-reach audiences (specialist professionals, niche demographics, users in markets where recruitment is genuinely difficult), they offer a meaningful practical advantage.

A 2025 state-of-synthetic-research analysis noted that two distinct tracks are emerging: behavioural simulation for UI/UX and product testing, and conversational AI personas for qualitative insight and message testing. They serve different purposes and require different evaluation criteria.

Where synthetic approaches genuinely add value in UX research

At User Vision, we see a clear and legitimate role for synthetic methods as pre-research and augmentation tools – not as replacements for human-centred research.

  • Hypothesis generation. Before committing to primary research, synthetic users can help teams explore a problem space quickly, surface assumptions worth testing, and generate questions they hadn’t thought to ask.
  • Proto-persona creation. Rapidly assembling initial profile frameworks for workshops, which are then validated and refined through real-world research, saves time without compromising rigour.
  • Pilot testing. Stress-testing interview guides, survey flows, and task instructions against a synthetic persona before deploying to real participants catches structural problems early.
  • Established heuristic validation. For well-understood usability principles – clarity, cognitive load, navigation logic – synthetic feedback can provide a useful early signal.

These are genuinely valuable applications. The speed and cost benefits are real. And for organisations under resource pressure, even imperfect early insight is better than none – provided it’s recognised as a starting point, not a conclusion.

Where the limits are – and why they matter most to us

The challenge for a research-first consultancy like User Vision is that the areas where synthetic approaches fall short are precisely the areas where our work delivers the most value.

Real users bring their whole lives into a research session. They are immune to nothing – not the bad day, not the poor signal, not the child asking for attention in the background, not the cognitive load of three other tasks running in parallel. These are not inconveniences around the edges of user behaviour; they are often the substance of it. They reveal the actual barriers to a successful user journey that no controlled or simulated environment can anticipate.

A participant who abandons a task mid-flow because they were distracted, stressed, or simply couldn’t read the screen in bright sunlight is telling you something essential about real-world use. A synthetic user, operating in a frictionless simulation of human context, will follow the most logical path to completion every time – and in doing so, miss precisely the insight that matters.

ACM’s analysis captured it precisely: synthetic users, constrained by training data, tend to follow the most logical or common paths. The valuable “surprises” – the interactions that reveal the most profound design flaws and opportunities – are exactly what they miss.

They also cannot uncover what doesn’t yet exist in the data. Novel behaviours, emerging needs, the cultural and contextual shifts that change how people relate to a product or service – these require real observation of real people.

And there is a specific risk we’d flag for any organisation working at scale: findings from synthetic research are hypotheses. They are not validated insights. Organisations that treat them as the latter – especially in low UX-maturity environments where synthetic tools might seem like a welcome shortcut – risk compounding errors rather than catching them. It’s worth noting that even well-intentioned research with real participants can mislead teams when planning, moderation, or analysis fall short – something we explored in depth in Why Your Usability Tests Are Failing. The standard for reliable insight is high whether you’re working with real users or synthetic ones.

Our position: informed use, not uncritical adoption or reflexive rejection

We’re not sceptics for scepticism’s sake. We follow the evidence, and the evidence is nuanced.

Synthetic users are a legitimate and increasingly capable tool in the research arsenal. For rapid ideation, early-stage exploration, screening questions, and pre-research orientation, they can genuinely accelerate the path to insight. The market research sector’s experience – evidenced in MRS case studies and industry data – demonstrates that in appropriate contexts, they work.

But they are not user research. They are informed speculation at scale. The distinction matters enormously when the decisions being made will affect real people.

What we hold to – what 26 years of working with real users on real products has reinforced – is that the unexpected is where the value is. The thing the participant says that stops the room. The task that takes four minutes longer than anyone anticipated, and the reason why. The behaviour that contradicts every assumption in the brief.

Those moments don’t come from a model. They come from people.

Our recommendation to clients, and our own working practice: use synthetic methods where they’re genuinely useful. Understand what they are and what they aren’t. And hold the line on the research that only real human interaction can deliver.

User Vision has been conducting human-centred research since 2000. If you’re thinking through how AI research tools fit into your research strategy, we’re happy to share our thinking.
