AI will brainstorm your next re-org
Matt Chmiel
Head of Discourse

This article is part of The Intelligently Artificial Issue, which combines two big stories in consumer tech: AI and CES.

From the editor: Before the launch of the Intelligently Artificial Issue, we invited Peter Smart, the global CXO of Fantasy, to give a demo of a new AI-powered audience research tool the company calls Synthetic Humans. This article is a distillation of the discourse from that event.

Digital product design does not happen in a vacuum. Designers, product owners, marketing teams, and business stakeholders all have extensive conversations with customers before, during, and after designs are shipped. This process is time-consuming and expensive, and it feeds a thriving user research industry; consumer brands pay a premium for access to real people from target audience segments to record reactions and develop concepts. The vendors and design teams then plot that feedback into thousands of slide deck pages across the land. The testers get paid, the vendor gets paid, the design staff gets approval, and the designs ultimately ship.

Here’s the thing about all of this testing: what if it’s fake? What if real people are the problem?

Real people are too human to be reliable. They lie, they cut corners, and their attention wanes. They’re in it for the money, which obscures their true opinions as they are not invested in the experience. They resist change with red-hot passion before they embrace and ultimately celebrate it. They are not useful testers.

The proliferation of user research as a design process is responsible for standardized and conventional design practices online. It is hard to produce a differentiated design when we try to meet people where they say they are.

Put bluntly, real people are a waste of time and money.

Can AI fix this?

Fantasy believes that the solution to this human problem of qualitative testing is to use AI to develop a new, scalable audience research ecosystem built on synthetic humans.

A synthetic human is a digital representation of a person, built using an LLM that converts a massive amount of real survey data into a realistic persona. Think of it as a digital shell of a human, cobbled together from thousands of psychographic data points.

Prompting a synthetic human should give you a realistic response. It follows that if you train a synthetic human to deliver feedback and reactions to developing ideas, you should get actionable audience data. These modern-day AI-generated avatars are much more powerful than a chatbot because they generate and sustain their own memories.

We are not talking about Alexa or Siri here. A synthetic human initialized with a preliminary dataset (age, demographics, location, income, job, and so on) can determine, without any other prompt, that “she” has two daughters, aged 5 and 3. These daughters have names and go to a certain school. Their teachers have names, and each daughter has a favorite subject or cuddle toy.

If you don’t interact with this synthetic human for six months and then prompt “her” again, these daughters would still be in “her” mind, as would the teachers and the school. In the intervening time, the children might have celebrated a birthday, or entered the next grade, all aspects that get folded into the profile and leveraged for realistic responses. As a result, “her” opinions about your developing ideas can feel more reliable.
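Fantasy has not published how Synthetic Humans works under the hood, but the mechanics described above (a seed profile, self-generated biographical details, and memories that persist between sessions) can be sketched in a few lines of Python. The sketch below is purely illustrative: the SyntheticHuman class, its field names, and the call_llm placeholder are our own assumptions, not Fantasy's API.

```python
import json
import time
from dataclasses import dataclass, field


def call_llm(prompt: str) -> str:
    """Placeholder: wire this up to whichever language model you actually use."""
    raise NotImplementedError


@dataclass
class SyntheticHuman:
    # Seed attributes of the kind described above; this exact schema is an assumption.
    seed: dict
    # Generated biography ("two daughters, aged 5 and 3") persisted between sessions.
    memories: list = field(default_factory=list)
    last_prompted: float = field(default_factory=time.time)

    def elaborate(self) -> None:
        """Ask the model to invent consistent personal details from the seed."""
        facts = call_llm(
            "Invent a consistent personal biography for this person as a JSON "
            f"list of short facts: {json.dumps(self.seed)}"
        )
        self.memories.extend(json.loads(facts))  # assumes the model returns valid JSON

    def prompt(self, question: str) -> str:
        """Answer in character, folding in the time elapsed since the last session."""
        months_idle = (time.time() - self.last_prompted) / (30 * 24 * 3600)
        self.last_prompted = time.time()
        return call_llm(
            f"You are this person: {json.dumps(self.seed)}\n"
            f"Established facts about your life: {json.dumps(self.memories)}\n"
            f"Roughly {months_idle:.0f} months have passed since we last spoke; "
            "update any time-sensitive details (birthdays, school years) accordingly.\n"
            f"Question: {question}"
        )


# Usage: seed a persona, let it elaborate, then ask for feedback on a concept.
jane = SyntheticHuman(seed={"age": 34, "location": "Leeds", "job": "nurse"})
# jane.elaborate()
# print(jane.prompt("What do you think of a grocery app that plans your week's meals?"))
```

In a real system the memories would live in a database rather than in process memory, but the point is the same: it is the profile, not a person, that gets richer over time.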

Organizations can train these humans to react to developing concepts, or to brainstorm new concepts outright. They can also leverage the avatars' generative memory capabilities to overcome embedded workflow obstacles, like stubborn stakeholders.

Let’s say an organization knows that “Bob” in audience development has a reputation for capricious feedback that often causes a production bottleneck. The organization can train a synthetic human to brainstorm ways to overcome Bob’s reputation.

Here’s another example. Imagine prompting two contradictory synthetic humans (one is aggressive and the other is conservative) to collectively brainstorm an idea over the weekend so that you can arrive on Monday to a fresh batch of thinking. These two personalities are not just coming up with ideas; they are reacting to each other’s ideas, giving feedback, rejecting suggestions, and building on top of promising sparks.
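Mechanically, that weekend brainstorm is two personas taking turns against a shared transcript. The loop below is a hypothetical sketch of that pattern, with the same call_llm placeholder standing in for the model; the persona descriptions and topic are invented for illustration and do not reflect Fantasy's implementation.

```python
def call_llm(prompt: str) -> str:
    """Placeholder: wire this up to whichever language model you actually use."""
    raise NotImplementedError


def brainstorm(topic: str, rounds: int = 6) -> list[str]:
    """Let an aggressive and a conservative persona riff on a topic, reacting to each other."""
    personas = {
        "Aggressive": "You push for bold, risky ideas and challenge timid suggestions.",
        "Conservative": "You favor proven, low-risk ideas and flag practical problems.",
    }
    transcript: list[str] = []
    for turn in range(rounds):
        name = list(personas)[turn % 2]  # alternate speakers each turn
        reply = call_llm(
            f"{personas[name]}\n"
            f"Topic: {topic}\n"
            "Conversation so far:\n" + "\n".join(transcript) + "\n"
            "React to the latest ideas, reject weak ones, and build on promising ones."
        )
        transcript.append(f"{name}: {reply}")
    return transcript


# Kick it off on Friday; read the transcript on Monday.
# for line in brainstorm("ideas for a loyalty program relaunch"):
#     print(line)
```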

What's the catch?

There is always a catch. And at ON_Discourse, we lean into the questions that hide underneath the inspiring claims of innovative technology. There is no denying the potential of synthetic humans. The concept is a direct response to the biggest issues that plague the audience research industry today. Synthetic humans can stay focused, can offer candid feedback, and can be scaled to deliver deeper insights at a lower cost. These are good things. But there are gaps in the capabilities of these tools. Our virtual discourse on November 30 unpacked some of those gaps, and with them the limitations of synthetic humans for audience research.

Synthetic humans cannot predict the future. They are locked in the snow globe of their initial configuration. Their generated memories cannot incorporate the development of novel technology or cultural revolutions. As a result, we should not expect this kind of tool to unlock perspectives on genuinely new developments. This is notable, given that we are living in an era of rapid, unpredictable change. What humans think about specific disruptions will have to come from other sources.

Synthetic humans do not access deeply human emotional states. They do not grieve. They do not get irate. They do not get horny or goofy, and they do not long for something that is just out of reach. These powerful emotions provide the source material for some of our most inspiring technical and creative accomplishments. Our guests pressed on this point with real-world examples of powerful emotional moments. There are limits to what we can expect an avatar to create; we cannot prompt a bot to dig deeper. Synthetic humans are calibrated to maintain a level set of emotions.

The issues we explored regarding synthetic humans speak more to the role of audience research than to the capabilities of this tool. The collated test results that are plotted on slide decks represent an unintentional hand-off of creative thinking to the masses. Forward-thinking organizations are going to recognize the value of synthetic research for solving the achievable problems they face in design and product development. And they will leave the big thinking to the people who still run their business with their heads, their hearts, and their real human teams.