You

Are Wrong,

Saneel

EXPLORE ISSUE #006

Catch up on the debate

Do not expect the next tech innovation to come out of a garage

The next internet is for the platforms

AI Will Reverse the Innovator's Dilemma Market leaders are the secret winners to AI disruption

Saneel Radia

04•01•24

You Are Wrong, Saneel AI will democratize innovation which means we will see it come from smaller companies and creators

Overheard at ON_Discourse

05•06•24

REACTION

Overheard at ON_Discourse

Editor’s note: We published Saneel’s post about innovation in early April and recently received a message that directly challenges his thesis. At ON_Discourse, we live for this kind of dialogue. It doesn’t hurt when the post has a substantive Jeff Goldblum reference.

This post was written by a human and narrated by an AI-generated voice. (powered by Wondercraft.ai).

0:00 / 0:00

Saneel, your argument is too broad. On one hand, I agree that the platforms have more resources, data, and capital to create new kinds of experiences in the next internet, but that does not mean all innovation will be coming from them. As Jeff Goldblum said, life finds a way. In this regard, the human need to create, to push, to thwart, to resist, to stand out, to communicate, and to connect will drive new experiences without the extra push that comes from the platforms.

I have seen it in my own field of luxury goods: AI tools are opening new capacities and capabilities to smaller and smaller entities. As a result of this, more and more independent creators are competing with conventional design teams. AI is scaling their inherent taste into studio-quality products.

I don’t know what it will look like; I can’t predict, and I don’t think you can either. All I know is that human creativity is getting amplified by this technology. The innovation might not come from a garage, but I do not expect it to come from the board room either.

Let’s see…

In 2025,

Media Needs

to Take Community

Back From

The Platforms

The biggest content creators can create the most valuable online communities

Editor’s note: Our own head of content and platform has a strong take about the future of media in 2025. We think his perspective has credibility;  he’s pioneered social media publishing at Reuters, edited at the Wall Street Journal; ran one of the most innovative media startups in the aughts, and his team won an Emmy for digital work at The Daily Show. His take for the future sounds like a blast from the past. We think he has a point.

This post was written by human Anthony DeRosa and narrated by AI Anthony DeRosa (powered by Wondercraft.ai).

0:00 / 0:00

The way we communicate and share our passions has drastically evolved in the past two decades. Social media platforms, operated by large tech companies, have become the central hubs for discussions on a myriad of topics, from technology to automobiles.

It shouldn’t have been this way. Media companies should own their audiences. They’ve allowed tech companies to steal their content and monetize it by providing a platform for readers to discuss it. How absurd is that? The only value add that tech companies have layered on top of the content paid for by media organizations is to shove it through a system optimized for engagement, which usually means the most polarizing and hate-read-worthy stuff gets the most eyeballs.

Now, they’re letting AI companies get away with this. AI companies, like OpenAI and Anthropic, are hoovering up massive amounts of media content, paid for by media companies, that AI platforms can then reformat into an information system to provide a new experience for the same readers who would have gone to the source for that information.

Let’s be clear, this is entirely the fault of media companies, who should have been thinking like tech companies all along and leveraging AI for their benefit instead of allowing OpenAI to take their content and raise billions off of it. Where are the tech R&D labs inside media companies? Why are they constantly 10 steps behind? Whether the reason is snobbery, hubris, or a combination of both, they failed to see the path of their salvation right in front of them. Instead, they allowed others to take their business and grow it exponentially, and without a dime to show for it.

While it seems they’re about to get mugged by OpenAI, another opportunity for media companies has emerged. Today, social media is in decay. Media companies could use this opening to take their communities back. Many social media users are now longing for a return to a more specialized, intimate form of online community. There’s a desire to return to the days when conversations about specific interests were held on dedicated media websites, where affinity and expertise, rather than algorithms, drove the discussions.

Anthony DeRosa

Anthony DeRosa

EXPLORE ISSUE #006

Who should run online communities?

Platforms, media, or maybe something else? We capture a debate...

In 2025, Media Needs to Take Back Community From The Platforms The biggest content creators can create the most valuable online communities

Anthony DeRosa

05•02•24

Media Can't Handle Community Your argument sounds right but doesn't hold up

Overheard at ON_Discourse

05•03•24

The Lost Art of Specialized Forums

There was a time when media websites were not just destinations for content, but also thriving communities where enthusiasts and experts gathered. Websites were not just sources of news and reviews but also vibrant forums for discussion and exchange. These platforms offered a sense of belonging and a shared space for individuals passionate about specific subjects. The conversations were rich, informed, and focused, providing value that oftentimes far exceeded the content of the articles themselves.

One of the best examples of this was Kinja, the publishing system built into the Gawker network. Kinja elevated comments to posts. You could find that some comments that originated under articles became the spark that led to even more discourse than the original article itself. Nick Denton, Gawker’s founder, was smart enough to realize this and built Kinja in such a way that elevated comments to the level of articles.

The quality of comments was so strong that many of Gawker’s best writers—Hamilton Nolan, Ryan Tate, Gabriel Delahaye, and Richard Lawson among them—were plucked from the comments section to become staff writers. The debate and commentary that ensued there became a farm system for some of the internet’s most interesting writers in the 2000s.

The success of early media communities stretched into the newsroom. An engaged community feeds the editorial machine with contextual, relevant perspectives guaranteed to maintain valuable audience engagement. Not only is it a good feature, it’s a good business model.

The rise of social media changed the landscape. Platforms like Facebook, Twitter, and Reddit made it easier than ever to find and participate in conversations on any topic imaginable. The barrier to entry was low, and the reach was vast. However, this accessibility came at a cost. Discussions became broader and less focused. The intimate community feeling of dedicated forums was lost, and the quality of conversations often suffered. Moreover, the algorithms that dictate what we see on social media can limit exposure to new ideas and diverse opinions, creating echo chambers.

The intimate community feeling of dedicated forums was lost, and the quality of conversations often suffered. Moreover, the algorithms that dictate what we see on social media can limit exposure to new ideas and diverse opinions, creating echo chambers.

The Challenge of Managing Community

Moderating a dedicated space to ensure discussions remain respectful, informative, and on-topic is a considerable challenge. As online discourse became increasingly polarized, moderation grew more complex and resource-intensive. Media companies faced the difficult balance of fostering free speech while preventing harassment, misinformation, and toxic behavior. For many, the risks and costs associated with maintaining these standards became too great.

Around 2015, media websites began to retreat from commenting. Media companies tend to copy each other’s strategies, if one decides comment sections are no longer useful, like lemmings they all fall into place and join the trend. In the last decade since that retreat, innovative approaches, such as integrating forums more closely with content, leveraging advanced moderation technologies, and exploring alternative revenue models, offer hope for the future. These efforts aim to recapture the sense of community and depth of discussion that specialized forums once provided, adapting them to the realities of today’s internet landscape.

Despite these challenges, there remains a significant appetite for specialized spaces among many internet users. These individuals seek out places they can dive deep into their interests with like-minded peers, away from the noise and distractions of broader social media. Recognizing this, some media companies and independent platforms are exploring new models to revive the spirit of these communities in a way that aligns with the current digital ecosystem.

Furthermore, these communities offer a level of moderation and curation often missing from sprawling social media discussions. They can provide a safer, more respectful space for exchange, free from the trolls and misinformation that plague many social networks.

Your Audience is Your Business

The longing for a return to media website forums is not just about nostalgia; it’s about recognizing the value that these communities add. When conversations are tied to media sites focused on specific subjects, the discussions are enriched by the content. They are informed by the latest articles, reviews, and analyses, creating a cycle of engagement that benefits both the readers and the websites. This environment fosters a deeper connection between users, who are drawn together by shared interests and expertise.

The challenge is for media companies to recognize the untapped potential of their online communities. Investing in these spaces, encouraging engagement, and facilitating conversations can add significant value. This goes beyond simply having a comment section under articles; it’s about creating integrated forums, hosting Q&A sessions with experts, and actively participating in discussions. Media companies have the unique advantage of being able to offer authoritative content that can anchor and stimulate conversation, something that generic social media platforms cannot replicate.

The desire to shift back to specialized forums on media websites is a call for a more meaningful online community experience. It’s an acknowledgment that while social media has its place, there is immense value in gathering spaces that are dedicated, focused, and enriched by shared interests and expertise. For those passionate about technology, cars, or any other subject, the hope is that media companies will rise to the occasion, revitalizing their community engagement efforts. In doing so, they can rekindle the sense of belonging and purpose that once defined the online discussions of enthusiasts and experts alike.

Media

Can't

Handle

Community

Your argument sounds right but doesn’t hold up

Overheard at ON_Discourse

Overheard at
ON_Discourse

Editor’s note: Anthony had a long chat with a prominent media figure who has dabbled in a lot of noteworthy online community experiences. This reaction piece reflects some of the hidden costs of pursuing a large scale community strategy. Does this make Anthony’s take wrong?

This post was written by a human and narrated by an AI-generated voice. (powered by Wondercraft.ai).

0:00 / 0:00

Legacy media outlets, with their grand history and established presence, are too stuck to foster genuine community engagement. These organizations carry the dual burdens of size and tradition, which can restrict the flexibility needed to adapt to the rapidly changing digital landscape. The culture of old media is too strict to open itself to real communities.

One of the core struggles within these institutions is the rigidity of editorial norms that do not necessarily align with the conversational, nuanced content that modern audiences gravitate towards. In contrast, platforms like podcasts allow for a meandering exploration of topics without the pressure to reach definitive conclusions. This format caters to a substantial appetite for extended discourse that is not just informative but also engaging on a personal level.

Moreover, there is often trepidation within these traditional media houses to fully embrace new forms of interaction and community-building. The fear of diluting the brand’s voice or alienating segments of an established audience can lead to conservative content strategies that ultimately inhibit genuine engagement. They don’t want to be seen as unknowing; they always want to be right. More than that, they think they have to be right in order to be relevant.

The fear of diluting the brand’s voice or alienating segments of an established audience can lead to conservative content strategies that ultimately inhibit genuine engagement.

Yet, it’s this very engagement that is crucial for the survival and growth of media institutions in the digital era. Community isn’t just about bringing people together under a common brand; it’s about fostering an environment where dialogue, interaction, and personal connection thrive. This challenge is magnified in legacy media by the need to balance respect for traditional journalistic values with the demands for more dynamic, interactive content formats.

The rise of individual content creators and smaller, more agile media entities showcases a stark contrast. These creators are not bound by the same constraints and can therefore pivot quickly, experiment more freely with content, and build intimate communities around niche topics. Their success underscores the need for larger media companies to innovate in community engagement without sacrificing the editorial integrity that has defined them.

Building community in this context requires a reevaluation of what community means in the digital age. It demands an openness to evolving how stories are told, engaging with audiences on their terms, and creating spaces for meaningful interaction.

As legacy media navigates these turbulent waters, the path forward involves a delicate balancing act: integrating new media dynamics while staying true to the core values that have sustained them through the ages. Only by doing so can they hope to not only preserve but invigorate their place in the digital world, turning their ocean liners into agile fleets ready to meet the future.

Overheard at ON_Discourse
search icon

Can LLMs be optimized like search results?

From the editor: Marketers are starting to test different LLMs with branded prompts, reminding us of the early days of SEO. We convened two experts in brands and AI to sort through their conflicting perspectives on this analogy. As always, ON_Discourse operates under the Chatham House rule—no attribution of perspectives without explicit consent.

search icon

Can LLMs be optimized like search results?

From the editor: Marketers are starting to test different LLMs with branded prompts, reminding us of the early days of SEO. We convened two experts in brands and AI to sort through their conflicting perspectives on this analogy. As always, ON_Discourse operates under the Chatham House rule—no attribution of perspectives without explicit consent.

search icon

Can LLMs be optimized like search results?

From the editor: Marketers are starting to test different LLMs with branded prompts, reminding us of the early days of SEO. We convened two experts in brands and AI to sort through their conflicting perspectives on this analogy. As always, ON_Discourse operates under the Chatham House rule—no attribution of perspectives without explicit consent.

Kind of, yes

Kind of, yes

Sometimes, language models act like certain brands always lead in their industries.

“One of the things that people ask me a bunch is whether there are specific ways they should be thinking about optimizing what they do as marketers so that the LLMs have a better sense of their brand or they become the default brand when the LLM or diffusion model imagines a car or a beer … And my answer is that I don’t think it’s anything fundamentally different.

I think basically what you do when you’re doing brand building—building distinct brand assets over time—is probably the best thing that you can do to build up the LLMs’ understanding of the brand."

Sometimes, language models act like certain brands always lead in their industries.

“One of the things that people ask me a bunch is whether there are specific ways they should be thinking about optimizing what they do as marketers so that the LLMs have a better sense of their brand or they become the default brand when the LLM or diffusion model imagines a car or a beer … And my answer is that I don’t think it’s anything fundamentally different.

I think basically what you do when you’re doing brand building—building distinct brand assets over time—is probably the best thing that you can do to build up the LLMs’ understanding of the brand."

No
way

No
way

But...

We should not be confusing LLM and SEO. They are different topics and functions.

“I think if you are trying to make your brand what an LLM responds with when asked for a default beer, then you’re fundamentally misunderstanding what this technology is, and how to leverage it – it’s likely your reference point is 'how do I get the top result in search'.”

Firstly, the LLM is not imagining a car or beer can like we might. It’s developing something based on the prompt that is entered by a user. This matters a lot. It’s then going to modify that depending on what extra knowledge it receives in further commands.

But...

We should not be confusing LLM and SEO. They are different topics and functions.

“I think if you are trying to make your brand what an LLM responds with when asked for a default beer, then you’re fundamentally misunderstanding what this technology is, and how to leverage it – it’s likely your reference point is 'how do I get the top result in search'.”

Firstly, the LLM is not imagining a car or beer can like we might. It’s developing something based on the prompt that is entered by a user. This matters a lot. It’s then going to modify that depending on what extra knowledge it receives in further commands.

Yeah, but…

Sustained marketing strategy can determine what the LLM does and that is important.

“I mostly agree, but on the flip side, there are some brands for whom this is true (they’re the default brand that the LLM 'thinks' of), and the real point is that those brands that have achieved that status did so through many years of good marketing and branding practices. These are the same brands we think of when we think of sneakers and soda: the LLM is just reflecting back this perceptual reality."

Yeah, but…

Sustained marketing strategy can determine what the LLM does and that is important.

“I mostly agree, but on the flip side, there are some brands for whom this is true (they’re the default brand that the LLM 'thinks' of), and the real point is that those brands that have achieved that status did so through many years of good marketing and branding practices. These are the same brands we think of when we think of sneakers and soda: the LLM is just reflecting back this perceptual reality."

I still don’t agree…

This is not the time to be thinking about what an LLM outputs. There are higher priority issues for marketers.

Honestly, this kind of thing should not even be on the radar of a marketer right now – they should be focused on understanding how LLMs can be used as reasoning engines, how knowledge can be leveraged and stacked, how their workflows and lives can be made easier, and marketing made more effective.”

A brand shouldn’t be worrying about being the car that the LLM 'imagines' because it’s essentially creating a mishmash of a combination of everything it has learned about what a car is when it returns the output. It’s the detailed input that goes into prompts that creates the direction – e.g. futuristic car, Ford car, etc. If a brand wants to be mentioned by an LLM, then it needs to be worried about what is inputted to drive the output.

I still don’t agree…

This is not the time to be thinking about what an LLM outputs. There are higher priority issues for marketers.

Honestly, this kind of thing should not even be on the radar of a marketer right now – they should be focused on understanding how LLMs can be used as reasoning engines, how knowledge can be leveraged and stacked, how their workflows and lives can be made easier, and marketing made more effective.”

A brand shouldn’t be worrying about being the car that the LLM 'imagines' because it’s essentially creating a mishmash of a combination of everything it has learned about what a car is when it returns the output. It’s the detailed input that goes into prompts that creates the direction – e.g. futuristic car, Ford car, etc. If a brand wants to be mentioned by an LLM, then it needs to be worried about what is inputted to drive the output.

So in conclusion...

Google has profited immensely from the marketing world’s obsession with its black-box search algorithm. This obsession unlocked a deeper technical understanding of how online search works so brands can carve out opportunities to differentiate. It is therefore logical that markers begin to explore how LLMs respond to prompts. There is an undeniable symmetry in these two technical processes.

One of the interesting takeaways from this discussion is the inherent strength legacy brands will have in this upcoming AI era. They already possess the collateral that initially fed the data set behind the model. It makes us wonder how new brands should think about this upcoming era. Is there anything they can do?

This article is part of The Intelligently Artificial Issue, which combines two big stories in consumer tech: AI and CES.

Read more from the issue:

USER EXPERIENCE

Augmented Intelligence: from UX to HX

Will prompting replace browsing?

The car is the gateway drug to a voice-first acceleration

The prompt interface needs a redesign

RE-ORG

AI will brainstorm your next reorg

Expect fewer managers and direct-reports

AI is too immature for your business

AI is not a new revolution

BRAND

Should we ignore the hardware?

Can AI help consumers love your brand?

Your brand doesn't have enough data for AI

Can LLMs be optimized like search results?

Good brands will integrate more friction into their CX

AI is too

immature

for your
business

From the editor: One of our most credentialed AI experts shared some essential perspectives for companies betting on AI. This speaker has a PhD in Brain and Cognitive Sciences, and has been working on machine learning models for over 15 years. As always, ON_Discourse operates under the Chatham House Rule—no attribution of perspectives without explicit consent.

AI is an

adolescent

This tech revolution is built on bad behavior.

AI behaves like a teenager. It is moody, unreliable, and unpredictable. It needs constant supervision.

Are you sure you want this kind of technology driving business decisions?

Without any guardrails in place, AI models trained on text scraped from the internet swear at you, call you names, go off on sexist and racist rants, and so on. These are very teenager things to do: saying something just to say something, without realizing what it actually means, how bad it is, or what the repercussions in society might be.

Companies form so-called red teams to perform adversarial testing on their AI systems. These tests try to make an AI model do all the worst stuff possible, so that the companies can then prevent it from doing that stuff when it’s talking to their users.

Many people don’t realize that a lot of the work around AI currently involves babysitting models so that the end user doesn’t realize that the tech needs to be babysat. In short, companies are hiding how incredibly immature AI still is.

Keep this in mind before, during, and after you deploy any sort of AI tech across your organization.

This article is part of The Intelligently Artificial Issue, which combines two big stories in consumer tech: AI and CES.

Read more from the issue:

USER EXPERIENCE

Augmented Intelligence: from UX to HX

Will prompting replace browsing?

The car is the gateway drug to a voice-first acceleration

The prompt interface needs a redesign

RE-ORG

AI will brainstorm your next reorg

Expect fewer managers and direct-reports

AI is too immature for your business

AI is not a new revolution

BRAND

Should we ignore the hardware?

Can AI help consumers love your brand?

Your brand doesn't have enough data for AI

Can LLMs be optimized like search results?

Good brands will integrate more friction into their CX

Expect

fewer

managers

and

‏‏‎ ‎direct‏‏‎
‏‏ reports‏‏‎ ‎‎

From the editor: This overheard passage comes from a virtual event that featured a productive disagreement between members. In this piece you’ll see two opposing perspectives about the well-worn assumption that AI will replace labor. As always, ON_Discourse operates under the Chatham House rule—no attribution of perspectives without explicit consent.

‏‏‎ ‎Yes,‏‏‎ and here's why

AI could accelerate the flattening of organizations

MBA programs didn’t prepare anyone for this. The accelerating adoption of AI tools could completely redefine how businesses organize. The first dominos to fall will be the HR and management layers. At first glance this seems like a bad thing, but it’s not…

Growing
without
headcount

Today’s model is fundamentally stupid—I don’t mean that pejoratively. Managers are incentivized to grow their teams regardless of need. Why? Having more direct reports leads to more promotions and more raises for managers. This sustains a conventional growth model that leads to overhiring and layoffs.

AI tools will disrupt the need for companies to grow their workforces by injecting intelligence and data into this dumb growth model. The knee-jerk reaction to such a shift typically sounds like, “If we don’t grow our teams, how are we ever going to get promoted and have a career path?”

This thinking is rooted in the assumption that a hierarchy and a climbing mentality are required for growth. AI tools will reveal that this is not the case, and adding people will not be seen as a requirement for revenue growth.

Instead, revenue growth could be achieved while keeping headcount flat, or even through reductions. Many non-revenue-generating roles, like in HR and middle management, will start to disappear. Active executors that have a more direct relationship with the bottom line of the organization will replace them.

Focusing
on what
matters

People managers who remain will have more time to focus on their hands-on efforts. These individuals were good enough at doing something to get promoted. Their employers will develop metrics and goals around those skills, unlocking an entrepreneurial spirit.

Instead of adding more people to manage, people managers will become more focused on outcomes. AI tools could help incentivize individual contributors based on what uniquely motivates them and the impact they can make.

On the HR side, plenty of functions could be handled by AI, possibly with less bias, more clarity, and more transparency. The processes could be opened up and deployed more equally across teams.

More layers and hierarchy only distract from revenue-generating functions and complicate decision-making.

Better
outcomes

This has important knock-on effects. Non-revenue-generating roles pull businesses further away from their customers and the outcomes they want. More layers and hierarchy only distract from revenue-generating functions and complicate decision-making, whereas fewer non-essential roles could mean more focus on translating user needs to products and services.

Some argue that AI can just help bring all the data needed for decisions to the top of the organization, but that’s backward. Why bother having such a structure? Instead, flatten your organization and empower more people to make decisions closer to the customer.

AI tools will disrupt the need for companies to grow their workforces by injecting intelligence and data into this dumb growth model. The knee-jerk reaction to such a shift typically sounds like, “If we don’t grow our teams, how are we ever going to get promoted and have a career path?”

This article is part of The Intelligently Artificial Issue, which combines two big stories in consumer tech: AI and CES.

Read more from the issue:

USER EXPERIENCE

Augmented Intelligence: from UX to HX

Will prompting replace browsing?

The car is the gateway drug to a voice-first acceleration

The prompt interface needs a redesign

RE-ORG

AI will brainstorm your next reorg

Expect fewer managers and direct-reports

AI is too immature for your business

AI is not a new revolution

BRAND

Should we ignore the hardware?

Can AI help consumers love your brand?

Your brand doesn't have enough data for AI

Can LLMs be optimized like search results?

Good brands will integrate more friction into their CX

I totally ‏‏‎ ‎disagree‏‏‎

AI will need many more managers

I agree with
you on two
important
points

I agree with you on
two important points

  1. No conventional MBA graduates in management are prepared for the AI era.
  2. The proliferation of AI products and services will change management structures.

The rest, you have backwards. The number of managers is going to grow significantly in the near future. We are going to need more people to manage the adolescent that is AI.

Adding AI products and services to the toolbox creates a new output signal that businesses need to wrangle. This in turn calls for another human layer to figure out how the AI tools are performing and how they are changing the way we ingest new information.

We are going to need more people to manage the adolescent that is AI.

AI tools could create
a lot more noise
for decision makers

Because we have more signals coming in, we will ultimately make better decisions. Those additional signals, however, require more data science teams, more linguists, more operational people, more people managing how the models are performing, more people monitoring any human bias that is going into those models, and so on.

Industry expertise will remain vital. Companies will demand subject matter experts who decipher the truth within AI-generated content.

There are limitations to what AI can produce. Only humans can figure out that last critical bit. Non-technical roles, filled by creative and culturally-aware individuals who bridge technical jargon and diverse global perspectives, will grow in importance.

In fact, we might see high demand for managers who can attract the right mix of these unique, multidisciplinary groups of people, and know how to get them to work together productively and efficiently. AI simply doesn’t have the people skills to manage divergent perspectives and disciplines into a cohesive group.

More technology will once again require more humans to manage it.

Can AI

help
consumers

love
your
brand?

From the editor: Ahead of our CES trip, we held a spirited discussion about the future of brands in the AI era. One of our most provocative discussions centered on the idea that AI functionality can unlock deeper relationships with brands. Is this true? Or are post-purchase activation schemes a rehash of old ideas?

Laura Del Greco
Founder and CEO, MUSAY

Yes, here's how...

Invest in differentiated activation experiences - with AI’s help.

Think about the brands you trust and love—the ones you rely on to do a job better than any of their competitors. If they’re smart, they’re always looking for ways to grow their relationship with you by becoming more meaningful. The stronger their relationship with you becomes, the greater their margin and pricing power.

True brand power is a high CLTV to CAC ratio and customers who say, to misquote a line from Brokeback Mountain, “I can’t quit you.”   

AI WON’T change this reality because we’re human and we want the decision shortcuts brands give us.  

AI WILL exacerbate the need for brand relationship building and give brand marketers a powerful tool with which to do it.

AI WILL ALSO muddy the waters for brands fighting for differentiation, and a share of a consumers’ heart and wallet. There is a way around this.

Brands live on a continuum from branded commodities (oatmilk, sugar, etc.) to “n of 1” companies (Instagram, Carta, Slack, Apple, etc.). While AI will impact all of these brands, from their products to how they show up in the world, it’s the brands in the middle for which AI presents the biggest threat… and the biggest opportunity for innovation and relationship building.

The muddy AI water: generative AI for brand differentiation isn’t “all that.”

So, how can brands differentiate in the AI era and how can AI help?

To answer this question, forget about technology and AI for a moment.

Human nature doesn’t change and marketers know what makes consumers tick. They also know their brand’s personality, how it lives in the world and the moments, outside of purchase, where a consumer might find the brand relevant or useful. This is basic brand activation, but done more thoughtfully so a brand can continue to endear itself to the consumer. (And if you’re thinking about activations as “personalized rewards” post-purchase, that’s cool… but that isn’t really this – or it isn’t all of this!)

Brand activation, especially the post-purchase “hours” of 8:00 PM to 12:00 AM on a brand strategy clock, help a brand break free from the sameness of AI-generated digital mind melds and put the consumer at the center to grow the relationship. Using brand activations, marketers can wrest control of AI and use it to their advantage via interactions that unlock contextual customer data and insights, which help marketers ideate ways for a brand to authentically show up in the world and be relevant in ways that extend beyond product use.

Strategic post-purchase brand activation builds a flywheel of engagement, a more efficient allocation of human and financial capital from advertising to brand engagement and, ultimately, to an increase in CLTV/CAC.

In brief, a brand team’s hard-won epiphany of how its brand is differentiated is communicated to the consumer in ways that reinforce past purchase behavior, incentivize repeat purchase behavior, and ideally make the brand part of someone’s life and perhaps their identity.

If you listen to Spotify, you were recently treated to their Wrapped experience. If you fly Delta, you just received a visual summary of all the ways Delta helped you in 2023—your most visited cities, flights, and upgrades. They also reminded you how they could help you in 2024. These weren’t fancy, but they were useful, relevant, and designed to meet the consumer, (me, in this case), where they are beyond the purchase moment.

These examples are only two of the countless ways a brand can authentically have a meaningful activation moment; the best ones will grow their relationship with the consumer. This is where AI insights can help. Traditional and social media channels can do this, but they can be expensive and aren’t designed to deliver the nuanced, relationship building, context that is possible here.

If you know me, you know I’m an early adopter who is always looking for ways to use technology to meet the consumer where they are. I look at AI as a tool to do just that. That said, there is no single “right” way to move forward, but move forward we must. After all, DVDs and CDs used to be billion dollar businesses until new technology came to town.

Even if you’re not 100% sold on the concept of AI-informed post-purchase activations and experiences, I urge you to entertain the possibility. It’s just good business.

Your opportunity

By its very definition, a brand must be differentiated enough from its competitors to outsell its competitors. 

To get the desired output of dynamic, differentiated positioning, you need dynamic differentiated input AND an ability to create connections between disparate elements. This is really hard to do—it’s a never-ending battle. There are conference rooms strewn with takeout containers and overfilled trash cans from weary product teams trying to win the war. (Coke vs. Pepsi, anyone?)

Unfortunately, differentiation isn’t enough. Today’s consumer faces a cacophony of SEO-driven messaging and data overload that threatens to water down the positioning born from years of brand research, insights, and takeout containers.

The instinct to use generative AI as a positioning tool shortcut makes sense and yet, it’s risky. Large Language Models (LLMs) are still relatively new and they’re currently processing and converting structured, unstructured, and dark data from everywhere without nuance or a critical eye. In other words, generative AI positioning is a regression to the mean and without a human’s watchful eye can be a melting pot of average.

This article is part of The Intelligently Artificial Issue, which combines two big stories in consumer tech: AI and CES.

Read more from the issue:

USER EXPERIENCE

Augmented Intelligence: from UX to HX

Will prompting replace browsing?

The car is the gateway drug to a voice-first acceleration

The prompt interface needs a redesign

RE-ORG

AI will brainstorm your next reorg

Expect fewer managers and direct-reports

AI is too immature for your business

AI is not a new revolution

BRAND

Should we ignore the hardware?

Can AI help consumers love your brand?

Your brand doesn't have enough data for AI

Can LLMs be optimized like search results?

Good brands will integrate more friction into their CX

Not really

There's no research to confirm customers want relationships with brands.

The first thing I have to say before we get into any debate about the future of AI is that no one knows what will happen. Anyone that gives you a more confident answer to questions is definitely full of shit.

Regarding your perspective: we partially agree on an important point:   

Overheard at
ON_Discourse

Anonymous under
Chatham House Rules

My agreement with you comes with a caveat: if we reduced a brand to the values in a manifesto or the colors of a logo, then the averaging out of brands in generative AI experiences would be a problem.

But… that is not how I believe brands work. I believe Byron Sharp’s theory about how brands grow. A brand emerges over time, after the audience embeds its signals in their collective memories. A generative AI model is a reflection of what is already out in the air. I’ve been experimenting with different models and am seeing that many models seem to distinguish between strong brands and vague ones.

We kind of agree that customer-facing generative experiences are homogenizing brand elements.

You place a lot of emphasis on pre and post-purchase activation. I take another angle. Differentiation will come from good creative marketing, which is less common than it should be. Strong brands have been living on the dividends that were paid thirty years ago. Now is the time for creativity. As this technology propagates, more middling brands will elevate their creative output into a generic median – good enough output. Human creativity, augmented by AI capabilities, will further distinguish strong brands from that median.

We also kind of agree that brands need to find a way to distinguish themselves from each other.

You are describing a mass personalization experience across most CPG brands. I’ll just state it like this: there is no research to suggest that consumers want a relationship with brands. The buyers of Crest of Colgate are not interested in joining a social network dedicated to whiter teeth. They buy their respective brands based on the same theories that have driven marketing for generations.

Finally, I do not agree with your point about post-purchase opportunities.

Laura Del Greco
Founder and CEO, MUSAY

Yes, they do!

Now is the time for brands to invest in differentiated activation experiences - with AI's help.

Noah, you’re right.  There are post- purchase activations that are irrelevant, annoying, time consuming, and invasive; no one wants those!  Not having insight into how the survey was conducted, my gut is that when people were asked about brand follow-up they had the annoying kind in mind.  

To quote Scott Galloway from “No Mercy / No Malice”, “brand is Latin for irrational margin”, and the best ones drive that margin by being omnipresent and meaningful, especially post purchase. It’s a way to reinforce behavior, stay top of mind, and reduce the need to advertise in conventional ways. Brands live everywhere they need to and maintain their margin via post-purchase activation and foundational brand building.  
 
Many of the people who said they didn’t want post-purchase brand engagement probably shop at an Apple store, (itself a form of post-purchase engagement), use the post-purchase activation that is the Genius Bar, and tune into Apple events. In other words, post-purchase activations can be covert and feel seamless, which is probably the best way.
 
For a covert activation combined with foundational brand building, look at the Tide. Tide’s environmental research efforts,  (e.g. cold water washing for the environment), their WWF partnership, product innovation in stain removal, and their“Loads of Hope” community program are all authentic Tide moments and reinforce what people think of Tide, and, what they think of themselves.. They’re done to keep the brand top of mind and psychologically reward purchase.  Tide appeals to people’s hearts (we help you save the planet, help others, and keep your, and your family’s, clothes clean).  

Tide’s post-purchase activations are subtle and the “distribution channels” are primarily PR and co-branding, (awards such as ”Better Homes and Gardens”) targeted towards their core consumer.  Because I’m hard to reach via advertising, and a Tide loyalist, my interaction with the brand was via a Malcom Gladwell podcast about laundry! (highly recommend)
 
Think too about Lululemon – in-store classes, free hemming, repairs ... consumers WANT this!  They just don’t think about these as post-purchase activations, so they would say “no” if asked. And if you’re a baker, King Arthur has done a spectacular job of creating a premium flour brand with their cookbooks and digital resources. (I’m a fan of this brand too!)
 
The level of activation has to be commensurate with the meaning a brand has in someone’s life and the job it does.  It’s even better if it’s seamlessly woven into someone’s life in a relevant way.  
 
For overt activation, think of any automotive brand. The brand is always in touch with you via their app, their service and their special edition mugs (that maybe someone buys!). They also rely on their sales force to help you stay wedded to their brand. Also, for overt but “in service to the consumer,” think Apple, Nike, Nespresso, Disney, Delta, Amex, etc.  
 
Can AI help any of the brand teams behind these and other brands gather insights to continue to elevate? Of course!
 
P.S. A bit closer to home I just noticed that Piano.io -  Analytics & Activation is building out a third space in the Flatiron district of NYC. I have no idea if AI was used to deliver consumer analytics that inspired that brand activation experience, but it could have!

True brand power is a high CLTV to CAC ratio and customers who say, to misquote a line from Brokeback Mountain, “I can’t quit you.”   

The

prompt interface

needs a

redesign

No, it doesn't

We need “smarter” humans, not design

Henrik Werdelin
Co-founder of BARK (BarkBox) and Founding Partner of Prehype, a venture development firm

Everybody hates chatbots. And yet, the fastest growing consumer-facing technology in human history is basically a chatbot. Is there a problem here? Is this a good moment for a design intervention?

Let me start with a fundamental question: why do people hate chatbots? Is it the design, or the experience?

Historically, conventional customer service chatbots were insanely annoying and, critically for users, fundamentally dumb. They locked users in a recursive loop of unanswerable questions that yielded no results; the only escape from this torture was to force a connection with a human.

In this moment, the human represents context and the ability to reason, while the bot represents a brick wall. So let me answer the first question: the reason people hate chatbots is the experience. The design is fine.

What happens when the conventional chatbot interface is replaced by a different backend technology? Generative AI is doing this right now by breathing artificial cognition into the brick wall, giving it the ability to convert context into reason, turning the whole interface on its head. No more need for any human intervention.

Layering generative AI into chatbots introduces two problems that a redesign cannot solve.

The first problem: humans are biased against chatbots.

They see the interface and automatically expect a conventionally dumb experience. This also happens when you see a website with a web 1.0 design. As a result, they enter into the experience with a lower level of engagement that affects the quality of their prompts. As the saying goes: garbage in, garbage out.

This article is part of The Intelligently Artificial Issue, which combines two big stories in consumer tech: AI and CES.

Read more from the issue:

USER EXPERIENCE

Augmented Intelligence: from UX to HX

Will prompting replace browsing?

The car is the gateway drug to a voice-first acceleration

The prompt interface needs a redesign

RE-ORG

AI will brainstorm your next reorg

Expect fewer managers and direct-reports

AI is too immature for your business

AI is not a new revolution

BRAND

Should we ignore the hardware?

Can AI help consumers love your brand?

Your brand doesn't have enough data for AI

Can LLMs be optimized like search results?

Good brands will integrate more friction into their CX

The second problem: humans are mostly lazy.

They imagine that a generative chatbot is going to magically read their mind and solve humanistic conscientious tasks with minimal effort. This perspective influences the quality of their prompts and the scrutiny they give to the responses. This looks a little different than the first bias: generic prompts generate generic responses that are accepted without a second thought because most don’t realize how much better the output can be. 

I have seen user testing experiments that confirm the second problem. Two user groups were asked to brainstorm creative ideas for an imaginary product. One group was given an AI chatbot while the other was given paper and pencils. The AI group produced the least interesting ideas and devoted the least amount of thinking to the assignment (until they were taught how to work optimally with the chatbot).

So let me sum up my argument: The AI chatbot interface does not need a redesign. It needs humans to catch up to its capacity. This involves something bigger than UX design.

Humans need to become better at conversation. This is going to take time, but it won’t be the first time.

I have been working on the internet since the 1990s. I spent time demonstrating internet browsers to executives who were still asking their assistants to print out webpages for them to read. Those executives were uncomfortable by the technology and simply needed more time to evolve. Going back further in the technology timeline, the keyboard and mouse triggered the same response. Sometimes, the design is not the problem and humans just need time to adapt.

Yes, it does

Design can make chat a better experience

Michael Treff
CEO, Code & Theory Co-Founder, ON_Discourse

It is hard for me to disagree with an argument that is so well-structured. But Henrik, this is ON_Discourse and we have to live by our creed. I think your approach to this topic is overlooking the impact of design and where we are in the adoption curve of AI semantic prompt-interfaces like chatbots we see in ChatGPT or text bars like we see in Google Search.

Before we get there, let me start by recognizing your strongest point:

The mouse and keyboard are an excellent analogy for generative AI chatbots. The analogy works because it acknowledges the transformational context that follows these interfaces. To put it bluntly, there was no precedent for either the mouse or generative AI at the release of either of those products. In both cases, it is natural to accept an elongated adoption curve. Sometimes, the burden just falls on people to figure it out.

But I am concerned by the focus of your argument. At the end of the day, the novelty of this tech does not relinquish the role and value of good design. Do you remember the AOL portal? For a limited time in the 1990’s, this was the predominant way most Americans accessed the internet. It had a design language so ubiquitous that it nearly defined the early-internet. Then new designs and platforms emerged for new behaviors that slowly disintegrated this design system.

When I think about the evolution of semantic chat experiences in the AI era, I see the same massive opportunity that you see, but I think of it in a different way. I’ll break it down in a few ways:

Good interface design focuses on behaviors.

The interfaces that win online are those that more accurately deliver an existing behavior in a better way. Tik-Tok pioneered a new interface for video creation and consumption without instruction manuals. Not only did this interface drive historic audience growth, it also fed an explosion of new UI paradigms for non-content experiences (from news feeds to credit card applications). Good design can accelerate the learning curve without being explicit.

At the end of the day, the novelty of this tech does not relinquish the role and value of good design.

The prompt-era is just beginning.

The use-cases for this interface are still in the proverbial laboratory. This is not the same context of the keyboard or mouse which was one of the essential catalysts of the personal computer revolution. It unlocked new ways of thinking about screen design, leading to the GUI, which ultimately brings us to our own digital discourse. AI is different. It is already propagating in the back-end operations of forward-facing companies. They are using this technology to drive internal creative sessions, augment audience research, and develop deeper customer profiles.

The jury is still out on the prompt.

I have no doubt that the prompt will become an essential interface on the internet, but I question whether it will become the dominant interface. The internet has evolved a number of optimized user experiences that are not served by an open-ended prompt experience. Let me put it this way; I’m not convinced that we are solving consumer problems by replacing context clues on a page with an open-ended prompt experience. In this vein, the customer is burdened with prompting in the right way to get the desired outcomes. I don’t feel like that is an improvement to what we have now (though I understand the impulse to consider it).

The technology behind the prompt is undoubtedly amazing and transformational. I agree that it promises to elevate our collective ability to use conversation to better understand our own needs and to find the information we need. Nevertheless, we shouldn’t assume the current interaction models are the future models. Additionally, when we ultimately deliver this tech, as with everything else that goes in front of consumers, it deserves a thoughtful, considered design.

AI will

brainstorm

your next

re-org

Matt Chmiel
Head of Discourse

This article is part of The Intelligently Artificial Issue, which combines two big stories in consumer tech: AI and CES.

Read more from the issue:

USER EXPERIENCE

Augmented Intelligence: from UX to HX

Will prompting replace browsing?

The car is the gateway drug to a voice-first acceleration

The prompt interface needs a redesign

RE-ORG

AI will brainstorm your next reorg

Expect fewer managers and direct-reports

AI is too immature for your business

AI is not a new revolution

BRAND

Should we ignore the hardware?

Can AI help consumers love your brand?

Your brand doesn't have enough data for AI

Can LLMs be optimized like search results?

Good brands will integrate more friction into their CX

From the editor: Before the launch of the Intelligently Artificial Issue, we invited Peter Smart, the global CXO of Fantasy, to give a demo of a new AI-powered audience research tool the company calls Synthetic Humans. This article is a distillation of the discourse from that event.

Digital product design does not happen in a vacuum. Designers, product owners, marketing teams, and business stakeholders all have extensive conversations with customers before, during, and after designs are ultimately shipped. This process is timely and expensive and it feeds a thriving user research industry; consumer brands pay a premium for access to real people from target audience segments to record reactions and develop concepts. The vendors and design teams then plot that feedback into thousands of slide deck pages across the land. The testers get paid, the vendor gets paid, the design staff gets approval, and the designs ultimately ship.

Here’s the thing about all of this testing: what if it’s fake? What if real people are the problem?

Real people are too human to be reliable. They lie, they cut corners, and their attention wanes. They’re in it for the money, which obscures their true opinions as they are not invested in the experience. They resist change with red-hot passion before they embrace and ultimately celebrate it. They are not useful testers.

The proliferation of user research as a design process is responsible for standardized and conventional design practices online. It is hard to produce a differentiated design when we try to meet people where they say they are.

Put bluntly, real people are a waste of time and money.

Can AI fix this?

Fantasy believes that the solution to this human problem of qualitative testing is to use AI to develop a new, scalable audience research ecosystem built on synthetic humans.

A synthetic human is a digital representation of a human being, built using an LLM that converts a massive amount of real survey data into a realistic representation of a human being. Think of it as a digital shell of a human cobbled together using thousands of psychographics data points.

Prompting a synthetic human should give you a realistic response. As a result, if you train a synthetic human to deliver feedback and reactions to developing ideas, you should get actionable audience data. These modern-day AI-generated avatars are much more powerful than a chatbot because they generate and sustain their own memories.

We are not talking about Alexa or Siri here. A synthetic human initiated with a preliminary dataset (age, demographics, location, income, job, and so on) can determine, without any other prompt, that “she” has two daughters, aged 5 and 3. These daughters have names and go to a certain school. Their teachers have names and each daughter has a favorite subject or cuddle toy.

If you don’t interact with this synthetic human for six months and then prompt “her” again, these daughters would still be in “her” mind, as would the teachers and the school. In the intervening time, the children might have celebrated a birthday, or entered the next grade, all aspects that get folded into the profile and leveraged for realistic responses. As a result, “her” opinions about your developing ideas can feel more reliable.

Organizations can train these humans to react to developing concepts, or brainstorm new concepts outright. They can also leverage their generative memory capabilities to help organizations overcome embedded workflow obstacles, like stubborn stakeholders.

Let’s say an organization knows that “Bob” in audience development has a reputation for capricious feedback that often causes a production bottleneck. The organization can train a synthetic human to brainstorm ways to overcome Bob’s reputation.

Here’s another example. Imagine prompting two contradictory synthetic humans (one is aggressive and the other is conservative) to collectively brainstorm an idea over the weekend so that you can arrive on Monday to a fresh batch of thinking. These two personalities are not just coming up with ideas; they are reacting to each other’s ideas, giving feedback, rejecting suggestions, and building on top of promising sparks.

What's the catch?

There is always a catch. And at ON_Discourse, we lean into the questions that hide underneath the inspiring claims of innovative technology. There is no denying the potential of synthetic humans. It is a direct response to the biggest issues that plague the audience research industry today. Synthetic humans can stay focused, can offer candid feedback, and can be scaled to deliver deeper insights at a lower cost. These are good things. But there are gaps in the capabilities of these tools. Our virtual discourse on November 30 unpacked some of them and thus the limitations of synthetic humans for audience research.

Synthetic humans cannot predict the future. They are locked in the snow globe of their initial configuration. Their generated memories cannot incorporate the development of novel technology or cultural revolutions. As a result, we should not expect this kind of tool to unlock perspectives for new developments. This is notable, given that we are living in an era of rapid, unpredictable change. What humans think about specific disruptions will have to come from other sources.

Synthetic humans do not access deeply human emotional states. They do not grieve. They do not get irate. They do not get horny or goofy, and they do not long after something that is just out of reach. These powerful emotions provide the source material for some of our most inspiring technical and creative accomplishments. Our guests provoked this concept with real-world examples of powerful emotional moments. There are limits to what we can expect an avatar to create – we cannot prompt a bot to dig deeper. Synthetic humans are calibrated to maintain a level set of emotions.

The issues we explored regarding synthetic humans speak more to the role of audience research than to the capabilities of this tool. The collated test results that are plotted on slide decks represent an unintentional hand-off of creative thinking to the masses. Forward thinking organizations are going to recognize the value of synthetic research for solving the achievable problems they face in design and product development. And they will leave the big thinking to the people that still run their business with their head, heart, and with their real human teams.

Are businesses

even asking the

right AI questions?

Dan Gardner

Dan Gardner

Co-Founder and Exec Chair, Code and Theory Co-Founder, ON_Discourse

Dan Gardner

Dan Gardner

Co-Founder and Exec Chair,
Code and Theory Co-Founder, ON_Discourse

This article is part of The Intelligently Artificial Issue, which combines two big stories in consumer tech: AI and CES.

Read more from the issue:

USER EXPERIENCE

Augmented Intelligence: from UX to HX

Will prompting replace browsing?

The car is the gateway drug to a voice-first acceleration

The prompt interface needs a redesign

RE-ORG

AI will brainstorm your next reorg

Expect fewer managers and direct-reports

AI is too immature for your business

AI is not a new revolution

BRAND

Should we ignore the hardware?

Can AI help consumers love your brand?

Your brand doesn't have enough data for AI

Can LLMs be optimized like search results?

Good brands will integrate more friction into their CX

The AI landscape is changing quickly. So quickly that as soon as we think we’re starting to understand its power, there seems to be another giant leap forward. It doesn’t help that we are surrounded by fake AI experts who claim to have all the answers.

This article is part of The Intelligently Artificial Issue, which combines two big stories in consumer tech: AI and CES.

Read more from the issue:

For real customer insights, ask fake people

Should we ignore the hardware?

Truthfully, given all the unknowns, there are more questions than answers today. Each question has a cascading impact on other questions and seems to only bring a series of new questions. The answers right now are only guesses, predictions at best based on hopefully thoughtful reasoning, meant to provide a productive discourse that drives perspectives and decision-making. Any verifiable answers will only reveal themselves tomorrow.

From the editor: The takes below are based on a projection toward the end of the 2020s and are intentionally opinionated and incomplete. We aim to go deeper on each question in future content and at future events, including via ON_CES.

01

What is the future of

SaaS and BI tools?

Today, businesses use siloed tech with a questionable data strategy.

Tomorrow, the SaaS ecosystem will consolidate into very few companies.

Companies of all sizes currently rely on a fragmented ecosystem of technologies. BI tools are siloed not due to the tech itself, but due to a lack of data strategy and organizational structure. People own different elements of the chain (separate teams own social, the web, and so on). This has given discreet software companies the opportunity to solve specific use cases.

The global SaaS market is projected to grow from $273.55 billion in 2023 to $908.21 billion by 2030. In the past decade, marketing software alone has grown from 150 tools in 2011 to 11,038 tools in 2023. There is a high likelihood that the ecosystem could consolidate, eliminating the opportunity and the role of discrete software systems.  

One of the core advantages of AI is its unique ability to assess, simplify, and make sense of large amounts of information. By making sense of data, AI can communicate more easily and effectively given the simplified semantic interfaces, a previous pain point for large consolidated systems.

Since the advent of ChatGPT, we’ve seen a gold rush of companies using LLMs to change the way we process, communicate, and execute decisions. This technology is evolving so quickly that we could see it swallow the companies being built upon it. This should lead to fewer, but better, SaaS and BI tools, bringing up other questions about how companies differentiate themselves and the role of humans.

02

How do we make

AI safe for businesses

to use?

Today, businesses are worried about protecting their IP and potential copyright lawsuits, moral issues, and reputational problems.

Tomorrow, businesses will rely on new legal and procedural precedents to use AI tools liberally.

There is legal precedent, there is enforcement, and there is public opinion.

Firstly, legal precedent (including regulation and policy) will naturally unfold and become clearer. One can make some guesses and predictions here, but ultimately, for competition to thrive, the laws will have to allow for some liberal use of the technologies. This can and will be argued, but it’s a fool’s game to think you can regulate away technological progress.

Secondly, as legal precedents are set, challenges will surface around identification and consequences for any wrongdoing. These will need to be policed at some level and possibly even enforced by code. Could smart contracts on the blockchain protect and enforce IP rights? Additionally, industry collectives like the RIAA, which in the early 2000s protected music IP, may form to make examples of companies and individuals breaching legal boundaries. Alternatively, AI tool use could become so common that courts won’t even consider lawsuits regarding copyright, trademarks, and reputation (although this is hard to imagine).

Lastly, public opinion could shape business use cases. Is there a risk of bias? Does a brand face risks when it represents something inaccurately or inappropriately using AI that it didn’t control? It’s hard enough to manage scaled public relations in a world where an executive is one tweet away from being fired or losing trillions in market value. We are entering a world in which ephemeral content is generated in seconds. Brands may struggle to keep up. Organizations will need to manage AI health just like they do cybersecurity.

03

How do we make

AI safe for people?

Today, businesses own people’s personal data.

Tomorrow, businesses will offer identity protection technology.

Regular audits of public companies not only verify the accuracy and legality of their financial records but also assess whether an organization has adequate controls and processes in place to mitigate potential liabilities. Future regulations may require audits on AI tools to make sure companies are operating legally, including via people controls and by reviewing code.

The government or businesses may also offer trust systems that let people authenticate their identities to interact as themselves with public and private sector services.

Alternatively, people may try to circumvent any potential censorship or gating by masking their identities.

04

Is your

competitive landscape

being

disrupted?

Today, businesses try to understand their direct competitors.

Tomorrow, businesses will be able to analyze their indirect competitors and existential threats.

AI-powered analysis tools could soon ingest data about every market globally and make connections between businesses, causing entire industries to disappear. Businesses could leverage AI to be predictive and anticipatory, uncovering opportunities that disrupt categories they never considered entering. Super apps that can do everything are a real threat.

Alternatively, categorical disruption may not happen because businesses will train AI tools on proprietary data to maintain their competitive advantage. The role of a brand will thus still matter because companies that are more focused on designing a specific user experience will be able to continue to differentiate themselves by providing unique value, while non-focused competitors won’t.

05

What is the future of

customer experience landscapes?

Today, businesses rely on traditional personalized targeting, information architecture, and rigid user flows.

Tomorrow, businesses will rely on anticipatory semantic and potential ephemeral experiences to target behaviors.

Is the interface that consists of a simple prompt text box with a response field here to stay? Or, will interfaces across various devices foresee what humans want, need, and desire, showing only the information that is relevant to users, when and where it’s required? Instead of you prompting the interface, maybe the interface will prompt you.

AI models may be able to help companies better understand their customers, viewing each one as a whole person and generating for them a more efficient, frictionless, and possibly ephemeral, experience in real time. This could include entertainment, marketing generated to pique interest, and interface elements like CTAs and drop-downs built in real time.

Alternatively, humans may not want generated content or experiences and will opt for more directed, but still personalized, user experiences.

06

What is the future of

branding?

Today, businesses build brand identities based on static logos and brand books.

Tomorrow, businesses will build brands whose attributes can be generated on the fly based on an identity that more closely resembles a human.

Brands in the future will have to act in real time. AI will be on the front lines across touchpoints, communicating dynamically with customers. The way a brand communicates at those digital moments will largely represent the whole brand.

It’s also conceivable that ad units will evolve into more dynamic product placements or other unique constructs as the world pivots to where audiences are engaging. The interactions will seemingly be more human and therefore branding will need to be more lifelike, built into a framework that can make its own decisions.

Conversely, brands could lose meaning because every business can just mirror back to people what they want, in real time.

07

Does

authenticity matter?

Today, a brand brings authenticity.

Tomorrow, ownership will be more important than authenticity.

Companies are grappling with when and how to be transparent about their use of AI tools. As AI becomes the baseline, companies that own everything they do will stand out. They may be able to build authenticity by unapologetically using AI to offer their customers what they need and want.

The key attribute will be ownership. Whether a piece of content is a deepfake won’t matter if you own the rights to use a given name, image, or likeness. How you create the content won’t be controversial. Ownership will be a core defining factor of both uniqueness and differentiation for a brand.

One could argue that a brand will be even more valuable in the future as a lot of the market consolidates and aggregators become makers. That said, ownership and uniqueness may become harder to achieve, and unique rights might become more expensive.

08

Should a business

invest quickly or slowly

in AI?

Today, businesses either invest too slowly and leave themselves open to disruption or too quickly and spend wasted capital.

Tomorrow, the landscape will be defined by where the opportunity and need for investment is.

Businesses recognize the importance of AI but often overspend due to a fear of falling behind. Despite the influx of ever-improving tools, there’s a noticeable redundancy in these so-called “innovative” ideas, hinting at future industry consolidation. On the flip side, inaction poses its own dangers, potentially leading to business stagnation and loss of a competitive edge.

Businesses should do two things immediately: build and invest in a data strategy and create a culture of safe experimentation with AI tools. The barrier to entry is surprisingly low.

09

What will

define creativity

in the

future?

Today, businesses define creativity as a skill.

Tomorrow, businesses will define creativity only in the way humans think, not what humans can do.

The advent of generative AI could fundamentally transform how we value skills, pivoting from the traditional execution of skills to an emphasis on analytical and creative thinking. The necessity for manual skills in drawing, writing, or designing may diminish, as generative AI democratizes these abilities.

Furthermore, the reliance on human intuition for decision-making could shift toward AI-driven insights and analyses, processed rapidly and impartially. This change might render prompt engineers obsolete, as AI chatbots take on the burden of generating complex prompts.

This shift could lead to a scenario where digital content and experiences are ephemeral, created spontaneously by AI, potentially reducing the significance of human creativity in producing “things." Creativity might then be seen as a common resource, easily replicable and valued less.

Conversely, the future of business creativity might lie in the ability to innovate without relying on data or existing knowledge. Human distinctiveness could emerge through the generation of novel ideas, fueled by uniquely human emotions, such as passion and envy. In this scenario, genuine creativity could become rarer, yet significantly more treasured.

Final
Thoughts

AI is going to shuffle the deck of what companies do and provide, changing the competitive landscape and ultimately the workforce. Some changes will be obvious, like new forms of creative workforces, but others won’t be, like new types of departments, roles, and C-level officers around ethics, identity, and customer safety.

Are businesses

even asking the

right AI questions?

Dan Gardner

Dan Gardner

Co-Founder and Exec Chair, Code and Theory
Co-Founder, ON_Discourse

This article is part of The Intelligently Artificial Issue, which combines two big stories in consumer tech: AI and CES.

Read more from the issue:

For real customer insights, ask fake people

Should we ignore the hardware?

The AI landscape is changing quickly. So quickly that as soon as we think we’re starting to understand its power, there seems to be another giant leap forward. It doesn’t help that we are surrounded by fake AI experts who claim to have all the answers.

Truthfully, given all the unknowns, there are more questions than answers today. Each question has a cascading impact on other questions and seems to only bring a series of new questions. The answers right now are only guesses, predictions at best based on hopefully thoughtful reasoning, meant to provide a productive discourse that drives perspectives and decision-making. Any verifiable answers will only reveal themselves tomorrow.

From the editor: The takes below are based on a projection toward the end of the 2020s and are intentionally opinionated and incomplete. We aim to go deeper on each question in future content and at future events, including via ON_CES.

01

What is the future of

SaaS and BI

tools?

Today, businesses use siloed
tech with a questionable data
strategy.

Tomorrow, the SaaS
ecosystem will consolidate
into very few companies.

Companies of all sizes currently rely on a fragmented ecosystem of technologies. BI tools are siloed not due to the tech itself, but due to a lack of data strategy and organizational structure. People own different elements of the chain (separate teams own social, the web, and so on). This has given discreet software companies the opportunity to solve specific use cases. 

The global SaaS market is projected to grow from $273.55 billion in 2023 to $908.21 billion by 2030. In the past decade, marketing software alone has grown from 150 tools in 2011 to 11,038 tools in 2023. There is a high likelihood that the ecosystem could consolidate, eliminating the opportunity and the role of discrete software systems.

One of the core advantages of AI is its unique ability to assess, simplify, and make sense of large amounts of information. By making sense of data, AI can communicate more easily and effectively given the simplified semantic interfaces, a previous pain point for large consolidated systems.

Since the advent of ChatGPT, we’ve seen a gold rush of companies using LLMs to change the way we process, communicate, and execute decisions. This technology is evolving so quickly that we could see it swallow the companies being built upon it. This should lead to fewer, but better, SaaS and BI tools, bringing up other questions about how companies differentiate themselves and the role of humans.

02

How do we make

AI safe for

businesses

to use?

Today, businesses are worried
about protecting their IP and
potential copyright lawsuits,
moral issues, and reputational
problems.

Tomorrow, businesses will
rely on new legal and procedural precedents to use
AI tools liberally.

There is legal precedent, there is enforcement, and there is public opinion.

Firstly, legal precedent (including regulation and policy) will naturally unfold and become clearer. One can make some guesses and predictions here, but ultimately, for competition to thrive, the laws will have to allow for some liberal use of the technologies. This can and will be argued, but it’s a fool’s game to think you can regulate away technological progress.

Secondly, as legal precedents are set, challenges will surface around identification and consequences for any wrongdoing. These will need to be policed at some level and possibly even enforced by code. Could smart contracts on the blockchain protect and enforce IP rights? Additionally, industry collectives like the RIAA, which in the early 2000s protected music IP, may form to make examples of companies and individuals breaching legal boundaries. Alternatively, AI tool use could become so common that courts won’t even consider lawsuits regarding copyright, trademarks, and reputation (although this is hard to imagine).

Lastly, public opinion could shape business use cases. Is there a risk of bias? Does a brand face risks when it represents something inaccurately or inappropriately using AI that it didn’t control? It’s hard enough to manage scaled public relations in a world where an executive is one tweet away from being fired or losing trillions in market value. We are entering a world in which ephemeral content is generated in seconds. Brands may struggle to keep up. Organizations will need to manage AI health just like they do cybersecurity.

03

How do we make

AI safe for

people?

Today, businesses own
people’s personal data.

Tomorrow, businesses will offer
identity protection
technology.

Regular audits of public companies not only verify the accuracy and legality of their financial records but also assess whether an organization has adequate controls and processes in place to mitigate potential liabilities. Future regulations may require audits on AI tools to make sure companies are operating legally, including via people controls and by reviewing code.

The government or businesses may also offer trust systems that let people authenticate their identities to interact as themselves with public and private sector services.

Alternatively, people may try to circumvent any potential censorship or gating by masking their identities.

04

Is your

competitive

landscape

being

disrupted?

Today, businesses try to
understand their direct
competitors.

Tomorrow, businesses will be
able to analyze their indirect
competitors and existential
threats.

AI-powered analysis tools could soon ingest data about every market globally and make connections between businesses, causing entire industries to disappear. Businesses could leverage AI to be predictive and anticipatory, uncovering opportunities that disrupt categories they never considered entering. Super apps that can do everything are a real threat.

Alternatively, categorical disruption may not happen because businesses will train AI tools on proprietary data to maintain their competitive advantage. The role of a brand will thus still matter because companies that are more focused on designing a specific user experience will be able to continue to differentiate themselves by providing unique value, while non-focused competitors won’t.

05

What is the future of

customer

experience

interfaces?

Today, businesses rely on traditional personalized targeting, information architecture, and rigid user flows.

Tomorrow, businesses will rely on anticipatory semantic and potential ephemeral experiences to target behaviors.

Is the interface that consists of a simple prompt text box with a response field here to stay? Or, will interfaces across various devices foresee what humans want, need, and desire, showing only the information that is relevant to users, when and where it’s required? Instead of you prompting the interface, maybe the interface will prompt you.

AI models may be able to help companies better understand their customers, viewing each one as a whole person and generating for them a more efficient, frictionless, and possibly ephemeral, experience in real time. This could include entertainment, marketing generated to pique interest, and interface elements like CTAs and drop-downs built in real time.

Alternatively, humans may not want generated content or experiences and will opt for more directed, but still personalized, user experiences.

06

What is the future of

branding?

Today, businesses build brand
identities based on static
logos and brand books.

Tomorrow, businesses will
build brands whose attributes
can be generated on the fly
based on an identity that more
closely resembles a human.

Brands in the future will have to act in real time. AI will be on the front lines across touchpoints, communicating dynamically with customers. The way a brand communicates at those digital moments will largely represent the whole brand.

It’s also conceivable that ad units will evolve into more dynamic product placements or other unique constructs as the world pivots to where audiences are engaging. The interactions will seemingly be more human and therefore branding will need to be more lifelike, built into a framework that can make its own decisions.

Conversely, brands could lose meaning because every business can just mirror back to people what they want, in real time.

07

Does

authenticity

matter?

Today, a brand brings
authenticity.

Tomorrow, ownership will be
more important than
authenticity.

Companies are grappling with when and how to be transparent about their use of AI tools. As AI becomes the baseline, companies that own everything they do will stand out. They may be able to build authenticity by unapologetically using AI to offer their customers what they need and want.

The key attribute will be ownership. Whether a piece of content is a deepfake won’t matter if you own the rights to use a given name, image, or likeness. How you create the content won’t be controversial. Ownership will be a core defining factor of both uniqueness and differentiation for a brand.

One could argue that a brand will be even more valuable in the future as a lot of the market consolidates and aggregators become makers. That said, ownership and uniqueness may become harder to achieve, and unique rights might become more expensive.

08

Should a business

invest quickly

or slowly

in AI?

Today, businesses either
invest too slowly and leave
themselves open to
disruption or too quickly and
spend wasted capital.

Tomorrow, the landscape will
be defined by where the
opportunity and need for
investment is.

Businesses recognize the importance of AI but often overspend due to a fear of falling behind. Despite the influx of ever-improving tools, there’s a noticeable redundancy in these so-called “innovative” ideas, hinting at future industry consolidation. On the flip side, inaction poses its own dangers, potentially leading to business stagnation and loss of a competitive edge.

Businesses should do two things immediately: build and invest in a data strategy and create a culture of safe experimentation with AI tools. The barrier to entry is surprisingly low.

09

What will

define

creativity

in the

future?

Today, businesses define
creativity as a skill.

Tomorrow, businesses will
define creativity only in the
way humans think,
not what humans can do.

The advent of generative AI could fundamentally transform how we value skills, pivoting from the traditional execution of skills to an emphasis on analytical and creative thinking. The necessity for manual skills in drawing, writing, or designing may diminish, as generative AI democratizes these abilities.

Furthermore, the reliance on human intuition for decision-making could shift toward AI-driven insights and analyses, processed rapidly and impartially. This change might render prompt engineers obsolete, as AI chatbots take on the burden of generating complex prompts.

This shift could lead to a scenario where digital content and experiences are ephemeral, created spontaneously by AI, potentially reducing the significance of human creativity in producing “things." Creativity might then be seen as a common resource, easily replicable and valued less.

Conversely, the future of business creativity might lie in the ability to innovate without relying on data or existing knowledge. Human distinctiveness could emerge through the generation of novel ideas, fueled by uniquely human emotions, such as passion and envy. In this scenario, genuine creativity could become rarer, yet significantly more treasured.

Final

Thoughts

AI is going to shuffle the deck of what companies do and provide, changing the competitive landscape and ultimately the workforce. Some changes will be obvious, like new forms of creative workforces, but others won’t be, like new types of departments, roles, and C-level officers around ethics, identity, and customer safety.

Learn About
ON_CES

Learn About
THE Issue