AI doesn't understand a damn thing.

Editor’s Note: A year ago, we talked about the rise of the so-called EQ internet, and now we are excited to share a perspective from a founder who is actually building the damn thing.

AI has a problem: it can sound intelligent, but it doesn’t understand a damn thing, not even with so-called “reasoning.” We’ve spent years marveling at the fluency of Large Language Models (LLMs), watching them churn out perfectly structured paragraphs, draft polite emails, and even mimic human warmth. But here’s the reality—LLMs are linguistic illusionists, dressing up statistical predictions as understanding. And that’s where the hype stops.

Max Weidemann

The Great AI Deception

People keep asking, "When will LLMs become empathetic?" The answer is simple: never. Not as long as we’re relying on black-box models trained to predict words, not meaning. LLMs don’t feel, they don’t reason (unless you think humans reason by telling themselves to think step-by-step, then meticulously go through every step in their thoughts for 30 seconds, and then form a response), and they certainly don’t care. They spit out responses based on probability, not insight. You can ask them for life advice, and they’ll generate something that sounds right—but without any real understanding of your situation, your motives, or your emotional state.

Let’s be clear: AI-generated empathy isn’t empathy. It’s a script. It’s a formula. And the moment you need real understanding, real nuance, real depth, these systems crumble. That’s because empathy isn’t about mirroring the right words—it’s about knowing why those words matter.


The Missing Layer: Real Emotional Intelligence

If we want AI to be truly useful in high-stakes environments—hiring, leadership, relationships, mental health—it needs more than linguistic gymnastics. It needs real emotional intelligence (EQ). That doesn’t mean programming in "compassionate" responses or tweaking outputs to sound more human. It means AI must be able to interpret personality, motivation, psychological states, and behavior over time.

LLMs can’t do that. They don’t track human patterns, they don’t learn from long-term interactions, and they certainly don’t recognize why someone might be saying one thing while meaning another. That’s what EQ-driven AI solves. Not by generating better generic responses, but by tailoring interactions to the individual—based on psychology, not word probability.

Why This Matters Everywhere

Without EQ, AI is useless in the places where human understanding actually matters. HR tech powered by LLMs? That’s just glorified keyword matching, completely missing whether a candidate fits a team’s culture or will thrive in a company’s work environment. AI-powered therapy chatbots? They can parrot self-help advice, but they can’t detect when someone is on the verge of burnout or spiraling into a depressive episode. AI in customer service? Sure, it can say "We understand your frustration," but it doesn’t understand anything.

The world doesn’t need more artificially polite chatbots. It needs AI that actually understands people—that can read between the lines, identify underlying motivations, and adapt dynamically. Otherwise, we’re just building fancier parrots that sound good but know nothing.

The Future: AI That Gets You

The next wave of AI isn’t about making LLMs sound more human—it’s about making AI think more human. That means moving beyond black-box predictions and into explainable, psychology-based models that process human emotions, intent, and long-term behavioral patterns. It means AI that doesn’t just summarize data but can tell you why someone is likely to succeed in a role, why a team is struggling, why a customer is about to churn.

We run events every week. If you want to participate, inquire about membership here. If you want to keep up with the perspectives that we hear, you can subscribe to our weekly newsletter.

Editor’s Note: We loved this perspective from Chris Perry, one of our most literary members. It frames the current technical disruptions with a critical historical context. In addition to writing great essays, Perry literally wrote the book on AI Agents. We can’t recommend it highly enough.

The sledgehammer fell in 1992. I was twenty-two, fresh out of college, and returning to my father's world. His graphic arts studio was a place I'd known growing up but never understood until I worked there with him.

Everything had changed by the time I arrived; we just hadn’t recognized it yet. The device of destruction wasn't physical. It was digital, invisible, and ruthless. The sledgehammer was Photoshop.

Until then, my Dad was a force in town. To understand him was to understand American business before it became “virtual.” He was a social animal with ambition unbound by caution or vision. His handshakes could hurt, his laughter echoed, and his stories stretched but never broke. He ran his studio as he carried himself. It was a hothouse where people who made things happen—and made things—were gods.

For two decades, they were at the center of the advertising world. In Detroit, art met industry more than anywhere else. His crew knew the essential in between—how to make a car shine on the page in ways it never quite did on the showroom floor.

His designers, drawers, typesetters, and camera techs were craft workers. It was fitting that they worked in a creative factory with a particular feel and scent that reflected the times. The chemicals could strip paint. The smell of paper emanated fresh from the cutter. The constant cigarette smoke hung like clouds around the ceiling lights. When the automation came, the air changed.

The hammer dropped, erasing everything we knew, including the feel. It happened one pixel at a time.

Chris Perry

Gradually, Then Suddenly

As the saying goes, Photoshop’s impact hit gradually, then suddenly. Steve Jobs introduced the Macintosh in 1984 with his famous Super Bowl commercial. It featured an actual sledgehammer thrown at conformity. Jobs didn't say then that his bicycle for the mind would become a wrecking ball for certain kinds of creative work—my Dad's era of creative work.

The Mac led to new software, most notably Photoshop. The effect wasn't immediate but gained momentum by the early 1990s. Jobs' beige computer boxes started showing up on more desktops in the workplace. Once new software was loaded into them, the creative rhythms changed. The click-click-click of mouse buttons began to drown out the scratch of pencils and the squeak of Pentel marker tips.

My Dad and his band (the present company included) didn't adapt fast or fundamentally enough. Revenue and margins shrank as clients took the work in-house. We doubled down on what we knew, only accelerating our demise. A new technology, and those who knew how to use it, dismantled the business in about 24 months.

We should have seen the hammer coming because it was already there. The lesson: Not reacting to a technological wave until it breaks over you is more than just a business failure—it can be an enduring, personal failure. I promised myself never to be caught so ill-prepared again.


Magical Automated Workflows

Photoshop turned 35 last week. If you were to imagine a single icon representing a creative transformation, it would be the "Magic Wand." One-click. That's all it took. A click and similar colors were selected as if by sorcery. Same with a click to alter the composition of an image. Ditto for envisioning variations of a scene.

What had once required careful technique and training was now available to anyone with access to the software and the patience to play and learn with it.

The magic wasn't just in what the wand could do but in what it represented. The transition from physical to digital craft offered efficiency.

Ironically, Photoshop—and the efficient workflows it led to—also came from family. Here is a bit of backstory on how it came to be.

Thomas and John Knoll grew up in a house that valued art and technology. They stood at the crossroads of two worlds, uniquely positioned to bridge them.

In 1987, Thomas Knoll wanted to display grayscale images on his Mac's black-and-white monitor. It was a practical problem with what seemed like a limited solution. His brother John, working at Industrial Light & Magic, saw further. He convinced Thomas to expand the program to handle color on the new Macintosh II. What began as a personal project caught the attention of the industry's power players.

A capability previously reserved for mainframes could now run on a PC. Adobe recognized the potential immediately, securing distribution rights in 1988 and releasing Photoshop 1.0 for Macintosh in 1990.

Photoshop didn't transform creative work in isolation. PageMaker, released by Aldus in 1985, opened the door to what would become known as desktop publishing. Photoshop and PageMaker encoded creative techniques and integrated workflows into software that creatives and producers could use directly. Those in the studio world who didn’t adapt to augmentation and changing workflows were permanently displaced.

The Magic Fades Without Imagination

Decades later, a much bigger automation wave is building. Intelligent software is reshaping all creative and knowledge work, echoing what happened in our studio.

Some automation parallels are striking. Both represent shifts from manual to digital creation and spark similar existential questions about the value of work and the identities of those who do it. With desktop publishing, page designers and typesetters rightly feared obsolescence. Today, anyone who produces knowledge, research, or creative work naturally expresses the same concerns about generative AI.

There are also critical differences between automation then and now.

Desktop publishing tools were extensions of human work. They replicated technical aspects but required direct human guidance for every decision. Generative AI tools can generate work with minimal direction, shifting human expertise from execution to curation.

Desktop tools required understanding design principles, but generative AI can produce seemingly decent output without the user knowing the underlying fundamentals.

Perhaps most significantly, the desktop publishing revolution unfolded over a decade while generative AI's capabilities are evolving at an incredible speed.

The importance of judgment, discernment, and taste in delivering commercial-grade work remains unchanged, whether we’re talking about Photoshop thirty years ago or OpenAI’s latest reasoning model today.

Consider Photoshop's meaning as a metaphor for the current moment. It automated specific, repeatable, known tasks and made technical processes faster and more accessible. However, it could not replicate the mystery of creativity or imagination, which no software has yet managed to do.

And therein lies a twist. Looking back, what reads like a family business failure doesn’t tell the whole story.

After the destruction of our business, my path has reflected the possibilities that open up when a new technology destroys and creates simultaneously.

Alongside highly inventive colleagues and clients, I’ve helped create and grow new businesses built on mobile computing, e-commerce, weblogs, social networks, digital content, app development, community management, digital video, and social intelligence.

We capitalized on these tech breakthroughs not merely by understanding their specifications or original use cases but by seeing what they could lead to—by tapping into our creative capacity to imagine and bring new possibilities to life. AI is the next frontier on which to build.

Yes, AIs can encode what has been and suggest probabilities for the future. They can analyze patterns from the past with astonishing accuracy. But they cannot predict how we'll ride the next wave.

The hammer will fall, but what emerges isn’t simply destruction or dead ends. There can be a lot of light in and at the end of a transformation tunnel.

Neither my Dad nor I fully understood it in a moment of failure. It’s a reminder—then and now—that it’s hard to read the label of the jar you’re in. You have to see it from the outside.

We run events every week. If you want to participate, inquire about membership here. If you want to keep up with the perspectives that we hear, you can subscribe to our weekly newsletter.

Are we auto-tuning ourselves into obscurity?

Editor’s Note: Henrik has a knack for provocations. After all, he coined the term ‘donkeycorns.’ In this post, he reflects on the unintended consequences of AI polishing. Do you agree with him?

Remember when auto-tune in music was subtle, a hidden trick to perfect a vocal track? Then came T-Pain and the era where auto-tune became not just visible but celebrated—a feature, not a bug. The same pattern has played out across social media: Instagram filters, touched-up LinkedIn headshots, AI-enhanced profile pictures. We've moved from hiding our digital enhancements to flaunting them.

If you use AI to polish your LinkedIn profile, it will suggest improvements to your bio, enhance your profile picture, and help craft the perfect humble brag about your recent accomplishments. The result is objectively "better"—more professional, more engaging, more likely to attract opportunities. But does it give you an edge when everyone can access the same tools?

What happens when perfection becomes commoditized? When anyone can project an idealized version of themselves? As AI makes perfect self-presentation available to everyone, the value of that perfection plummets. When anyone can generate an idealized AI headshot or have their writing polished by ChatGPT, what becomes scarce—and therefore valuable—is authenticity.

This creates a fascinating paradox: we begin manufacturing imperfection. Using costly signals to demonstrate a lack of costly signals. It wouldn't be the first time. British aristocrats historically showed their status through deliberately shabby clothing (which had to be the right kind of shabby). There's an inverse relationship between the cost of a designer handbag and the visibility of its brand mark. The ultimate flex is not needing to flex at all.

Henrik Werdelin

Perfection and Intimacy

In a world where anyone can present as perfect, imperfection becomes the new premium—but it can't be just any imperfection. It must be curated imperfection, the kind that signals authenticity without looking careless. A perfectly unpolished selfie. An AI-written post with just enough human awkwardness (or Danish spelling mistakes) left in. A bio that feels refreshingly unoptimized.

Our quest for perfection isn't new. Our drive for self-improvement and presenting our best selves is highly adaptive. We know intuitively that polishing our presentation can open doors and create opportunities. There's an evolutionary logic to this impulse; after all, we want to be attractive to those we wish to attract.

But we also know, bone-deep, that being truly seen and accepted for who we are—messy, imperfect, human—is what allows us to form genuine connections. Vulnerability creates intimacy. The things we try hardest to hide—our struggles, fears, and insecurities—are precisely what connect us to others. When someone trusts us with their vulnerabilities, we feel chosen, and it's only when we share our own that we feel truly known.

AI brings this ancient tension between wanting to impress and connect into sharp relief. When we can present a perfectly polished version of ourselves, we're forced to ask: What are we optimizing for? Do we want to be admired or understood? How do these choices shape who we become?


I've built an AI system to track what I eat and give me feedback. I've tried other calorie-tracking mechanisms but found I tended not to report what I wasn't proud of. That doesn't happen with this one because the AI doesn't judge if I overindulge. On the other hand, it doesn't care. At all. So I still send meal photos to my human personal trainer. In the "attention economy," AI can replicate the mechanics of attention, but not the meaning of it.

This dynamic plays out across our digital landscape. LinkedIn is likely full of posts written by ChatGPT, which get posted unread and then copy-pasted unread into ChatGPT, which produces a thoughtful comment that gets posted unread. Yet people still avidly read the AI-written comments on their AI-written posts. Why? Perhaps because even artificial attention scratches a very real itch for recognition.

Perhaps the interesting question isn't whether AI will increase isolation or intimacy, but how it will transform our understanding of connection itself. Just as social media didn't replace friendship but changed how we think about it, AI may redefine how we express our need to know and be known. The truly interesting developments will come when we stop using AI merely to make ourselves look good and start discovering what new forms of connection become possible because of it. So what about you? What will you choose to keep imperfect, and where will you auto-tune yourself?

We run events every week. If you want to participate, inquire about membership here. If you want to keep up with the perspectives that we hear, you can subscribe to our weekly newsletter.

You Came,

You Saw,

You Discoursed

A private thank you note to our CES tour participants

Editor’s Note: Hi. It’s Chmiel. I was your tour guide, along with Toby Daniels. We wanted to give you a sense of the full ON_Discourse experience by sending you a private message full of the things you all said on our tour. As we always say, the provocation is just the start of the discourse. Here is your private (mini) discourse report straight from the floor at CES.

Privacy

This is a private link that we will not promote anywhere. This is just between you and us and each other. If there was someone you saw on tour you want to talk with, let us know and we will try to make a connection for you.

Chatham House

At the beginning of the tour, we told you we were recording our conversations. We then reviewed these recordings to understand how the group responded to our provocations on the tour. Everything you read and hear below is a real thing said by one of you (maybe you recognize your own words?). Like everything else we do, we will keep it anonymous and remove any references to companies.

Public Report

Did you see the report we published with Stagwell? Many of the public takeaways are available there. You might see some of your anonymous quotes there as well.

Listen Up

Speaking of recordings, Toby and I recorded something special for you. It will take you back to the floor and let you hear how we play with Chatham House rules with our friends at Wondercraft, a generative audio platform. Some of our favorite lines have been given alternative voices.

The recap is organized around the primary stops of our tour. We parsed through 12 hours of recordings to select a few of our favorite responses. If anything resonates with you and you feel compelled to draft a public post on this, let me know and we can collaborate on a piece.

Here’s what we “overheard” on the tour…

Matt Chmiel


Metaverse

Every tour had at least two parents with kids who spend real money every month on Roblox.

Claire’s built a metaverse experience in Roblox. So Claire’s is marketing to 13-15-year-olds, and a lot of their traffic was declining from malls, but by establishing Roblox as a place where kids could buy things, they were able to replace a lot of that lost revenue.

The metaverse is definitely not a gimmick.

Small Language Models

I don’t care what you say, this was the coolest exhibition on the whole floor.

I work for a steel company, and we move millions of tons of steel every day, and we're really good at making steel, but that's about like, that's where our technology ends, how we quantify, like the fish counting, like the movement of the product, so that then that can be seen by customers like we just haven't found a way to do that. That seems like the aha moment for me.

We spent too much time at the SLM booth.

What is a SLM again?

Latency is the buffering of AI — people want solutions now, not after a trip to the cloud.

This is a game-changer.

Segway and the Robot Bartender

We pushed and pushed and pushed people to have big ideas here and it didn’t work. You all took pictures of the bartender.

Does the bartender ever actually mix a drink?

I like to mow my lawn.

Glimpse Data & the Robot Barista

A few of our tours opted out of the Glimpse AI market data booth. If you did not see it, here is a brief overview: Glimpse AI uses first-party data to generate deeper and more interactive market research. The robot barista was not about the robot or the coffee, it was from a data company that sells training data to power hardware. In this case, they trained a model to understand coffee brewing so that a robot could do it.

What kind of data can they pull?

We have a ton of data on advertising that is all usable, so you understand a lot more about where advertising is showing up, and then also how it actually works.

We shouldn’t call it synthetic data. Synthetic sounds fake. We should call it proxy data.

The Flying Car

There was obviously the Italian cyber-coupe and the off-grid RV, but the real star of the show was the flying car. I hope all of your IG followers liked and commented on your pics of it.

Is that really a flying car? Is that just a car that has a manned flight?

Actually, it's a plane that goes in the back of your car.

LG & Affectionate Intelligence

We wanted you to see this as invisible, emotional agentic tech. You all started out skeptical, but we could detect some converts to emotional AI and transparent screens.

Every window is a screen? This sounds like a Black Mirror episode.

I'm sort of terrified about the idea of a screen looking at and interpreting my mood and then adjusting and personalizing accordingly because I have a resting annoyed face, so who knows what kind of experiences that's going to provide.

Is this affectionate intelligence or just marketing jargon?

The real test for AI isn’t intelligence but invisibility.

About four weeks ago, there was a company called Home Assistant, which is like home automation technology, and they released their first product for $50 which is almost an Alexa-like product for your home but is completely offline. I feel that's also part of something where I don't want to constantly have Google and Amazon taking my data, but I love the functionality these folks offer. Is there going to be a big shift away from companies like these to completely disconnected products where you can interact with them without having to have your data?

XReal AR Glasses and Spatial Computing

Our primary stop was for XReal glasses, but our conversation along the way was about the future of spatial computing.

Are we really getting rid of screens?

The Apple Vision Pro? I thought it was amazing. But, I mean, there has to be an ecosystem of experiences built out for it. Also, it's still too heavy.

[About the Vision Pro] I was standing on a floor that disintegrated and I… really felt like I was falling.

You know what I love? I love skiing. I ski all the time. I don't want to ski on a virtual reality headset.

TCL AiME

My favorite line of the tour comes from the promotional video of AiME, the AI companion from TCL, “Ai Me loves. Human Loves.” Judging by the look of alarm on most of your faces, you were not impressed. It stimulated a very healthy debate.

That was weirder than LG.

I always think it's fascinating when you pass all these robots and you look at the eyes of every one of them. It's like, so much time and craft and attention was put into the eyes because they're all trying to create this emotional connection, and that's the foundation of it. It's like, if you're going to look at this and you look at it in the eyes, what does that feel like?

Is something like this going to enable humans to become more emotionally intelligent?

[In response] I hope so.

[In response] I actually really disagree. I think the more technology, the less emotionally intelligent people become.

Samsung

The differences between LG and Samsung were quite stark, even though both brands were emphasizing agentic experiences from their connected devices.

I think it’s smarter for Samsung to focus on security like this.

Samsung is focusing in on the connected ecosystem overall, and how it makes your life easier, but also addressing the concerns that people have with AI and with everything connected is, is my data private, is my data secure, and will it be hacked? I think Knox takes really good advantage of that concern by making sure consumers understand that this data is going to be private.

I think that storyline is really important, but also the fact that Samsung is going beyond homes and the everywhere piece with automotive, with ships.

Sony

The final point of the tour was really a refresher on the long-term value of spatial content. Sony was unveiling XYN, a new spatial ecosystem of products that you can look up. By this point of the journey, you were all physically taxed and ready to debate. We will end with a few of the most provocative questions we heard at the end of the tour, as well as a closing note from an unnamed legend in the world of advertising who had an anecdote about Sony to share.

Agentic AI portends the end of the app world.

We are not going to see, at a significant scale, more apps being built, designed, and introduced into the existing ecosystems. We're going to start to see new ecosystems emerge, whether or not it's app-based, but fundamentally AI.

The agentic era is going to be even more significant than the app era. It is about interoperability. It is about these apps and services starting to talk to each other, but on your behalf. So it's the agent that I think is going to replace the app ultimately, and apps will just become services that are embedded into the operating systems on whatever devices we're using.

As promised… One final note

Many years ago, there was a store in New York called The Wiz, and they were the precursors to Best Buy. They went bust. Interesting store. They were a client. And one of the things that they told us, was that Sony, at the time, was the number one manufacturer of televisions and consumer goods like VCRs, et cetera. And they told us that every day, 80% of the people who came into the store came in wanting to buy a Sony. And if you think about it, in those days people bought a new TV every four or five years. Five years, you're sitting there looking like this, and it says Sony on the screen. But only 30 to 40% of the people actually left with a Sony, because the salesperson would sell them a Toshiba, which was another popular brand at the time, and then Samsung came in. The moral of the story is to support the brand. If you don’t constantly innovate, your brand goes away.

Thank you to our partners at Stagwell for organizing the tours and bringing you along. If you want to follow their activations, go to https://www.stagwellglobal.com/.

Thank you to Wondercraft for helping us record our little message. Check them out if you want help generating audio at scale.

And finally, if you want more of the discourse, let us know by reaching out to Toby or me.

Twelve Provocations

About 2025

We do provocations, not predictions.

Matt Chmiel

Editor’s Note: On the last Friday before Thanksgiving, we assembled members and guests for our second annual End of Year provocations call.

Predictions are boring. None of us knows the future and therefore no one cares what you think is going to happen. Your prophecy is as valuable as mine because you're as stupid about the future as me.

Another problem with predictions is that they are designed to benefit the prophet. The end result is either credit or performative humility; either way you win.

Provocations are different. There is no right or wrong, just boring or stimulating. As a result, they can take many more forms: a meaningful question, a firm statement, or an ambiguous feeling.

For the second year in a row, ON_Discourse asked members to prepare a provocation, not a prediction, about the year ahead. In 60 minutes, we tackled the following 12 provocations. We hope at least one of these items stirs up a reaction in you, because this call was full of them.


1.
Content is dead. Experience is king.

No one reads. Social posts vanish into the algorithmic void. AI has not democratized quality. It has democratized quantity. Too many creators are stuck in the old paradigm of making more, but more isn't better. Better is better, and the real magic, I believe, lies in crafting immersive, adaptive experiences that matter.

This provocation did not go unchecked. It was immediately hit with pushback. "Participation in media requires a seismic change in habits, and most people are too passive for that shift."

2.
Why don’t brands do drugs?

There are no mainstream brands that talk about recreational drugs in any kind of interesting or fun way. There are all these kinds of legal psychedelic things coming up, but they're all medical, and they're really, really boring.

At best you see tangential associations with recreational drug use (see: Snoop Dogg at the Olympics), but no brands seem willing to commit to this direct messaging, yet.

Someone noted: "Will Coca-Cola return to its roots?"

3.
AI won’t take jobs; it will take tasks.

AI won't eliminate jobs. That's what most people are afraid of. Still. It will just redefine them, and it's already starting to redefine them by taking over mundane tasks.

This optimism was not shared by everyone. This reaction says it all: "AI has already started taking jobs. I am seeing it. The first hits are in marketing functions, especially here in the Bay Area, where these big tech companies need to hire more AI tech talent, which is extremely expensive. They're doing that by cutting back on their marketing teams, quite simply, laying off a whole bunch of marketers and having the rest of the folks that are left in those marketing departments use their own AI tools."

4.
Streaming platforms are the new Facebook.

Streaming platforms are doing to Hollywood what Facebook did to publishers. What's happening to the studios is what happened when Facebook made its famous pivot to video, forcing all these publishers to change their models and bend to the whims of Facebook's massive algorithm. I think the difference here is that there's more than one streaming platform.

Nobody pushed back on this one, so I won't invent an argument. I'll just build on the provocation by emphasizing the cascading effect of this system. The commodification of content is eroding Hollywood's operating model.

5.
I want the AI bubble to burst.

Right now, AI equals LLMs and Gen AI, but we will not revolutionize the world by generating words and fake videos. AI is much more than that.

As always, I can't tell you who said this, but I can tell you that this person is a founder in AI technology, and the emphasis on looking beyond the LLM version of AI got a lot of people excited.

6.
Sam Altman is the new Elizabeth Holmes.

The house that Sam Altman built is actually going to create a lot of mini Elizabeth Holmes; we're entering this unregulated environment where there's just going to be more fraud because general AI literacy is still low.

People in the room wanted to edit this take; the underlying theme landed, but the reference was wrong. "I think FTX and SBF in particular are a better parallel because there is a technology at the foundation that has real utility but no regulation yet."

7.
Google will win the search wars.

It's very easy for them to flip the switch from being a search-dominant metaphor to a chat-based one with ads, in the way that Perplexity is trying to get towards. I think Google is going to keep winning, and it's very hard for anyone to compete with them.

This provocation had some pushback, but the most effective reaction came from another participant who agreed. This particular member, who runs an AI service provider and has decades of experience in ML, said: "Google has the best training data by a billion miles with every single click that everybody's done for every Google search and how long they spent on a page and where they went and what they saw when they were there. That's why their current version of search, with AI summaries, has less hallucinations. It's very rare to get hallucination in their AI search summaries. I think they will continue to crush search just because the data."

8.
AI will break the Gartner Hype Cycles.

I am sick and tired of hearing about people talking about Gartner's hype cycle, and I think that, like hype cycles aren't real. Things do not happen linearly, and I think we need a better framework to talk about what's happening right now.

Most people in the group saw some value in the hype cycle. Others gave Gartner credit for turning a banal observation about technical adoption into an evergreen marketing funnel.

9.
The Trump bump will not return for news companies.

The spectacle that inflated the value of some of the news platforms in the first term is being met with a total exhaustion of interest in news. And so now they have to come up with a way to sort of engage audiences on things outside of that.

News is entertainment and people are switching the channel. What are newsrooms to do in this environment? The bro podcast network came up; so did Jon Stewart and The Daily Show. Nobody knows what happens next.

Are they tired of the content or the outrage? The notion of fatigue kept coming up. Someone summed it up like this: "Outrage fatigue is temporary, but trust, once broken, is permanent."

10.
AI will kill the resume.

Every resume looks perfect, looks the same, every candidate feels the same kind of thing.

Everyone in the group is having a hard time finding talent, and hearing from friends who are having a hard time finding work. The resume feels like an outdated convention that is starting to break.

11.
LLMs are the NFTs of AI.

I think the LLM bubble is bursting. I think people are realizing that LLMs can take you some of the way there, but then they hit a wall. But the AI bubble is not bursting. People are applying different AI techniques to make AI work properly, so you can use it at scale and trust it.


We have heard this notion repeated in several events. AI is bigger than LLMs and maybe 2025 will be the year we move beyond that aspect of it.

12.
We're all cowards.

I think we're afraid to disagree. I count myself in this by the way. We're afraid to call things out and put ourselves out there.

As one can imagine, people were not ready to embrace this one. One reaction to it: "I thought that provocation sucked."

We run events every week. If you want to participate, inquire about membership here. If you want to keep up with the perspectives that we hear, you can subscribe to our weekly newsletter.

Group Chat Recap
11 • 08 • 24

Your eCommerce site isn't ready for an agent

eCommerce needs to focus on the plumbing first

Matt Chmiel

Editor’s Note: We invited an AI expert to talk about the future of eCommerce in the agentic era.

Are AI agents coming to eCommerce platforms? Not if you believe the takeaways from this Group Chat. It features a bona fide AI expert with academic credentials and multiple successful ventures (including a new one, which we cannot name yet, that serves eCommerce sites).

This session paired AI optimists with eCommerce realists. It was a good one. Here are some takeaways.


Personalization is a mirage

I think the promise of personalization that we have today on our platforms is not there. There's a lot of personalization that's happening on these sites, but it's not real, and it's certainly not a value to most of us. It's why there's so many rails that are run across a PDP or a PLP because they're like shit, maybe they’ll click on this one for no reason.

The future is better search results, not agents

The conversation was practical: before we let agents magically interpret our mood and behavior with perfect recommendations, we should maybe focus on search results that properly understand customer intent.

On one hand this feels obvious; on the other hand, as one of our guests admitted, it is not always the case.

So getting customers to feel, after getting better search results, ‘Oh yeah, that heard me.’ Like, that's amazing. We have a lot of those insights during gifting moments, whether it's holiday, Mother's Day, Father's Day, Valentine's Day. But all we do right now in gift seasons is the lowest common denominator…

eCommerce tech stack is outdated

The most pervasive issue, the one blocking a lot of eCommerce companies from taking advantage of this technology, is that their tech is stuck in 2010. So far what they've done is build wrappers around it; they've put lipstick on it to make it kind of look pretty, but it doesn't function.

We run a Group Chat every week. If you want to participate, inquire about membership here. If you want to keep up with the perspectives that we hear, you can subscribe to our weekly newsletter.

Group Chat Recap
11 • 15 • 24

AI and the Law

Finding the line between government and governance

Matt Chmiel

Editor’s Note: We invited a legal expert on AI and copyright law to talk about the ethical and legal landscape for an AI-powered internet. We can’t tell you who was there but we can tell you what was said.

AI is the ultimate black box. Does that mean entrepreneurs are shielded from liability? We hosted a Group Chat that dug into this question. Here are 3 takeaways:


Section 230 and AI

Liability remains murky when it comes to AI-generated content. The legal shield of Section 230, which protects platforms from user-generated content liability, extends ambiguously into the AI realm.

Questions of liability scatter across the tech stack. The platform that hosts the agent, the brand that sponsors it, and the software developer that launched it all play a role in this equation.

If you are creating an agent that goes out and makes false claims about a product, the developer could claim immunity under Section 230… but this is not a certainty.

Good Governance

You have to understand what your AI does and then you have to rigorously test it against every bad-case scenario. And then you have to keep records of what you're doing.

Red teaming—stress testing AI systems to expose vulnerabilities—is emerging as a critical practice for ethical and functional deployment.

One of the interesting takeaways we heard was a distinction between communication and reporting tools.

You have to make the decision on whether you are a communication tool or a reporting tool. Then we had to make a decision about how our AI responds to extreme scenarios like suicide threats or violence.

Colorado and the States

The federal government is probably not going to set any standards for AI. The real action is happening in the states.

Emerging regulations like the Colorado AI Act reflect a trend toward greater oversight of AI's societal impacts.

If you don't know about it now, you should read up on the Colorado AI Act.

If you don't know what data is training your model by February of 26 and you know you're going to be deploying your tool in Colorado or to consumers or businesses in Colorado, you're going to be out of compliance. So you know you really need to be thinking about that.

We run a Group Chat every week. If you want to participate, inquire about membership here. If you want to keep up with the perspectives that we hear, you can subscribe to our weekly newsletter.

Breakfast Recap
10 • 10 • 24

Inevitable AI

The unavoidable future of generative content?

Editor’s Note: 18 industry leaders in NYC joined us for breakfast to talk about the future of content. The word inevitability was uttered many times, generating a lot of pushback.

We hosted our breakfast in the back room at Gemma Italian Trattoria at the Bowery Hotel in New York City.

The discourse centered on this prediction about AI:

The future of content will be generative, ephemeral, and prompted by your needs, wants, and desires.

This future will come in 3 distinct phases
(we are currently in phase 1)

Phase 1

AI tools for professionals to produce more content more efficiently.

Phase 2

AI tools for non-professionals to produce professional-looking content (similar to blogs and print)

Phase 3

Brands generating consumer content based on targeted user data.

This theory does not predict a total takeover of the content supply chain; cinema will still exist; live sports will still exist; but the majority of content consumption will be generative.

Why it matters

As AI moves from assisting creators to generating personalized content in real time, we’re on the brink of a media transformation that could either revolutionize how we consume or isolate us further into echo chambers.

Matt Chmiel


The table had several key reactions:

The Dystopian Risk

Some attendees warned this will create tribal bubbles, reinforcing personal biases and isolating communities. One participant pointed to political radicalization, noting AI’s role in amplifying divisive content.

"We’ll end up in a civil war before we reach this inevitability."

The Loss of Choice

Concerns were raised about algorithms controlling our consumption:

“We’re just TikTok zombies at this point.”

The fear is that AI will strip away free will, reducing us to passive consumers.

Hope for Personal Content

Others embraced the potential for wildly creative, personalized experiences, like one attendee’s dream of blending Darth Vader with Rocky in a custom AI-generated adventure.

Interactive, participatory content could reshape entertainment into a collaborative experience—where viewers not only watch but star in their own creations.

Too Much?

What happens to shared experiences? If everyone gets a different ending to the same movie, are we losing cultural touchstones? Some worried this could fragment society further, erasing the moments that bring us together.

We run closed-door events every week. If you want to participate, inquire about membership here. If you want to keep up with the perspectives that we hear, you can subscribe to our weekly newsletter.

Group Chat Recap
10 • 03 • 24

Escaping Data Jail

AI is very quickly changing the SaaS business model

Matt Chmiel

Editor’s Note: How much are you paying your SaaS vendor? Do you think AI - whatever that means to you - can change that cost burden and redefine your relationship? This recap is for you.

We invited 2 SaaS founders into a Zoom call with 3 other executives. I can’t tell you who was there, but you can preview some of the discourse.

The takeaways on this list might drive the next conversation you have with your SaaS vendor.


AI Won’t Replace SaaS

SaaS is not going anywhere, but the way it has traditionally operated is about to change dramatically. One of the founders put it this way, “AI stitches together smaller, bespoke solutions,” allowing businesses to create customized workflows that fit their needs. The days of relying on one-size-fits-all platforms are over. Those on the cutting edge are using AI to personalize and streamline their SaaS stacks. If you’re not exploring how AI can take your tools to the next level, you’re already falling behind.

The Golden Age of High Margins Is Fading

The pressure on SaaS margins is mounting. As interest rates rise and customer demands shift, the once-lucrative model of massive upfront spending with guaranteed long-term revenue is fading. One CEO cautioned, “SaaS companies are under more financial strain than ever before.” The key takeaway? Profitability now hinges on constant innovation and real-time value creation. SaaS clients are already eyeing exits from long-term contracts. Why is that?

AI Can Break Out of Data Jail

AI is demolishing the barriers that once made switching SaaS platforms difficult. “Data jail” is becoming a relic, with AI-driven solutions making it easier to move from one system to another. This shift has empowered businesses to pivot quickly and ditch legacy systems without the massive headaches of the past. Is your SaaS provider evolving fast enough? If not, expect to see churn rates rise as switching costs plummet.

You Still Need SaaS

The buzz around AI-generated rapid prototyping is real, but don’t be fooled. While AI makes it easier than ever to build flashy demos, turning these prototypes into robust, scalable solutions is still a major challenge. As one respected executive observed, “Prototypes are about 2% of the work.” The real value lies in operational excellence, something that can’t be replaced by a few lines of code. The smartest leaders are focusing on building sustainable, scalable systems with nimble SaaS solutions.

Do You Want to be Klarna Right Now?

Klarna made headlines by cutting out all of its SaaS contracts, but our SaaS founders were not convinced this is the right long-term move.

“Klarna said we just tore out their CRM, etc. I'm really looking forward to the stories from that. I don't know Klarna at all - and no offense to anybody there - but man, that sounds like a really aggressive move. And I have a feeling that we're going to find some people leaving saying some interesting things about operations.”

Usage-Based Pricing is Coming

The era of flat monthly fees is fading, and usage-based pricing is quickly becoming the new norm, particularly in AI-driven SaaS. One of our guests put it this way: “This is going to create friction for finance teams.” SaaS leaders need to prepare for a future where billing is based on usage, which means rethinking everything from budget forecasts to financial planning. If you're not ready, this shift could catch you off guard.

SaaS Agents?

The future of SaaS isn’t just about automation—it’s about intelligent, autonomous agents that can seamlessly integrate into your workflows. “SaaS is moving towards agentic architectures,” but for how long? This trend is reshaping how businesses think about software. These agents, often paired with conversational UIs, can handle complex tasks, but there’s a catch: without structured data, their effectiveness plummets. The next wave of SaaS innovation will hinge on balancing flexibility with the structured environments needed to make these agents work reliably. Those who master this dynamic will lead the pack; those who don’t will struggle to keep up.

We run a Group Chat every week. If you want to participate, inquire about membership here. If you want to keep up with the perspectives that we hear, you can subscribe to our weekly newsletter.

Group Chat Recap
09 • 27 • 24

'A Bleeping Nightmare'

Top-down transformation nightmares and 2 other takeaways about integrating AI into enterprises from members and guests

Editor’s Note: It was the last Friday in September and so we assembled all our members, experts and guests for our end of the month virtual event. These events are shaped by the group dynamic; we start with a plan, but the group dictates the discourse.

We came into this session with a plan: provoke Peter Pawlick, head of experience design at Proto. Pawlick published a 5-part series on our platform about AI simulations, and his perspective deserved attention. If you haven’t read it yet, here it is in a nutshell: design thinking is dying and many brands and agencies don’t know it yet.

Design thinking is the engine that drives digital transformation. It is an endless iterative and agile process that is designed to ‘move fast and break shit.’ It is more than a method, it is a culture. And that culture is about to be replaced.

Synthetic data and other generative AI systems make it possible to preview ideas before they are designed or built. This means that an enterprise that wants to chase a north star vision can stress test 10,000 pathways to get there before deploying any capital on design or development.

In other words, move fast and break synthetic shit so the first public launch is a guaranteed success (or so the theory goes).

As I was saying, we came into the session with, dare I say, a good plan. But like all good plans, it fizzled on first contact; the group took over the discourse. I love when that happens.

The group was full of agency leaders, enterprise representatives, and founders in AI startups. In other words, all sides of the digital transformation spectrum were covered.

Here are 2 other takeaways from the group, anonymized in the way we always do:

Matt Chmiel


1. The AI Firewall

We heard perspectives from members who have been struggling to sell AI builds and services into corporate clients.

The only groups willing to consider these services are the marketing and customer service departments. The use case is pretty clear: the content and services can live freely outside of the company firewall; mistakes in this instance are a minor issue rather than a compliance nightmare.

The rest of the organization is dragging its feet, for good reason. As we’ve heard in other events, AI is great at rapid prototyping but not at running a product or ensuring compliance. As one insider put it, AI might work for marketing procurement, “but AI within the rest of the organization is going to be pretty slow.”

2. Realistic AI

Now let’s drill deeper: what is the conversation like for the few corporations that are willing to integrate AI deeper into the org-chart? We heard a few perspectives that might influence the way you collaborate with partners.

“AI is not a magic button” and “LLMs are not always the answer to the problem.” The real problem is that expectations for what this technology can actually help with are not well grounded. This creates a communication issue that blows up if not addressed the right way.

One of our members put it bluntly: “Top-down AI transformation is a fucking nightmare.” In other words, employees reject it and compliance pushes back. One blocker comes in after another.

Real change is coming from the bottom up, where employees (or agencies) are hacking together solutions, often without waiting for permission. One group is speeding up brand-asset production and media buying. Another group uncovered a way to track and deal with disinformation in social media using AI tools. This is opening up new service offerings that were never before considered.

This perspective is important because it represents an agency offering for recalcitrant enterprises: do not look for the silver bullet; instead, build tools that solve specific problems. Keep adding new tools, solving new problems, eventually developing a suite of services.

We run a Group Chat every week. If you want to participate, inquire about membership here. If you want to keep up with the perspectives that we hear, you can subscribe to our weekly newsletter.