Editor’s Note: One of the most common references we hear in all of our events is due for a take-down. Our co-founder Toby does his best work when he’s fired up. Check it out and let us know if he is leading you into a 'trough of disillusionment' or a 'plateau of provocations.'

The Gartner Hype Cycle has become the go-to cliché in tech. Not only does this lead to boring, useless perspectives, but it also relies on an outdated research methodology that makes no sense.

To the uninitiated, the Gartner Hype Cycle is a 30-year-old research method that tracks tech adoption over time. Despite its title, the hype cycle is an undulating line (not a cycle) that tracks emerging ideas from the so-called “innovation trigger” through the “peak of inflated expectations,” down into the “trough of disillusionment,” and up the “slope of enlightenment” until it reaches the “plateau of productivity.”

If this sounds like the narrative arc of a hero’s journey, that is because the hype cycle is based on fiction. You can read about its origins from its creator here. It was designed to help organizations time and calibrate their investments in developing technologies. The key idea: Early adopters should expect a dip in enthusiasm — the so-called trough of disillusionment — requiring patience and capital before the technology eventually climbs to the plateau of productivity.

To be clear, I’m not here to take down Gartner. They successfully leveraged an unprovable narrative arc as a brilliant marketing tool for their services. It’s like a technology horoscope that sounds right 65% of the time. It served its purpose, but now we’re dealing with something bigger.

AI transformation is bigger than hype. It is a super trend. If we plot it on a linear curve, we are obscuring much more interesting considerations.

To navigate AI transformation, leaders need a framework that embraces uncertainty and complexity. I’ve spent the past year interviewing dozens of AI leaders and innovation experts. Their thoughts and observations fit into three states of AI transformation: Possible, Potential and Proven.

Here’s how to envision these three states and the essential provocation leaders should ask themselves at each stage:

Toby Daniels

First State: Things That Are Possible

This is AI’s realm of imagination, where ideas spark but haven’t yet materialized into practical use cases. Theoretical concepts like Artificial General Intelligence (AGI) exist here. Think HAL from 2001: A Space Odyssey. AGI would have human-like cognitive ability.

OpenAI’s focus on AGI reflects the company’s ambition to tackle problems that are decades, if not centuries, away from being solved. And IBM’s neuromorphic computing explorations into brain-inspired chips like TrueNorth aim to mimic human cognition. These chips promise transformative computing capabilities but remain highly experimental. In the First State, research papers and experimental algorithms dominate the landscape, and progress is measured in breakthroughs, not revenue.

If you’re investing here, are you betting on the future or indulging in a fantasy?

Discover more discourse directly in your inbox.

Sign up for the ON_Discourse Newsletter.

SUBSCRIBE

Second State: Things That Have Potential

Here, AI technologies are emerging from academic journals and proof-of-concept stages into early-market experimentation. They’ve demonstrated value but haven’t yet reached a point of reliability or scalability. This is where buzzwords often overshadow results, and visionary leaders must decide how to allocate resources to support fragile ideas.

Tesla’s Full Self-Driving vehicles exemplify this potential. They function in controlled scenarios but remain far from delivering consistent, regulatory-approved results.

Stripe’s AI-powered fraud detection, which identifies and mitigates fraud in online payments, also lives in the Second State. While effective, it requires continuous refinement to adapt to new threats and remain reliable at scale.

Are you willing to nurture fragile, early-stage innovations, or are you only here for the immediate return on investment?

Third State: Things That Are Proven

This is AI’s gold rush — the domain of scaled, reliable technologies delivering measurable value. Companies here have turned AI from a speculative bet into a fundamental driver of their operations and profits.

Amazon’s AI-driven recommendation algorithms are foundational to the company’s success, influencing customer purchases and optimizing logistics.

Siemens’ predictive maintenance systems in manufacturing ensure operational efficiency, reducing downtime and saving billions annually. And UPS’ AI-optimized logistics, dubbed On-Road Integrated Optimization and Navigation or ORION, optimizes delivery routes in real time, saving millions in fuel and time.

Will you leverage these for the present while building for the future? Or will you let comfort become your cage?

Three Challenges AI Leaders Must Accept

Being an AI leader capable of handling the three states of AI transformation isn’t about frameworks, acronyms or decks. It’s about who you are amid ambiguity, pressure and doubt.

The most productive conversations I’ve had over the past year have involved the three components below. If you drive AI transformation in your enterprise, ask yourself:

1. Can you embrace being wrong?

Leading AI means making calls with incomplete information. You will fail. The question isn’t whether you’ll stumble — it’s whether you’ll learn fast enough to stay in the race. When was the last time you admitted a mistake to your team? If you can’t remember, you’re already in trouble.

2. Are you ready to rethink your identity?

You’re not a decision-maker; you’re a decision-shaper. Your role isn’t to control outcomes — it’s to create environments where great outcomes emerge. How often do you let your team’s experiments challenge your instincts?

3. Can you manage fear — yours and theirs?

AI is a pressure cooker. It amplifies anxieties about jobs, ethics and the future. You can’t outsource courage to a playbook. Leadership means stepping into those conversations, not avoiding them. Have you addressed your team’s fears about AI — or have you assumed their silence means support?

This moment requires breaking out of regimented ways of thinking. Like classical music, classical leadership thrives on precision and control. What’s needed is jazz leadership, which thrives on responsiveness and improvisation. AI’s pace means leaders must constantly adapt to new rhythms and riff on emerging opportunities.

Expecting AI to follow the classical path of the Gartner Hype Cycle will only lead you into a cacophony of failure.

We run events every week. If you want to participate, inquire about membership here. If you want to keep up with the perspectives that we hear, you can subscribe to our weekly newsletter.

AI doesn't understand a damn thing.

Editor’s Note: A year ago, we talked about the rise of the so-called EQ internet, and now we are excited to share a perspective from a founder who is actually building the damn thing.

AI has a problem: it can sound intelligent, but it doesn’t understand a damn thing, not even with so-called “reasoning.” We’ve spent years marveling at the fluency of Large Language Models (LLMs), watching them churn out perfectly structured paragraphs, draft polite emails, and even mimic human warmth. But here’s the reality—LLMs are linguistic illusionists, dressing up statistical predictions as understanding. And that’s where the hype stops.

Max Weidemann

The Great AI Deception

People keep asking, "When will LLMs become empathetic?" The answer is simple: never. Not as long as we’re relying on black-box models trained to predict words, not meaning. LLMs don’t feel, they don’t reason (unless you think humans reason by telling themselves to think step-by-step, then meticulously go through every step in their thoughts for 30 seconds, and then form a response), and they certainly don’t care. They spit out responses based on probability, not insight. You can ask them for life advice, and they’ll generate something that sounds right—but without any real understanding of your situation, your motives, or your emotional state.

Let’s be clear: AI-generated empathy isn’t empathy. It’s a script. It’s a formula. And the moment you need real understanding, real nuance, real depth, these systems crumble. That’s because empathy isn’t about mirroring the right words—it’s about knowing why those words matter.


The Missing Layer: Real Emotional Intelligence

If we want AI to be truly useful in high-stakes environments—hiring, leadership, relationships, mental health—it needs more than linguistic gymnastics. It needs real emotional intelligence (EQ). That doesn’t mean programming in "compassionate" responses or tweaking outputs to sound more human. It means AI must be able to interpret personality, motivation, psychological states, and behavior over time.

LLMs can’t do that. They don’t track human patterns, they don’t learn from long-term interactions, and they certainly don’t recognize why someone might be saying one thing while meaning another. That’s what EQ-driven AI solves. Not by generating better generic responses, but by tailoring interactions to the individual—based on psychology, not word probability.

Why This Matters Everywhere

Without EQ, AI is useless in the places where human understanding actually matters. HR tech powered by LLMs? That’s just glorified keyword matching, completely missing whether a candidate fits a team’s culture or will thrive in a company’s work environment. AI-powered therapy chatbots? They can parrot self-help advice, but they can’t detect when someone is on the verge of burnout or spiraling into a depressive episode. AI in customer service? Sure, it can say "We understand your frustration," but it doesn’t understand anything.

The world doesn’t need more artificially polite chatbots. It needs AI that actually understands people—that can read between the lines, identify underlying motivations, and adapt dynamically. Otherwise, we’re just building fancier parrots that sound good but know nothing.

The Future: AI That Gets You

The next wave of AI isn’t about making LLMs sound more human—it’s about making AI think more human. That means moving beyond black-box predictions and into explainable, psychology-based models that process human emotions, intent, and long-term behavioral patterns. It means AI that doesn’t just summarize data but can tell you why someone is likely to succeed in a role, why a team is struggling, why a customer is about to churn.


AI Transformation: Solve Small and Think Like a Founder

Don't fall for the big talk

Editor’s note: Our Co-Founder has developed this perspective about AI transformation after hearing countless talks about the so-called AI revolution. We think it’s such a good break from the conventional approach to AI adoption that we organized a summit around it.

Artificial Intelligence is sold to the C-suite as transformation at scale—a revolution in business, a redefinition of the workforce, a paradigm shift. Every AI keynote, whitepaper, and corporate summit emphasizes “epic transformation,” the kind that reshapes industries overnight.

But here’s the truth: AI transformation rarely happens in a single leap. Instead, it evolves through a series of incremental, often messy, small-scale shifts. And it’s in those smaller moves—often overlooked in corporate case studies—where AI’s true impact is being felt. This is the Solve Small approach: focusing on targeted, bottom-up AI interventions that remove inefficiencies while preserving the human touch where it matters most.

This is where AI transformation mirrors the way great founders run their companies. Conventional business wisdom says that scaling an organization requires distributing decision-making, adding layers of management, and diffusing control. Yet, the most effective founder-led companies—like Apple, Airbnb, Shopify, and Nvidia—reject this model. Instead, they remain deeply involved in the details that matter, ensuring that speed, adaptability, and clarity drive their organization forward. AI transformation requires the same approach: high-touch, iterative, and deeply embedded within the business.

Toby Daniels

Founder, ON_Discourse; former Chief Innovation Officer, Adweek; Founder and Chair, Social Media Week


The Problem With Epic AI

The way we talk about AI inside boardrooms is broken. The discourse is full of sweeping, cinematic narratives—AI will “reinvent how we work,” “unlock human potential,” and “create limitless efficiency.” Yet, this kind of hype obscures the real work required to integrate AI successfully.

Consider the C-suite executive who leaves an AI conference with visions of radical automation, only to return to an organization struggling with basic data hygiene. Or the startup founder promising a fully autonomous AI-powered workflow, only to realize that employees don’t trust AI-generated insights. The gap between expectation and execution is vast because the AI discourse favors spectacle over substance.

This is why AI transformation should be approached like a founder running their company—not through bureaucratic committees and abstract strategies, but through direct involvement, rapid iteration, and a relentless focus on solving small, meaningful problems.

Thinking Like a Founder: AI Transformation in Small, Impactful Steps

The best founder-led companies thrive because they embrace hands-on decision-making and fast, iterative improvements. AI adoption should follow a similar model. Here’s what that looks like in practice:

1. Solve Small: Incremental Change That Compounds Over Time

Great founders don’t overhaul their entire organization overnight; they make continuous, strategic adjustments. AI transformation should follow the same principle. The most effective AI-driven businesses treat AI like compounding interest—small investments that build on each other:

  • A sales team starts with AI-assisted meeting transcriptions, then layers in automated CRM updates, and later integrates predictive sales forecasting.
  • A manufacturing plant implements AI for maintenance logs, extends it to predictive downtime prevention, and eventually integrates it into supply chain optimization.

Like a founder iterating on product development, AI transformation isn’t about flipping a switch—it’s about stacking small improvements until they create something larger than the sum of their parts.

2. AI as a Bottom-Up, Ground-Level Initiative

The best ideas don’t always come from leadership—they emerge from people closest to the work. Founder-led organizations like Nvidia empower employees at every level to share insights directly with leadership. AI adoption should work the same way:

  • A call center rep starts using ChatGPT to summarize support tickets before management even considers AI integration.
  • A junior designer leverages AI-generated layouts to speed up work, improving both quality and output.
  • A coder uses AI-assisted debugging not because leadership mandated it, but because it’s simply faster and more efficient.

AI initiatives should mirror the Solve Small model—where leadership listens, learns, and scales what works, rather than imposing AI from the top down.

3. AI as a Fast, Iterative Process

Founder-led companies don’t rely on long planning cycles. Airbnb’s Brian Chesky eliminated unnecessary layers of management and engaged directly with product teams to make faster, better decisions. AI transformation should follow the same principle:

  • A legal team pilots AI contract review with one clause at a time rather than automating the entire process at once.
  • A retail company A/B tests AI-generated product descriptions for a subset of SKUs before rolling it out across the catalog.
  • A logistics firm implements AI-driven route optimization for a single delivery region before expanding nationwide.

Successful AI adoption moves at the pace of iteration, not perfection.


4. AI as Local, Not Just Enterprise-Wide

Not every AI innovation needs massive cloud infrastructure. The most impactful AI-driven improvements happen at the local level—on an individual’s laptop, phone, or department-specific system:

  • A doctor using AI for voice-to-text medical notes on their own device, rather than a hospital-wide AI integration.
  • A journalist using AI summarization locally for research without relying on centralized editorial AI mandates.
  • A salesperson using an AI-powered meeting assistant that operates on their phone, rather than waiting for IT to implement a corporate-wide AI tool.

5. AI as Specific, Not Broad

Great founders don’t try to do everything at once. They focus on solving one problem exceptionally well before expanding. AI transformation should be approached the same way:

  • AI for one type of document scanning (e.g., invoices) works better than trying to automate all document types at once.
  • AI in one language model per department (e.g., legal vs. marketing) avoids generic, diluted results.
  • AI that refines a single metric (e.g., reducing customer service handle time) often outperforms AI designed to “optimize” an entire workflow.

6. AI as an Invisible, Seamless Part of Work

Founder-led companies prioritize clarity—teams work best when they know exactly what to focus on. AI should operate the same way: it should be so seamlessly integrated that it disappears into the workflow.

  • AI-powered email filters reduce spam and prioritize important messages.
  • AI-driven search ranking surfaces better results without users thinking about it.
  • AI-enhanced writing suggestions feel like part of the workflow, not a separate tool.

The Future of AI Is Solve Small—And Founder-Driven

Big AI transformation stories will always dominate headlines, but in reality, the organizations that win will be the ones that think like great founders—staying hands-on, moving fast, and solving small, again and again, until the transformation is undeniable.

If business leaders want to “go big” on AI, they should start by solving small—and staying directly involved every step of the way.

Interested in attending the Summit? Learn more and request an invitation here.

Editor’s Note: We loved this perspective from Chris Perry, one of our most literary members. It frames the current technical disruptions with a critical historical context. In addition to writing great essays, Perry literally wrote the book on AI Agents. We can’t recommend it highly enough.

The sledgehammer fell in 1992. I was twenty-two, fresh out of college, and returning to my father's world. His graphic arts studio was a place I'd known growing up but never understood until I worked there with him.

Everything had changed by the time I arrived; we just hadn’t recognized it yet. The device of destruction wasn't physical. It was digital, invisible, and ruthless. The sledgehammer was Photoshop.

Until then, my Dad was a force in town. To understand him was to understand American business before it became “virtual.” He was a social animal with ambition unbound by caution or vision. His handshakes could hurt, his laughter echoed, and his stories stretched but never broke. He ran his studio as he carried himself. It was a hothouse where people who made things happen—and made things—were gods.

For two decades, they were at the center of the advertising world. In Detroit, art met industry more than anywhere else. His crew knew the essential in between—how to make a car shine on the page in ways it never quite did on the showroom floor.

His designers, drawers, typesetters, and camera techs were craft workers. It was fitting that they worked in a creative factory with a particular feel and scent that reflected the times. The chemicals could strip paint. The smell of paper emanated fresh from the cutter. The constant cigarette smoke hung like clouds around the ceiling lights. When the automation came, the air changed.

The hammer dropped, erasing everything we knew, including the feel. It happened one pixel at a time.

Chris Perry

Gradually, Then Suddenly

As the saying goes, Photoshop’s impact hit gradually, then suddenly. Steve Jobs introduced the Macintosh in 1984 with his famous Super Bowl commercial. It featured an actual sledgehammer thrown at conformity. Jobs didn't say then that his bicycle for the mind would become a wrecking ball for certain kinds of creative work—my Dad's era of creative work.

The Mac led to new software, most notably Photoshop. The effect wasn't immediate but gained momentum by the early 1990s. Jobs' beige computer boxes started showing up on more desktops in the workplace. Once new software was loaded into them, the creative rhythms changed. The click-click-click of mouse buttons began to drown out the scratch of pencils and the squeak of Pentel marker tips.

My Dad and his band (the present company included) didn't adapt fast or fundamentally enough. Revenue and margins shrank as clients took the work in-house. We doubled down on what we knew, only accelerating our demise. A new technology, and those who knew how to use it, dismantled the business in about 24 months.

We should have seen the hammer coming because it was already there. The lesson: Not reacting to a technological wave until it breaks over you is more than just a business failure—it can be an enduring, personal failure. I promised myself never to be caught so ill-prepared again.


Magical Automated Workflows

Photoshop turned 35 last week. If you were to imagine a single icon representing a creative transformation, it would be the "Magic Wand." One-click. That's all it took. A click and similar colors were selected as if by sorcery. Same with a click to alter the composition of an image. Ditto for envisioning variations of a scene.

What had once required careful technique and training was now available to anyone with access to the software and the patience to play and learn with it.

The magic wasn't just in what the wand could do but in what it represented. The transition from physical to digital craft offered efficiency.

Ironically, Photoshop—and the efficient workflows it led to—also came from family. Here is a bit of backstory on how it came to be.

Thomas and John Knoll grew up in a house that valued art and technology. They stood at the crossroads of two worlds, uniquely positioned to bridge them.

In 1987, Thomas Knoll wanted to display grayscale images on his Mac's black-and-white monitor. It was a practical problem with what seemed like a limited solution. His brother John, working at Industrial Light & Magic, saw further. He convinced Thomas to expand the program to handle color on the new Macintosh II. What began as a personal project caught the attention of the industry's power players.

A capability previously reserved for mainframes could now run on a PC. Adobe recognized the potential immediately, securing distribution rights in 1988 and releasing Photoshop 1.0 for Macintosh in 1990.

Photoshop didn't transform creative work in isolation. PageMaker, released by Aldus in 1985, opened the door to what would become known as desktop publishing. Photoshop and PageMaker encoded creative techniques and integrated workflows into software that creatives and producers could use directly. Those in the studio world who didn’t adapt to augmentation and changing workflows were permanently displaced.

The Magic Fades Without Imagination

Decades later, a much bigger automation wave is building. Intelligent software is reshaping all creative and knowledge work, echoing what happened in our studio.

Some automation parallels are striking. Both represent shifts from manual to digital creation and spark similar existential questions about the value of work and the identities of those who do it. With desktop publishing, page designers and typesetters rightly feared obsolescence. Today, anyone who produces knowledge, research, or creative work naturally expresses the same concerns about generative AI.

There are also critical differences between automation then and now.

Desktop publishing tools were extensions of human work. They replicated technical aspects but required direct human guidance for every decision. Generative AI tools can generate work with minimal direction, shifting human expertise from execution to curation.

Desktop tools required understanding design principles, but generative AI can produce seemingly decent output without the user knowing the underlying fundamentals.

Perhaps most significantly, the desktop publishing revolution unfolded over a decade while generative AI's capabilities are evolving at an incredible speed.

The importance of judgment, discernment, and taste in delivering commercial-grade work remains unchanged, whether we’re talking about Photoshop thirty years ago or OpenAI’s latest reasoning model today.

Consider Photoshop's meaning as a metaphor for the current moment. It automated specific, repeatable, known tasks and made technical processes faster and more accessible. However, it could not replicate the mystery of creativity or imagination, which no software has yet managed to do.

And therein lies a twist. Looking back, what reads like a family business failure doesn’t tell the whole story.

After experiencing the destruction of our business, my path reflects possibilities as a new technology destroys and creates simultaneously.

Alongside highly inventive colleagues and clients, I’ve helped create and grow new businesses built on mobile computing, e-commerce, weblogs, social networks, digital content, app development, community management, digital video, and social intelligence.

We capitalized on these tech breakthroughs not merely by understanding their specifications or original use cases but by seeing what they could lead to—by tapping into our creative capacity to imagine and bring new possibilities to life. AI is the next frontier on which to build.

Yes, AIs can encode what has been and suggest probabilities for the future. They can analyze patterns from the past with astonishing accuracy. But they cannot predict how we'll ride the next wave.

The hammer will fall, but what emerges isn’t simply destruction or dead ends. There can be a lot of light in and at the end of a transformation tunnel.

Neither my Dad nor I fully understood it in a moment of failure. It’s a reminder—then and now—that it’s hard to read the label of the jar you’re in. You have to see it from the outside.


Are we auto-tuning ourselves into obscurity?

Editor’s Note: Henrik has a knack for provocations. After all, he coined the term ‘donkeycorns.’ In this post, he reflects on the unintended consequences of AI polishing. Do you agree with him?

Remember when auto-tune in music was subtle, a hidden trick to perfect a vocal track? Then came T-Pain and the era where auto-tune became not just visible but celebrated—a feature, not a bug. The same pattern has played out across social media: Instagram filters, touched-up LinkedIn headshots, AI-enhanced profile pictures. We've moved from hiding our digital enhancements to flaunting them.

If you use AI to polish your LinkedIn profile, it will suggest improvements to your bio, enhance your profile picture, and help craft the perfect humble brag about your recent accomplishments. The result is objectively "better"—more professional, more engaging, more likely to attract opportunities. But does it give you an edge when everyone can access the same tools?

What happens when perfection becomes commoditized? When anyone can project an idealized version of themselves? As AI makes perfect self-presentation available to everyone, the value of that perfection plummets. When anyone can generate an idealized AI headshot or have their writing polished by ChatGPT, what becomes scarce—and therefore valuable—is authenticity.

This creates a fascinating paradox: we begin manufacturing imperfection. Using costly signals to demonstrate a lack of costly signals. It wouldn't be the first time. British aristocrats historically showed their status through deliberately shabby clothing (which had to be the right kind of shabby). There's an inverse relationship between the cost of a designer handbag and the visibility of its brand mark. The ultimate flex is not needing to flex at all.

Henrik Werdelin

Perfection and Intimacy

In a world where anyone can present as perfect, imperfection becomes the new premium—but it can't be just any imperfection. It must be curated imperfection, the kind that signals authenticity without looking careless. A perfectly unpolished selfie. An AI-written post with just enough human awkwardness (or Danish spelling mistakes) left in. A bio that feels refreshingly unoptimized.

Our quest for perfection isn't new. Our drive for self-improvement and presenting our best selves is highly adaptive. We know intuitively that polishing our presentation can open doors and create opportunities. There's an evolutionary logic to this impulse; after all, we want to be attractive to those we wish to attract.

But we also know, bone-deep, that being truly seen and accepted for who we are—messy, imperfect, human—is what allows us to form genuine connections. Vulnerability creates intimacy. The things we try hardest to hide—our struggles, fears, and insecurities—are precisely what connect us to others. When someone trusts us with their vulnerabilities, we feel chosen, and it's only when we share our own that we feel truly known.

AI brings this ancient tension between wanting to impress and connect into sharp relief. When we can present a perfectly polished version of ourselves, we're forced to ask: What are we optimizing for? Do we want to be admired or understood? How do these choices shape who we become?


I've built an AI system to track what I eat and give me feedback. I've tried other calorie-tracking mechanisms but found I tended not to report what I wasn't proud of. That doesn't happen with this one because the AI doesn't judge if I overindulge. On the other hand, it doesn't care. At all. So I still send meal photos to my human personal trainer. In the "attention economy," AI can replicate the mechanics of attention, but not the meaning of it.

This dynamic plays out across our digital landscape. LinkedIn is likely full of posts written by ChatGPT, which get posted unread and then copy-pasted unread into ChatGPT, which produces a thoughtful comment that gets posted unread. Yet people still avidly read the AI-written comments on their AI-written posts. Why? Perhaps because even artificial attention scratches a very real itch for recognition.

Perhaps the interesting question isn't whether AI will increase isolation or intimacy, but how it will transform our understanding of connection itself. Just as social media didn't replace friendship but changed how we think about it, AI may redefine how we express our need to know and be known. The truly interesting developments will come when we stop using AI merely to make ourselves look good and start discovering what new forms of connection become possible because of it. So what about you? What will you choose to keep imperfect, and where will you autotune yourself?

We run events every week. If you want to participate, inquire about membership here. If you want to keep up with the perspectives that we hear, you can subscribe to our weekly newsletter.

Don't Listen to Us, Listen to Them

Announcing our first round of summit speakers

Learn more

Editor’s note: After 18 months and hundreds of events, conversations, and activations, we curated a group of executives who have a real story to share about practical AI transformation. These are the people who are solving small for the enterprise.

No one comes to an ON_Discourse event to hear me or anyone on my team speak. We are provocateurs, not pontificators. We leave that to the real experts: the executives who are building, leading, and doing the hard things that will move markets. This type of expert is hard to find in the AI era.

These experts are driving meaningful AI adoption at the enterprise level. To do this, they abandoned epic and hyperbolic AI theories in favor of practical, immediate investments that improve business outcomes today. We call that solving small, and we organized a summit around their stories.

And on March 26, we are going to provoke them into telling their story in a new way. I am excited to share our first round of confirmed speakers for the Solve Small ON_Discourse Summit on AI Transformation.

Setting the stage with the Solve Small mindset—why small, tactical shifts lead to big impact.

Dan Gardner

Co-Founder and Executive Chairman, Code and Theory, ON_Discourse

The rise of Agentic Managers and what they mean for the future of management, leadership, and productivity.

Katherine von Jan

Founder, CEO, Tough Day

How a global marketing team is integrating AI, not just experimenting with it.

Don McGuire

CMO, Qualcomm Incorporated

The AI guardrails every business needs—and why the entire C-Suite needs to get on board.

Mark Howard

President, COO, TIME

In the coming days and weeks, we will announce more provocateurs, agitators, builders, and makers who are driving enterprise-level transformation from the bottom up.

What to expect at The ON_Discourse Summit

This is not another AI conference filled with high-level platitudes. ON_Discourse is designed for those leading AI transformation inside the enterprise—across functions, across teams, and across the C-suite.

  • Sharp, provocation-driven keynotes that move beyond theory and into action.
  • Small-group discussions designed to generate practical, real-world strategies.
  • A cross-functional approach that goes beyond AI as a tech initiative to AI as a business transformation tool.

Why Solve Small?

Big AI transformation stories dominate the headlines, but the most meaningful change happens at a smaller scale:

  • Small is implementable—today, not next year.
  • Small is iterative—it can fail, adapt, and evolve.
  • Small is tangible—it moves beyond theory into action.
  • Small is powerful—because when compounded, it leads to massive transformation.

Thank You!

You are all set. Your membership is now active and we will see you at the Summit on March 26.

Spread the Word

Let your clients and your network know about the Summit.

Check Your Email

We will be sending you an official onboarding email which includes details on your membership benefits, event access, and how to connect with the community.

Follow ON_Discourse

Listen to the Podcast and subscribe to the Newsletter.



You Came, You Saw, You Discoursed

A private thank you note to our CES tour participants

Editor’s Note: Hi. It’s Chmiel. I was your tour guide, along with Toby Daniels. We wanted to give you a sense of the full ON_Discourse experience by sending you a private message full of the things you all said on our tour. As we always say, the provocation is just the start of the discourse. Here is your private (mini) discourse report straight from the floor at CES.

Privacy

This is a private link that we will not promote anywhere. This is just between you and us and each other. If there was someone you saw on the tour whom you would like to talk with, let us know and we will try to make a connection for you.

Chatham House

At the beginning of the tour, we told you we were recording our conversations. We then reviewed these recordings to understand how the group responded to our provocations on the tour. Everything you read and hear below was actually said by you all (maybe you recognize your own words?). Like everything else we do, we will keep it anonymous and remove any references to companies.

Public Report

Did you see the report we published with Stagwell? Many of the public takeaways are available there. You might see some of your anonymous quotes there as well.

Listen Up

Speaking of recordings, Toby and I recorded something special for you. It will take you back to the floor and let you hear how we play with Chatham House rules with our friends at Wondercraft, a generative audio platform. Some of our favorite lines have been given alternative voices.


The recap is organized around the primary stops of our tour. We parsed through 12 hours of recordings to select a few of our favorite responses. If anything resonates with you and you feel compelled to draft a public post on this, let me know and we can collaborate on a piece.

Here’s what we “overheard” on the tour…

Matt Chmiel


Metaverse

Every tour had at least two parents with kids who spend real money every month on Roblox.

Claire’s built a metaverse experience in Roblox. So Claire’s is marketing to 13-15-year-olds, and a lot of their traffic was declining from malls, but by establishing Roblox as a place where kids could buy things, they were able to replace a lot of that lost revenue.

The metaverse is definitely not a gimmick.

Small Language Models

I don’t care what you say, this was the coolest exhibition on the whole floor.

I work for a steel company, and we move millions of tons of steel every day, and we're really good at making steel, but that's about like, that's where our technology ends, how we quantify, like the fish counting, like the movement of the product, so that then that can be seen by customers like we just haven't found a way to do that. That seems like the aha moment for me.

We spent too much time at the SLM booth.

What is a SLM again?

Latency is the buffering of AI — people want solutions now, not after a trip to the cloud.

This is a game-changer.

Segway and the Robot Bartender

We pushed and pushed and pushed people to have big ideas here and it didn’t work. You all took pictures of the bartender.

Does the bartender ever actually mix a drink?

I like to mow my lawn.

Glimpse Data & the Robot Barista

A few of our tours opted out of the Glimpse AI market data booth. If you did not see it, here is a brief overview: Glimpse AI uses first-party data to generate deeper, more interactive market research. The robot barista was not about the robot or the coffee; it came from a data company that sells training data to power hardware. In this case, they trained a model to understand coffee brewing so that a robot could do it.

What kind of data can they pull?

We have a ton of data on advertising that is all usable, so you understand a lot more about where advertising is showing up, and then also how it actually works.

We shouldn’t call it synthetic data. Synthetic sounds fake. We should call it proxy data.

The Flying Car

There was obviously the Italian cyber-coupe and the off-grid RV, but the real star of the show was the flying car. I hope all of your IG followers liked and commented on your pics of it.

Is that really a flying car? Is that just a car that has a manned flight?

Actually, it's a plane that goes in the back of your car.

LG & Affectionate Intelligence

We wanted you to see this as invisible, emotional agentic tech. You all started skeptically but we could detect some converts to emotional AI and transparent screens.

Every window is a screen? This sounds like a Black Mirror episode.

I'm sort of terrified about the idea of a screen looking at and interpreting my mood and then adjusting and personalizing accordingly because I have a resting annoyed face, so who knows what kind of experiences that's going to provide.

Is this affectionate intelligence or just marketing jargon?

The real test for AI isn’t intelligence but invisibility.

About four weeks ago, there was a company called Home Assistant, which is like home automation technology, and they released their first product for $50 which is almost an Alexa-like product for your home but is completely offline. I feel that's also part of something where I don't want to constantly have Google and Amazon taking my data, but I love the functionality these folks offer. Is there going to be a big shift away from companies like these to completely disconnected products where you can interact with them without having to have your data?

XReal AR Glasses and Spatial Computing

Our primary stop was for XReal glasses, but our conversation along the way was about the future of spatial computing.

Are we really getting rid of screens?

The Apple Vision Pro? I thought it was amazing. But, I mean, there has to be an ecosystem of experiences built out for it. Also, it's still too heavy.

[About the Vision Pro] I was standing on a floor that disintegrated and I… really felt like I was falling.

You know what I love? I love skiing. I ski all the time. I don't want to ski on a virtual reality headset.

TCL AiME

My favorite line of the tour comes from the promotional video of AiME, the AI companion from TCL, “Ai Me loves. Human Loves.” Judging by the look of alarm on most of your faces, you were not impressed. It stimulated a very healthy debate.

That was weirder than LG.

I always think it's fascinating when you pass all these robots and you look at the eyes of every one of them. It's like, so much time and craft and attention was put into the eyes because they're all trying to create this emotional connection, and that's the foundation of it. It's like, if you're going to look at this and you look at it in the eyes, what does that feel like?

Is something like this going to enable humans to become more emotionally intelligent?

[In response] I hope so.

[In response] I actually really disagree. I think the more technology, the less emotionally intelligent people become.

Samsung

The differences between LG and Samsung were quite stark, even though both brands were emphasizing agentic experiences from their connected devices.

I think it’s smarter for Samsung to focus on security like this.

Samsung is focusing in on the connected ecosystem overall, and how it makes your life easier, but also addressing the concerns that people have with AI and with everything connected is, is my data private, is my data secure, and will it be hacked? I think Knox takes really good advantage of that concern by making sure consumers understand that this data is going to be private.

I think that storyline is really important, but also the fact that Samsung is going beyond homes and the everywhere piece with automotive, with ships.

Sony

The final point of the tour was really a refresher on the long-term value of spatial content. Sony was unveiling XYN, a new spatial ecosystem of products that you can look up. By this point of the journey, you were all physically taxed and ready to debate. We will end with a few of the most provocative questions we heard at the end of the tour, as well as a closing note from an unnamed legend in the world of advertising who had an anecdote about Sony to share.

Agentic AI portends the end of the app world.

We are not going to see, at a significant scale, more apps being built, designed, and introduced into the existing ecosystems. We're going to start to see new ecosystems emerge, whether or not it's app-based, but fundamentally AI.

The agentic era is going to be even more significant than the app era. It is about interoperability. It is about these apps and services starting to talk to each other, but on your behalf. So it's the agent that I think is going to replace the app ultimately, and apps will just become services that are embedded into the operating systems on whatever devices we're using.

As promised… One final note

Many years ago, there was a store in New York called The Wiz, and they were the precursors to Best Buy. They went bust. Interesting store. They were a client. And one of the things that they told us, was that Sony, at the time, was the number one manufacturer of televisions and consumer goods like VCRs, et cetera. And they told us that every day, 80% of the people who came into the store came in wanting to buy a Sony. And if you think about it, in those days people bought a new TV every four or five years. Five years, you're sitting there looking like this, and it says Sony on the screen. But only 30 to 40% of the people actually left with a Sony, because the salesperson would sell them a Toshiba, which was another popular brand at the time, and then Samsung came in. The moral of the story is to support the brand. If you don’t constantly innovate, your brand goes away.

Thank you to our partners at Stagwell for organizing the tours and bringing you along. If you want to follow their activations, go to https://www.stagwellglobal.com/.

Thank you to Wondercraft for helping us record our little message. Check them out if you want help generating audio at scale.

And finally, if you want more of the discourse, let us know by reaching out to Toby or me.

A Year in Discourse

Getting Comfortable with Uncertainty

Editor’s Note: We provoked our co-founder to get introspective about 2024. Unsurprisingly, he turned it back to the discourse. We think it’s a good reflection of the sentiments we keep hearing in our events.

It’s hard for me to fully take stock of the year we’ve just had. You probably feel the same. For me, 2024 was about being in a constant state of interrogation.

Together with my team and with Matt Chmiel at my side, I’ve hosted summits for Fortune 100 brands, countless group chats, in-person roundtable discussions, and podcast conversations, and I’ve personally interviewed hundreds of business executives, startup entrepreneurs, technology experts, and investors. I’ve listened, transcribed, distilled, and synthesized. We’ve published multiple reports and over 100 articles.

During all of this I have attempted to embody the values we established when we first started ON_Discourse: Provoke, Listen, and Change. It’s not always easy. Groupthink is the antithesis of these values. People are so sure of themselves. They are also mostly wrong. We all are. Especially about the future, and almost certainly when it comes to AI.

What I have learned, what I am certain of, and what I believe we must carry forward into 2025 is this: the ability to provoke new ways of thinking and adapt to ambiguity is no longer optional. It is the foundation of modern leadership.

Toby Daniels

AI: The Mirror We Didn’t Expect

When we asked our members earlier in the year if they would implant a neurochip to eliminate mistakes, the responses revealed far more about humanity than technology. One CEO’s words resonated with me so much: “What if our mistakes are what make us human?”

Throughout 2024, AI forced us to question everything—creativity, empathy, work itself. SaaS companies watched traditional models erode as AI introduced per-seat chaos. Meanwhile, leaders marveled at AI tools that seemed to wield emotional intelligence, leaving us both amazed and unsettled.

One member shared a provocation I can’t shake: “AI can make us more emotionally intelligent—if we allow it.” Yet this year made me less certain than ever. Should we let AI shape our humanity, or must we shape it first?

Spatial Computing: The Future or Another Hype Cycle?

When my cofounder Dan Gardner shared the provocation, “Spatial is not the new smartphone; it’s the next internet,” during a summit we held for a Fortune 100 brand, it sparked a visceral, fascinating reaction. Over the course of the summit, we debated whether spatial computing’s promise was transformative or just pattern-matching old narratives onto new tech. Remember, at the start of the year, we wrote about Vision Pro, and by November, Apple had announced it was winding down manufacturing of the device. But Meta also announced Orion, its mixed-reality glasses, which was almost universally well received. In one year we’ve gone from thinking we understood the future, to having serious doubts, to feeling almost certain again. We’re basically wrong, most of the time.

This is the tension we love. The difference between defining and exploring is always palpable. Spatial computing isn’t just a technology; it’s a challenge to how we see and name our future. What if the struggle to define it is the point?


The Content Paradox

At the start of the year, I made the claim that content had been commoditized by AI. But something deeper emerged: a yearning not for content, but for connection. In our closed group chats, we noticed a trend toward trusting tastemakers over algorithmic discovery. One of our members admitted that they sometimes just want to turn on a FAST channel and watch whatever is playing. Fatigue from algorithmically driven recommendations is real, and so is decision fatigue.

“If content is endless, what we seek is not more of it but something we can trust—a human touch amidst the firehose.”

Technology Meets Emotion

This year, technology blurred the line between utility and intimacy. At another one of our enterprise summits that explored AI and the connected home, an attendee shared how empathic AI and digital twins could transform our homes into emotional ecosystems. But these developments also raised harder questions: Should tech meet emotional needs? Or are some things better left untouched?

One leader put it plainly: “Tech has historically failed to serve emotional needs. That is changing.” Whether we are ready for this shift remains uncertain.

2024’s True Gift: Uncertainty

As the year ends, I find myself drawn less to the answers and more to the spaces where questions thrive. ON_Discourse has become a community not of solutions but of shared exploration.

One member described it perfectly: “This is where curiosity meets rigor.” Another offered a simpler truth: “This is where we admit what we don’t know.”

I don’t know what 2025 will bring, but I know this: Wrestling with uncertainty is where we grow. Together, we will keep asking, keep listening, and keep discovering. Because the questions themselves are the point.
