The Infinite Zuck

How Mark Zuckerberg Became a Shape-Shifter Across Culture—and What That Tells Us About Power Now

Editor’s Note: While everyone was focused on Zuck's coffee habits and his vision for AI companionship, Toby was focused on his code-switching.

Mark Zuckerberg gave three interviews this week. One to Dwarkesh Patel. One to Theo Von. One to Ben Thompson.

Three hosts. Three audiences. Three different cultures of attention.

And somehow, three different versions of the same man.

With Dwarkesh, Zuck was the architect—carefully explaining the inner workings of Llama 3, scaling challenges, the logic of open source, and why infra is destiny. A version of Zuckerberg that speaks to the developer class with surgical calm. Less ambition, more constraint. Less metaverse, more compute.

With Theo, he got weird. Not just funny-weird. Existential-weird. He talked about coffee, jiu-jitsu, whether AI can be your friend, and what it means to feel overwhelmed by the world. For a guy who once wore the same grey shirt for a decade, he seemed surprisingly alive here. Vulnerable, even. A human dad, not a techno-overlord.

With Ben, he went back to strategy mode. Threads. Messaging as the new platform layer. Apple’s walled garden. The arc of Meta from feeds to frictionless business tools. This was Zuckerberg as systems analyst, reflecting not just on what Meta is doing, but on what it failed to do. “We just didn’t prioritize the developer ecosystem,” he says, with the tone of someone who won’t make that mistake again.

Same man. Same week. Entirely different presence.

This isn’t a coincidence. It’s choreography.

Toby Daniels

Power Is No Longer Singular

We used to think of tech founders as having a “core identity.” Jobs had design. Bezos had logistics. Zuck had scale.

But identity doesn’t work that way anymore. In a media landscape where your audience is fragmented, your persona has to fragment too. Zuckerberg is showing us what that looks like in real time. He’s not broadcasting one version of himself to everyone. He’s customizing presence for context.

This isn’t about authenticity. It’s about fluency.

Zuck doesn’t need you to like him. He needs you to recognize him—as credible, legible, and aligned with your values, at least for the duration of the interview. He’ll talk parameter tuning with Dwarkesh, moral complexity with Theo, business model compression with Ben. None of it is fake. But all of it is performative.

He is, in this sense, the first post-founder founder. A man who no longer builds for the internet, but performs on top of it.

So What?

Because this isn’t about Zuckerberg. Not really.

It’s about how power morphs in a world of narrative collapse. When no one voice can dominate, the only ones left standing are those who can slip between voices. Who can code-switch—not just linguistically, but existentially. Zuckerberg isn’t just changing the story. He’s changing who tells it.

The platform once defined the founder. Now the founder becomes the platform.

What’s Next?

He’s done the intellectual web. He’s done Americana surrealism. He’s done strategy’s back room.

So what comes next?

Spirituality? Therapy culture? Gen Z moral philosophy?

Don’t be surprised if you see him on a podcast about neuroplasticity. Or debating Harari on cognition. Or sliding into Twitch streams with creators half his age. Not because he has something to prove—but because he knows that staying still is the surest way to disappear.

That’s the play. Zuckerberg isn’t repositioning the company. He’s reprogramming himself.

He’s testing personas like features. Shipping them like updates. Measuring feedback in trust, not just clicks.

Final Thought

If Musk is trying to be a meme, Zuckerberg is trying to be a mirror.

And maybe that’s the scarier thing.

Because a meme can be ignored. A mirror makes you look back.

And right now, Mark Zuckerberg is reflecting something we might not want to admit: the future belongs to those who can move between worlds without ever claiming one as home.

What If Jack Dorsey Is Right?

Toby Daniels

The Provocation

Jack Dorsey recently suggested that all intellectual property laws should be abolished. It sounded absurd at first — another high-profile provocation in an era full of them. But what if it’s worth considering?

In a recent ON_Discourse group chat, a media strategist, two IP attorneys, a founder of an AI venture studio, and other members of our community gathered to confront the uncomfortable possibility: in a world where AI can remix art, code, identity, and likeness infinitely, does intellectual property (IP) still serve the purpose it was designed for? Or has it devolved into a protection racket for legacy power?

"The code of IP law doesn’t map to the code of the internet."

The Icebreaker: What Should Be Liberated?

We began, as we always do, with an icebreaker: What piece of culture deserves to be stolen, remixed, or liberated from its owners?

  • One IP attorney nominated memes: “They’re designed to be shared, but still fall under copyright gravity.”
  • A media strategist called for Superman to enter the public domain now rather than waiting for a slow expiration timeline.
  • The founder of an AI venture studio advocated for software code, the infrastructure upon which future culture is increasingly built.
  • A brand protection attorney argued for liberating technologies subsidized by public investment — such as SpaceX and foundational AI models.

The common thread: the lines between ownership, creation, and collaboration are being obliterated.

Rethinking the Purpose of IP

As the conversation deepened, the tension between old frameworks and new realities became clear.

You can’t abolish IP with a single stroke. It’s a system of many different protections — each serving a different purpose.

IP Law Isn’t One Thing

One legal voice reminded us: abolishing "IP" isn't a coherent position. Copyright, patents, trade secrets, and trademarks exist for distinct reasons. Reform must be nuanced, not reactionary.

The Internet Broke the Old Rules

Our media strategist observed that traditional IP law was designed for physical goods, not the infinite replicability of the internet. In the online world, engagement — not scarcity — drives value.

Ownership Models Are Misaligned

Another participant framed it sharply: today's cultural production demands participation models, not protectionist ones. Yet our legal structures still assume a single author and a static object.

Law as Infrastructure, Not Obstacle

The IP lawyers in the room pushed back on the notion that IP laws are inherently barriers. Every open-source license, every permissive API agreement, every blockchain-based contract — all rely on IP frameworks to exist in the first place.

The New Creative Dilemma

We explored a tangible example: imagine rogue creators producing hundreds of thousands of Pirates of the Caribbean spinoffs using AI.

At some point, Disney won't be able to send takedown notices fast enough. They'll have to rethink the entire system.

Should Disney issue a hundred thousand takedown notices?
Or should it accept the reality of AI proliferation and build code-based mechanisms — watermarking, blockchain revenue splits — to harness this creative chaos?

The consensus: it will be both. Litigation where necessary. Monetization where possible.

The Platformized Future

Several participants outlined a likely future where large media companies behave more like platforms than studios. Instead of guarding IP fiercely, they would create APIs, encourage derivative works, and share revenue with creators who participate in expanding their universes.

Gatekeeping is a losing strategy. Participation is the new moat.

Imagine an official Marvel API: fans creating their own characters, building micro-stories, and selling digital merchandise — all governed by smart contracts that ensure original creators share in the upside.
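
As a purely hypothetical sketch of what “governed by smart contracts that ensure original creators share in the upside” could mean in code, the split rule below divides a sale between a rights holder, a platform, and a fan creator. The percentages, field names, and the idea of an official API are illustrative assumptions, not anything Disney or Marvel has announced.

    # Hypothetical revenue-split rule for a fan-made work sold through an
    # official IP platform. All names and percentages are illustrative assumptions.
    def split_revenue(sale_cents, rights_holder_share=0.30, platform_fee=0.05):
        """Divide a sale between the rights holder, the platform, and the fan creator."""
        rights_holder = int(sale_cents * rights_holder_share)
        platform = int(sale_cents * platform_fee)
        creator = sale_cents - rights_holder - platform   # the fan creator keeps the remainder
        return {"rights_holder": rights_holder, "platform": platform, "creator": creator}

    # Example: a $20 piece of fan-made digital merchandise.
    print(split_revenue(2000))  # {'rights_holder': 600, 'platform': 100, 'creator': 1300}

The exact numbers matter less than the principle: the split is enforced in code at the moment of sale rather than negotiated after a takedown notice.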

Critical Pushbacks

While the conversation was provocative, it was not naïve.

Abolishing IP outright would be like defunding the police — provocative, but terrible policy.

  • Abandoning IP entirely would be reckless. IP is flexible — it needs reinterpretation, not eradication.
  • Engagement ≠ Value. As one strategist reminded us, "On the internet, IP has marginal value. But engagement without ownership can quickly become a race to the bottom."
  • Synthetic content will flood the internet, and discerning quality, originality, and human authorship will become even more essential — and difficult.

What Happens Next

We are entering an era where a single piece of creative work could have thousands of contributors, while attribution, compensation, and credibility may be managed by blockchain, not the courts. Companies that open their IP to remixing — and build mechanisms to share value — will dominate those who cling to legacy ownership models.

Consumers will flock to the content they can co-create, not just consume.

Duolingo is Solving Small in AI Transformation

Toby Daniels

Editor’s Note: This morning, Duolingo CEO Luis von Ahn sent a company-wide email declaring a major shift: Duolingo is now officially an AI-first company. What follows is Toby’s real-time reaction to this news.

At first glance, this might seem like another predictable headline in a year overflowing with "AI transformations." But having interviewed countless CEOs in the build-up to our AI Transformation Summit, this one deserves closer attention—especially if you're sitting in the C-Suite.

The Solve Small Mindset

At ON_Discourse, we’ve been pushing a counterintuitive idea: AI transformation doesn’t start by rewriting your 5-year plan—it starts by solving small. The companies that win will be the ones that treat AI like compounding interest: tiny, workflow-level improvements that snowball into massive strategic advantage.

Duolingo isn’t "experimenting" with AI. They are operationalizing it. They are choosing small—automating manual tasks, enabling micro-innovations (like AI-led video tutoring), and re-allocating human talent toward what machines can’t do: building better experiences.

Whatever you think of their decision, and whatever impact it may have on whether they will need to hire human instructors in the future, I think this move is important for a number of other reasons:

Strategic Parallel

  • CEO Luis von Ahn compares this pivot to Duolingo’s 2012 mobile-first bet—the move that fueled their meteoric rise.
  • They're signaling that AI isn't an add-on; it's foundational.
  • They’re reorganizing workflows, slashing contractor work that AI can replace, and embedding AI literacy across hiring, reviews, and team structure.

Workflow-Level Disruption

  • This is not top-down transformation theater. It’s bottom-up operational rewiring.
  • AI is showing up in the places where friction actually lives: content creation, tutoring, customer experience.

If you're a C-Suite leader still treating AI like a moonshot or a lab experiment, you're already behind. This move by Duolingo is both a warning shot and a blueprint.

AI transformation isn't about spectacle. It's about sweat.

Not about scale first. About solving small first.

Not about replacing people. About freeing them to do meaningful, creative work.

The question isn’t if you will adapt. It’s whether your operational DNA will let you.

AI Needs a Feedback Layer

The next big AI acronym might be: RLHF

Editor’s note: This one caught us by surprise. Sam was in a recent Group Chat dedicated to Gen Z and AI when he suddenly started talking about RLHF. We can’t prove it yet, but Sam might be describing a very important layer to agentic AI that is not yet mainstream.

We are living in an AI gold rush. Instead of shiny rocks, modern-day 49ers are in a desperate search for ‘smart’ features that proactively solve problems and act like a member of the team. And just like a pile of pyrite, these features do not stand up to any kind of scrutiny.

Let me break this down as a scenario.

Sam Broe

Your Company Hires a New Agent

On a Monday morning, an employee logs into Slack and is greeted by a message:

"Hi! I’m your new AI teammate. I can help you write client emails, prep briefs, and summarize meetings. Just @ me and tell me what you need."

They type the first prompt:
“Write a quick note to a client thanking them for last week’s meeting and teeing up next steps.”

The reply comes back in seconds. It's… fine. A little stiff. Some awkward phrasing. Definitely not ready to send.

So the employee edits the message. Softens the tone. Adds a reference that the AI missed. Tweaks the subject line. Then sends it off.

A few hours later, they try again. Another prompt. Another almost-right reply. Another round of edits.

This is the invisible labor of AI integration: micro-corrections. Dozens of small decisions made by humans to fix AI.

Is that supposed to be the future?

Are we expecting every employee, in every company, to become an ad hoc AI whisperer - refining, retrying, adjusting - every time they interact with an agent?

Prompting Is a Process, Not a One-Off

Right now, every AI agent is built on the assumption that the prompt is the product. Once it's crafted, it’s deployed. Frozen in time. Maybe a tweak here or there, but mostly untouched, save for the injection of proprietary data.

But real work doesn’t happen like that. Real work requires feedback - constant, messy, iterative feedback.

If you’re a product manager, that feedback becomes feature updates. If you’re a writer, it becomes revisions. If you’re a designer, it becomes pixel-level nudges. But if you’re an AI agent?

There’s nothing. No loop. No memory. No improvement. Just the same prompt, running in place, never learning.

RLHF: The Invisible Architecture of Feedback

This is the blind spot in today’s AI wave. Everyone is obsessed with building smarter models. No one is building smarter systems of feedback — and that’s where the value is hiding.

The thing we learned from decades of digital products — from DTC brands to SaaS platforms — is that conversion is compounding.

  • Small improvements add up.
  • Friction reveals opportunity.
  • Feedback loops outperform static logic.

We need the same logic applied to AI:

  • Not just prompt > output.
  • But prompt > output > human feedback > refinement > next output.

This is what reinforcement learning from human feedback (RLHF) does at the model level. But applied AI — the stuff showing up in your inbox, your tools, your meetings — has no equivalent.

And that’s the problem.
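
To make the missing piece concrete, here is a minimal sketch, in Python, of what such a loop could look like inside a workplace tool. The generate and collect_edit callables are stand-ins for an LLM call and for whatever interface captures the human’s corrected version; both are assumptions for illustration, not references to any real product or API.

    # Hypothetical prompt > output > human feedback > refinement > next output loop.
    # generate() stands in for any LLM call; collect_edit() for whatever UI captures
    # the human's corrected version. Both are illustrative assumptions, not real APIs.
    def run_with_feedback(task, generate, collect_edit, corrections):
        # Carry prior corrections forward so the next draft starts closer to the mark.
        prompt = task
        if corrections:
            prompt += "\n\nPast corrections to respect:\n" + "\n".join(corrections)
        draft = generate(prompt)         # model produces a first draft
        final = collect_edit(draft)      # human edits it before it ships
        if final != draft:
            # Record what changed; this is the feedback the next prompt will carry.
            corrections.append(f"Draft: {draft[:200]} / Human version: {final[:200]}")
        return final, corrections

Even a loop this crude turns the micro-corrections described above into reusable material instead of invisible labor.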

The Business Layer No One Is Building (yet)

We’re missing a feedback layer in the AI product stack. Not full-blown fine-tuning. Not manual editing forever. We’re missing a lightweight, structured way to capture, score, and re-integrate human feedback.

A system that recognizes:

  • Which outputs worked and why
  • Which corrections matter most
  • How to improve prompts dynamically based on real usage

This is active prompt engineering — a term still under the radar, but rising fast. It treats prompts not as static strings of words, but as evolving systems. Systems that get better over time, just like any good product should.
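
As one possible reading of “capture, score, and re-integrate,” the sketch below stores each correction, scores it by how heavily the human rewrote the output, and surfaces the most instructive corrections for reuse in future prompts. The FeedbackStore name and the edit-distance scoring rule are assumptions for illustration, not an established standard.

    # Hypothetical feedback layer: capture corrections, score them, and reuse the
    # most instructive ones as examples in later prompts.
    from dataclasses import dataclass, field
    from difflib import SequenceMatcher

    @dataclass
    class FeedbackStore:
        records: list = field(default_factory=list)

        def capture(self, prompt, output, human_version):
            # Score = how heavily the human rewrote the output (0 = untouched, 1 = replaced).
            similarity = SequenceMatcher(None, output, human_version).ratio()
            self.records.append({
                "prompt": prompt,
                "output": output,
                "human_version": human_version,
                "edit_score": round(1 - similarity, 3),
            })

        def top_corrections(self, n=3):
            # The outputs that needed the most rework are the most instructive to re-inject.
            return sorted(self.records, key=lambda r: r["edit_score"], reverse=True)[:n]

The point is not this particular scoring rule; it is that feedback becomes structured data a system can learn from, instead of edits that vanish into a sent email.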

Why This Matters

The AI gold rush is full of dazzling demos and “smart” features. But when those features get dropped into real companies, the cracks show fast.

Every AI agent is just a frozen prompt until someone teaches it how to learn. And right now, no one is teaching. That’s not a technical oversight. It’s a business opportunity.

The companies that figure out how to design, operationalize, and monetize this feedback layer — they won’t just have better agents. They’ll have learning systems. Systems that adapt. Systems that compound value over time. Systems that feel alive.

This virtuous cycle between humans and AIs is how we will co-evolve together, complementing each other rather than competing.

Final Thought

The AI gold rush is a good thing. It alone is sparking a new infusion of energy and creativity in the digital market. But this rush does not need more pickaxes; it needs more gold pans, sluice boxes, and a lot more real-time feedback data. I am not just standing by with a critique; I am building these feedback systems as we speak.

The Internet is Being Rebuilt

You Just Haven't Noticed Yet

Gen Z isn’t coming for the internet. They’re rebuilding it from scratch. And most of us haven’t noticed.

Editor’s Note: Our Network Has Range — And That’s Our Edge.

Yes, our members bring decades of experience. They’ve shaped industries, navigated disruptions, and steered through hype cycles that most people only read about. That depth is our advantage. But even the sharpest minds risk repetition if they only talk to themselves.

So here’s our first deliberate attempt to break pattern. Not because we’re stale—but because staying sharp means letting in noise, disagreement, and the unexpected. This is how we stretch, and how we stay vital.

Every now and then, a conversation hits differently, flips your expectations, and reframes how you think about the future.

That happened during a recent group chat when two teenage AI builders joined a session with a handful of founders, technologists, and operators. They weren’t there to be interviewed. They were there to discuss their relationship to AI.

Here is what we learned: they are too busy building to have a relationship with AI. They are already deploying. Already iterating. Already outpacing the roadmap most of us are still trying to draw.

What followed was less of a panel and more of a live feed from the future.

We’ve been talking for months about “who’s building what.” What became clear in this moment was that the most compelling builders might not be in your network yet. They might not be pitching VCs. They might not even be out of high school.

But they are building. Faster than you think.

Toby Daniels

They Build Without Permission

The first signal wasn’t what they were working on. It was how.
There was no talk of accelerators or incubators. No LinkedIn-friendly “I’m thrilled to announce…” posts. Just an explanation of how one of them reverse-engineers Upwork job listings to generate MVPs in minutes using AI tooling—and sends a finished product with their pitch before anyone else even replies.

Here’s how they described it:
 “I find people on Upwork who describe what they want. I feed it to an AI coding platform. It builds the project. I record a Loom video showing it working. Then I send it with my proposal.”

What sounded like a hustle was, in fact, a paradigm shift. A redefinition of what it means to be a product builder. Not someone dreaming about solutions—someone shipping them before the request is even accepted.

We often say, “Move fast and break things.” They move faster. And they’re not interested in breaking anything. They’re too busy building what’s next.

Their Stack Changes Every Day

Ask how they’re learning and the answer isn’t courses or curricula. It’s real-time community—Twitter, forums, Discord, tutorials, builder blogs.

“I follow people who build in public. If they show a tool, I try it. If they use a method, I copy it. Then I break it. Then I rebuild it.”

The developer stack that used to take years to learn is now learned through vibe and repetition. Cursor, Lovable, Vercel, Supabase—these are not platforms they’re discovering. They’re the default environment.

What they lack in credentials, they make up for in cadence. What they lack in polish, they replace with speed.

They don’t care about enterprise readiness. They care about whether it works. Whether it ships. Whether it scales to the next test.

They Think About Trust Very Differently

We asked a basic question: What platforms do you trust for information—Google, Reddit, TikTok, or ChatGPT?

One answered:
“I don’t really trust any of them. They’re all collecting data. You just have to use a few, cross-check everything, and rely on some common sense.”

Then came this line, casually delivered and absolutely unforgettable:
“If there’s anyone you should trust the least, it’s yourself.”

This wasn’t cynicism. It was a working theory of intelligence. A worldview shaped by systems thinking, fast iteration, and feedback loops. They trust outputs only as far as they can validate them—and that includes their own.

This is not a generation raised to believe they’re right. It’s a generation raised to test their assumptions. Repeatedly.

They Are Rewriting the Internet

When asked to imagine what the internet will look like in ten years, one of them didn’t hesitate.

“You won’t need the internet. You’ll just talk to your own AI assistant. Like Jarvis. It will do everything for you.”

Not a prediction. A prototype. They’re already building around this idea—browserless agents, custom assistants, interfaces built on prompts, not clicks.

Where older generations grew up navigating websites, this generation is replacing that cognitive framework entirely. They’re not refining the internet—they’re redesigning it.

And what they imagine feels far less like a user interface and far more like an extension of themselves.

They’re Building Games as Funnels and Writing Code as Culture

One of the more surprising moments came when one shared a recent project: a simple browser game tied to a current event. Fast-paced, made in Cursor, deployed in a day.

What made it interesting wasn’t the gameplay. It was the logic behind it.

“After people play the game, I ask them to enter their email to be added to the leaderboard. That’s how I grow the list for my newsletter.”

It was a loop: event > game > lead capture > distribution > repeat.

It wasn’t a startup. It wasn’t even a product. It was a funnel disguised as fun, and a signal of how deeply embedded systems thinking has become in how they build—even when the projects feel light, fast, or playful.

This Is Already Happening. Now What?

These moments add up to a clear truth: the next version of the internet isn’t being debated on panels or whiteboarded in boardrooms. It’s being built by a generation that doesn’t see the old structures as sacred.

They are building agents, not apps.
They are deploying experiences, not websites.
They are moving through rapid, recursive loops of experimentation and iteration.
And they are doing it without waiting for a job title, an invite, or a budget line.

That’s not something to fear. That’s something to learn from.

If we want to understand where the internet is going, we have to look outside the usual circles. The next generation is already inside the machine, modifying the blueprint, rewriting the rules—and doing it faster than we think.

They aren’t just using AI. They’re shaping it.
They aren’t just consuming the internet. They’re rebuilding it.
And if we’re paying attention, we can meet them there.

Editor’s Note: One of the most common references we hear in all of our events is due for a take-down. Our co-founder Toby does his best work when he’s fired up. Check it out and let us know if he is leading you into a 'trough of disillusionment' or a 'plateau of provocations.'

The Gartner Hype Cycle has become the go-to cliché in tech. Not only does this lead to boring, useless perspectives, but it also relies on an outdated research methodology that makes no sense.

To the uninitiated, the Gartner Hype Cycle is a 30-year-old research method that tracks tech adoption over time. Despite its title, the hype cycle is an undulating line (not a cycle) that tracks emerging ideas from the so-called “innovation trigger” up to the “peak of inflated expectations,” down into the “trough of disillusionment,” and up the “slope of enlightenment” until it reaches the “plateau of productivity.”

If this sounds like the narrative arc of a hero’s journey, that is because the hype cycle is based on fiction. You can read about its origins from its creator here. It was designed to help organizations time and calibrate their investment in emerging technologies. The key idea: Early adopters should expect a dip in enthusiasm — the so-called trough of disillusionment — and will need the patience and capital to ride it out until the technology climbs to the plateau of productivity.

To be clear, I’m not here to take down Gartner. They successfully leveraged an unprovable narrative arc as a brilliant marketing tool for their services. It’s like a technology horoscope that sounds right 65% of the time. It served its purpose, but now we’re dealing with something bigger.

AI transformation is bigger than hype. It is a super trend. If we plot it on a linear curve, we are obscuring much more interesting considerations.

To navigate AI transformation, leaders need a framework that embraces uncertainty and complexity. I’ve spent the past year interviewing dozens of AI leaders and innovation experts. Their thoughts and observations fit into three states of AI transformation: Possible, Potential and Proven.

Here’s how to envision these three states and the one essential provocation leaders should be asking themselves:

Toby Daniels

First State: Things That Are Possible

This is AI’s realm of imagination, where ideas spark but haven’t yet materialized into practical use cases. Theoretical concepts like Artificial General Intelligence (AGI) exist here. Think HAL from 2001. AGI would have human-like cognitive ability.

OpenAI’s focus on AGI reflects the company’s ambition to tackle problems that are decades, if not centuries, away from being solved. And IBM’s neuromorphic computing explorations into brain-inspired chips like TrueNorth aim to mimic human cognition. These chips promise transformative computing capabilities but remain highly experimental. In the First State, research papers and experimental algorithms dominate the landscape, and progress is measured in breakthroughs, not revenue.

If you’re investing here, are you betting on the future or indulging in a fantasy?

Second State: Things That Have Potential

Here, AI technologies are emerging from academic journals and proof-of-concept stages into early-market experimentation. They’ve demonstrated value but haven’t yet reached a point of reliability or scalability. This is where buzzwords often overshadow results, and visionary leaders must decide how to allocate resources to support fragile ideas.

Tesla’s full self-driving vehicles exemplify this potential. They function in controlled scenarios but remain far from delivering consistent, regulatory-approved results.

Stripe’s AI-powered fraud detection, which flags and mitigates fraud in online payments, also lives in the Second State. While effective, it requires continuous refinement to adapt to new threats and remain reliable at scale.

Are you willing to nurture fragile, early-stage innovations, or are you only here for the immediate return on investment?

Third State: Things That Are Proven

This is AI’s gold rush — the domain of scaled, reliable technologies delivering measurable value. Companies here have turned AI from a speculative bet into a fundamental driver of their operations and profits.

Amazon’s AI-driven recommendation algorithms are foundational to the company’s success, influencing customer purchases and optimizing logistics.

Siemens’ predictive maintenance systems in manufacturing ensure operational efficiency, reducing downtime and saving billions annually. And UPS’ AI-driven logistics system, dubbed On-Road Integrated Optimization and Navigation, or ORION, optimizes delivery routes in real time, saving millions in fuel and time.

Will you leverage these for the present while building for the future? Or will you let comfort become your cage?

Three Challenges AI Leaders Must Accept

Being an AI leader capable of handling the three states of AI transformation isn’t about frameworks, acronyms or decks. It’s about who you are amid ambiguity, pressure and doubt.

The most productive conversations I’ve had over the past year have involved the three components below. If you drive AI transformation in your enterprise, ask yourself:

1. Can you embrace being wrong?

Leading AI means making calls with incomplete information. You will fail. The question isn’t whether you’ll stumble — it’s whether you’ll learn fast enough to stay in the race. When was the last time you admitted a mistake to your team? If you can’t remember, you’re already in trouble.

2. Are you ready to rethink your identity?

You’re not a decision-maker; you’re a decision-shaper. Your role isn’t to control outcomes — it’s to create environments where great outcomes emerge. How often do you let your team’s experiments challenge your instincts?

3. Can you manage fear — yours and theirs?

AI is a pressure cooker. It amplifies anxieties about jobs, ethics and the future. You can’t outsource courage to a playbook. Leadership means stepping into those conversations, not avoiding them. Have you addressed your team’s fears about AI — or have you assumed their silence means support?

This moment requires breaking out of regimented ways of thinking. Much like classical music, classical leadership thrives on precision and control. What’s needed is jazz leadership, which thrives on responsiveness and improvisation. AI’s pace means leaders must constantly adapt to new rhythms and riff on emerging opportunities.

Expecting AI to follow the classical path of the Gartner Hype Cycle will only lead you into a cacophony of failure.

AI doesn't understand a damn thing.

Editor’s Note: A year ago, we talked about the rise of the so-called EQ internet, and now we are excited to share a perspective from a founder who is actually building the damn thing.

AI has a problem: it can sound intelligent, but it doesn’t understand a damn thing, not even with so-called “reasoning.” We’ve spent years marveling at the fluency of Large Language Models (LLMs), watching them churn out perfectly structured paragraphs, draft polite emails, and even mimic human warmth. But here’s the reality—LLMs are linguistic illusionists, dressing up statistical predictions as understanding. And that’s where the hype stops.

Max Weidemann

The Great AI Deception

People keep asking, "When will LLMs become empathetic?" The answer is simple: never. Not as long as we’re relying on black-box models trained to predict words, not meaning. LLMs don’t feel, they don’t reason (unless you think humans reason by telling themselves to think step-by-step, then meticulously go through every step in their thoughts for 30 seconds, and then form a response), and they certainly don’t care. They spit out responses based on probability, not insight. You can ask them for life advice, and they’ll generate something that sounds right—but without any real understanding of your situation, your motives, or your emotional state.

Let’s be clear: AI-generated empathy isn’t empathy. It’s a script. It’s a formula. And the moment you need real understanding, real nuance, real depth, these systems crumble. That’s because empathy isn’t about mirroring the right words—it’s about knowing why those words matter.

The Missing Layer: Real Emotional Intelligence

If we want AI to be truly useful in high-stakes environments—hiring, leadership, relationships, mental health—it needs more than linguistic gymnastics. It needs real emotional intelligence (EQ). That doesn’t mean programming in "compassionate" responses or tweaking outputs to sound more human. It means AI must be able to interpret personality, motivation, psychological states, and behavior over time.

LLMs can’t do that. They don’t track human patterns, they don’t learn from long-term interactions, and they certainly don’t recognize why someone might be saying one thing while meaning another. That’s what EQ-driven AI solves. Not by generating better generic responses, but by tailoring interactions to the individual—based on psychology, not word probability.

Why This Matters Everywhere

Without EQ, AI is useless in the places where human understanding actually matters. HR tech powered by LLMs? That’s just glorified keyword matching, completely missing whether a candidate fits a team’s culture or will thrive in a company’s work environment. AI-powered therapy chatbots? They can parrot self-help advice, but they can’t detect when someone is on the verge of burnout or spiraling into a depressive episode. AI in customer service? Sure, it can say "We understand your frustration," but it doesn’t understand anything.

The world doesn’t need more artificially polite chatbots. It needs AI that actually understands people—that can read between the lines, identify underlying motivations, and adapt dynamically. Otherwise, we’re just building fancier parrots that sound good but know nothing.

The Future: AI That Gets You

The next wave of AI isn’t about making LLMs sound more human—it’s about making AI think more human. That means moving beyond black-box predictions and into explainable, psychology-based models that process human emotions, intent, and long-term behavioral patterns. It means AI that doesn’t just summarize data but can tell you why someone is likely to succeed in a role, why a team is struggling, why a customer is about to churn.

AI Transformation: Solve Small and Think Like a Founder

Don't fall for the big talk

Editor’s note: Our Co-Founder has developed this perspective about AI transformation after hearing countless talks about the so-called AI revolution. We think it’s such a good break from the conventional approach to AI adoption that we organized a summit around it.

Artificial Intelligence is sold to the C-suite as transformation at scale—a revolution in business, a redefinition of the workforce, a paradigm shift. Every AI keynote, whitepaper, and corporate summit emphasizes “epic transformation,” the kind that reshapes industries overnight.

But here’s the truth: AI transformation rarely happens in a single leap. Instead, it evolves through a series of incremental, often messy, small-scale shifts. And it’s in those smaller moves—often overlooked in corporate case studies—where AI’s true impact is being felt. This is the Solve Small approach: focusing on targeted, bottom-up AI interventions that remove inefficiencies while preserving the human touch where it matters most.

This is where AI transformation mirrors the way great founders run their companies. Conventional business wisdom says that scaling an organization requires distributing decision-making, adding layers of management, and diffusing control. Yet, the most effective founder-led companies—like Apple, Airbnb, Shopify, and Nvidia—reject this model. Instead, they remain deeply involved in the details that matter, ensuring that speed, adaptability, and clarity drive their organization forward. AI transformation requires the same approach: high-touch, iterative, and deeply embedded within the business.

Toby Daniels

Founder, ON_Discourse, former Chief Innovation Officer, Adweek, Founder and Chair, Social Media Week

The Problem With Epic AI

The way we talk about AI inside boardrooms is broken. The discourse is full of sweeping, cinematic narratives—AI will “reinvent how we work,” “unlock human potential,” and “create limitless efficiency.” Yet, this kind of hype obscures the real work required to integrate AI successfully.

Consider the C-suite executive who leaves an AI conference with visions of radical automation, only to return to an organization struggling with basic data hygiene. Or the startup founder promising a fully autonomous AI-powered workflow, only to realize that employees don’t trust AI-generated insights. The gap between expectation and execution is vast because the AI discourse favors spectacle over substance.

This is why AI transformation should be approached like a founder running their company—not through bureaucratic committees and abstract strategies, but through direct involvement, rapid iteration, and a relentless focus on solving small, meaningful problems.

Thinking Like a Founder: AI Transformation in Small, Impactful Steps

The best founder-led companies thrive because they embrace hands-on decision-making and fast, iterative improvements. AI adoption should follow a similar model. Here’s what that looks like in practice:

1. Solve Small: Incremental Change That Compounds Over Time

Great founders don’t overhaul their entire organization overnight; they make continuous, strategic adjustments. AI transformation should follow the same principle. The most effective AI-driven businesses treat AI like compounding interest—small investments that build on each other:

  • A sales team starts with AI-assisted meeting transcriptions, then layers in automated CRM updates, and later integrates predictive sales forecasting.
  • A manufacturing plant implements AI for maintenance logs, extends it to predictive downtime prevention, and eventually integrates it into supply chain optimization.

Like a founder iterating on product development, AI transformation isn’t about flipping a switch—it’s about stacking small improvements until they create something larger than the sum of their parts.
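
To put rough numbers on the compounding metaphor (the one percent weekly figure below is an illustrative assumption, not any company’s reported result): a workflow that gets one percent better each week ends the year roughly 68 percent better, well beyond the 52 percent that simple addition would suggest.

    # Illustrative arithmetic for "AI as compounding interest": small weekly
    # improvements stack multiplicatively, not additively. The 1% figure is an assumption.
    weekly_gain = 0.01
    weeks = 52
    compounded = (1 + weekly_gain) ** weeks - 1
    print(f"Compounded gain after a year: {compounded:.0%}")  # ~68%, vs. 52% if gains merely added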

2. AI as a Bottom-Up, Ground-Level Initiative

The best ideas don’t always come from leadership—they emerge from people closest to the work. Founder-led organizations like Nvidia empower employees at every level to share insights directly with leadership. AI adoption should work the same way:

  • A call center rep starts using ChatGPT to summarize support tickets before management even considers AI integration.
  • A junior designer leverages AI-generated layouts to speed up work, improving both quality and output.
  • A coder uses AI-assisted debugging not because leadership mandated it, but because it’s simply faster and more efficient.

AI initiatives should mirror the Solve Small model—where leadership listens, learns, and scales what works, rather than imposing AI from the top down.

3.

AI as a Fast, Iterative Process

Founder-led companies don’t rely on long planning cycles. Airbnb’s Brian Chesky eliminated unnecessary layers of management and engaged directly with product teams to make faster, better decisions. AI transformation should follow the same principle:

  • A legal team pilots AI contract review with one clause at a time rather than automating the entire process at once.
  • A retail company A/B tests AI-generated product descriptions for a subset of SKUs before rolling it out across the catalog.
  • A logistics firm implements AI-driven route optimization for a single delivery region before expanding nationwide.

Successful AI adoption moves at the pace of iteration, not perfection.
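
As a minimal sketch of the SKU-level A/B test mentioned in the list above (the hashing rule, the ten percent test fraction, and the sample SKU are assumptions for illustration), the split can be deterministic and small long before any catalog-wide rollout.

    # Hypothetical A/B assignment for piloting AI-generated product descriptions
    # on a small subset of SKUs before a catalog-wide rollout.
    import hashlib

    def description_variant(sku: str, test_fraction: float = 0.10) -> str:
        """Deterministically assign a SKU to the AI-description test or the control group."""
        bucket = int(hashlib.sha256(sku.encode()).hexdigest(), 16) % 100
        return "ai_generated" if bucket < test_fraction * 100 else "human_written"

    print(description_variant("SKU-12345"))  # the same SKU always lands in the same group

Because the assignment is a pure function of the SKU, the pilot can be audited, re-run, and widened by changing a single number.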

4. AI as Local, Not Just Enterprise-Wide

Not every AI innovation needs massive cloud infrastructure. The most impactful AI-driven improvements happen at the local level—on an individual’s laptop, phone, or department-specific system:

  • A doctor using AI for voice-to-text medical notes on their own device, rather than a hospital-wide AI integration.
  • A journalist using AI summarization locally for research without relying on centralized editorial AI mandates.
  • A salesperson using an AI-powered meeting assistant that operates on their phone, rather than waiting for IT to implement a corporate-wide AI tool.

5. AI as Specific, Not Broad

Great founders don’t try to do everything at once. They focus on solving one problem exceptionally well before expanding. AI transformation should be approached the same way:

  • AI for one type of document scanning (e.g., invoices) works better than trying to automate all document types at once.
  • AI in one language model per department (e.g., legal vs. marketing) avoids generic, diluted results.
  • AI that refines a single metric (e.g., reducing customer service handle time) often outperforms AI designed to “optimize” an entire workflow.

6. AI as an Invisible, Seamless Part of Work

Founder-led companies prioritize clarity—teams work best when they know exactly what to focus on. AI should operate the same way: it should be so seamlessly integrated that it disappears into the workflow.

  • AI-powered email filters reduce spam and prioritize important messages.
  • AI-driven search ranking surfaces better results without users thinking about it.
  • AI-enhanced writing suggestions feel like part of the workflow, not a separate tool.

The Future of AI Is Solve Small—And Founder-Driven

Big AI transformation stories will always dominate headlines, but in reality, the organizations that win will be the ones that think like great founders—staying hands-on, moving fast, and solving small, again and again, until the transformation is undeniable.

If business leaders want to “go big” on AI, they should start by solving small—and by staying directly involved every step of the way.

Interested in attending the Summit? Learn more and request an invitation here.

Editor’s Note: We loved this perspective from Chris Perry, one of our most literary members. It frames the current technical disruptions with a critical historical context. In addition to writing great essays, Perry literally wrote the book on AI Agents. We can’t recommend it highly enough.

The sledgehammer fell in 1992. I was twenty-two, fresh out of college, and returning to my father's world. His graphic arts studio was a place I'd known growing up but never understood until I worked there with him.

Everything had changed by the time I arrived; we just hadn’t recognized it yet. The device of destruction wasn't physical. It was digital, invisible, and ruthless. The sledgehammer was Photoshop.

Until then, my Dad was a force in town. To understand him was to understand American business before it became “virtual.” He was a social animal with ambition unbound by caution or vision. His handshakes could hurt, his laughter echoed, and his stories stretched but never broke. He ran his studio as he carried himself. It was a hothouse where people who made things happen—and made things—were gods.

For two decades, they were at the center of the advertising world. In Detroit, art met industry more than anywhere else. His crew knew the essential in between—how to make a car shine on the page in ways it never quite did on the showroom floor.

His designers, drawers, typesetters, and camera techs were craft workers. It was fitting that they worked in a creative factory with a particular feel and scent that reflected the times. The chemicals could strip paint. The smell of paper emanated fresh from the cutter. The constant cigarette smoke hung like clouds around the ceiling lights. When the automation came, the air changed.

The hammer dropped, erasing everything we knew, including the feel. It happened one pixel at a time.

Chris Perry

Gradually, Then Suddenly

As the saying goes, Photoshop’s impact hit gradually, then suddenly. Steve Jobs introduced the Macintosh in 1984 with his famous Super Bowl commercial. It featured an actual sledgehammer thrown at conformity. Jobs didn't say then that his bicycle for the mind would become a wrecking ball for certain kinds of creative work—my Dad's era of creative work.

The Mac led to new software, most notably Photoshop. The effect wasn't immediate but gained momentum by the early 1990s. Jobs' beige computer boxes started showing up on more desktops in the workplace. Once new software was loaded into them, the creative rhythms changed. The click-click-click of mouse buttons began to drown out the scratch of pencils and the squeak of Pentel marker tips.

My Dad and his band (the present company included) didn't adapt fast or fundamentally enough. Revenue and margins shrank as clients took the work in-house. We doubled down on what we knew, only accelerating our demise. A new technology, and those who knew how to use it, dismantled the business in about 24 months.

We should have seen the hammer coming because it was already there. The lesson: Not reacting to a technological wave until it breaks over you is more than just a business failure—it can be an enduring, personal failure. I promised myself never to be caught so ill-prepared again.

Magical Automated Workflows

Photoshop turned 35 last week. If you were to imagine a single icon representing a creative transformation, it would be the "Magic Wand." One-click. That's all it took. A click and similar colors were selected as if by sorcery. Same with a click to alter the composition of an image. Ditto for envisioning variations of a scene.

What had once required careful technique and training was now available to anyone with access to the software and the patience to play and learn with it.

The magic wasn't just in what the wand could do but in what it represented. The transition from physical to digital craft offered efficiency.

Ironically, Photoshop—and the efficient workflows it led to—also came from family. Here is a bit of backstory on how it came to be.

Thomas and John Knoll grew up in a house that valued art and technology. They stood at the crossroads of two worlds, uniquely positioned to bridge them.

In 1987, Thomas Knoll wanted to display grayscale images on his Mac's black-and-white monitor. It was a practical problem with what seemed like a limited solution. His brother John, working at Industrial Light & Magic, saw further. He convinced Thomas to expand the program to handle color on the new Macintosh II. What began as a personal project caught the attention of the industry's power players.

A capability previously reserved for mainframes could now run on a PC. Adobe recognized the potential immediately, securing distribution rights in 1988 and releasing Photoshop 1.0 for Macintosh in 1990.

Photoshop didn't transform creative work in isolation. PageMaker, released by Aldus in 1985, opened the door to what would become known as desktop publishing. Photoshop and PageMaker encoded creative techniques and integrated workflows into software that creatives and producers could use directly. Those in the studio world who didn’t adapt to augmentation and changing workflows were permanently displaced.

The Magic Fades Without Imagination

Decades later, a much bigger automation wave is building. Intelligent software is reshaping all creative and knowledge work, echoing what happened in our studio.

Some automation parallels are striking. Both represent shifts from manual to digital creation and spark similar existential questions about the value of work and the identities of those who do it. With desktop publishing, page designers and typesetters rightly feared obsolescence. Today, anyone who produces knowledge, research, or creative work naturally expresses the same concerns about generative AI.

There are also critical differences between automation then and now.

Desktop publishing tools were extensions of human work. They replicated technical aspects but required direct human guidance for every decision. Generative AI tools can generate work with minimal direction, shifting human expertise from execution to curation.

Desktop tools required understanding design principles, but generative AI can produce seemingly decent output without the user knowing the underlying fundamentals.

Perhaps most significantly, the desktop publishing revolution unfolded over a decade while generative AI's capabilities are evolving at an incredible speed.

The importance of judgment, discernment, and taste in delivering commercial-grade work remains unchanged, whether we’re talking about Photoshop thirty years ago or OpenAI’s latest reasoning model today.

Consider Photoshop's meaning as a metaphor for the current moment. It automated specific, repeatable, known tasks and made technical processes faster and more accessible. However, it could not replicate the mystery of creativity or imagination, which no software has yet managed to do.

And therein lies a twist. Looking back, what reads like a family business failure doesn’t tell the whole story.

After experiencing the destruction of our business, I learned firsthand how a new technology destroys and creates simultaneously—and my path since then reflects those possibilities.

Alongside highly inventive colleagues and clients, I’ve helped create and grow new businesses built on mobile computing, e-commerce, weblogs, social networks, digital content, app development, community management, digital video, and social intelligence.

We capitalized on these tech breakthroughs not merely by understanding their specifications or original use cases but by seeing what they could lead to—by tapping into our creative capacity to imagine and bring new possibilities to life. AI is the next frontier on which to build.

Yes, AIs can encode what has been and suggest probabilities for the future. They can analyze patterns from the past with astonishing accuracy. But they cannot predict how we'll ride the next wave.

The hammer will fall, but what emerges isn’t simply destruction or dead ends. There can be a lot of light in and at the end of a transformation tunnel.

Neither my Dad nor I fully understood it in a moment of failure. It’s a reminder—then and now—that it’s hard to read the label of the jar you’re in. You have to see it from the outside.
