The Internet is Being Rebuilt

You Just Haven't Noticed Yet

Gen Z isn’t coming for the internet. They’re rebuilding it from scratch. And most of us haven’t noticed.

Editor’s Note: Our Network Has Range — And That’s Our Edge.

Yes, our members bring decades of experience. They’ve shaped industries, navigated disruptions, and steered through hype cycles that most people only read about. That depth is our advantage. But even the sharpest minds risk repetition if they only talk to each other.

So here’s our first deliberate attempt to break pattern. Not because we’re stale—but because staying sharp means letting in noise, disagreement, and the unexpected. This is how we stretch, and how we stay vital.

Every now and then, a conversation hits differently, flips your expectations, and reframes how you think about the future.

That happened during a recent group chat when two teenage AI builders joined a session with a handful of founders, technologists, and operators. They weren’t there to be interviewed. They were there to discuss their relationship to AI.

The short answer: they are too busy building to have a relationship with AI. They are already deploying. Already iterating. Already outpacing the roadmap most of us are still trying to draw.

What followed was less of a panel and more of a live feed from the future.

We’ve been talking for months about “who’s building what.” What became clear in this moment was that the most compelling builders might not be in your network yet. They might not be pitching VCs. They might not even be out of high school.

But they are building. Faster than you think.

Toby Daniels

They Build Without Permission

The first signal wasn’t what they were working on. It was how.
There was no talk of accelerators or incubators. No LinkedIn-friendly “I’m thrilled to announce…” posts. Just an explanation of how one of them reverse-engineers Upwork job listings to generate MVPs in minutes using AI tooling—and sends a finished product with their pitch before anyone else even replies.

Here’s how they described it:
 “I find people on Upwork who describe what they want. I feed it to an AI coding platform. It builds the project. I record a Loom video showing it working. Then I send it with my proposal.”

What sounded like a hustle was, in fact, a paradigm shift. A redefinition of what it means to be a product builder. Not someone dreaming about solutions—someone shipping them before the request is even accepted.
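
For readers who want to picture the mechanics, here is a minimal sketch of that pipeline, assuming you paste a job description in by hand and call OpenAI's chat completions API over HTTP. The model name, prompt, and output handling are illustrative placeholders, not the builders' actual setup.

```typescript
// Hypothetical pipeline: paste in a client's request, get back a first-draft MVP.
// Requires Node 18+ (built-in fetch) and an OPENAI_API_KEY environment variable.

const jobDescription = `
  Looking for a simple landing page with an email signup form,
  a countdown timer, and a thank-you screen.
`;

async function draftMvp(description: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // placeholder; swap in whatever model you actually use
      messages: [
        {
          role: "system",
          content:
            "You are a senior web developer. Return one self-contained HTML file that implements the client's request.",
        },
        { role: "user", content: description },
      ],
    }),
  });

  if (!res.ok) throw new Error(`API error: ${res.status}`);
  const data = await res.json();
  // The generated file is ready to open in a browser, screen-record, and attach to a proposal.
  return data.choices[0].message.content;
}

draftMvp(jobDescription).then((html) => console.log(html));
```

The specific tools matter less than the effect: the turnaround from request to working demo collapses to minutes.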

We often say, “Move fast and break things.” They move faster. And they’re not interested in breaking anything. They’re too busy building what’s next.

Their Stack Changes Every Day

Ask how they’re learning and the answer isn’t courses or curricula. It’s real-time community—Twitter, forums, Discord, tutorials, builder blogs.

“I follow people who build in public. If they show a tool, I try it. If they use a method, I copy it. Then I break it. Then I rebuild it.”

The developer stack that used to take years to learn is now learned through vibe and repetition. Cursor, Lovable, Vercel, Supabase—these are not platforms they’re discovering. They’re the default environment.

What they lack in credentials, they make up for in cadence. What they lack in polish, they replace with speed.

They don’t care about enterprise readiness. They care about whether it works. Whether it ships. Whether it scales to the next test.

They Think About Trust Very Differently

We asked a basic question: What platforms do you trust for information—Google, Reddit, TikTok, or ChatGPT?

One answered:
“I don’t really trust any of them. They’re all collecting data. You just have to use a few, cross-check everything, and rely on some common sense.”

Then came this line, casually delivered and absolutely unforgettable:
“If there’s anyone you should trust the least, it’s yourself.”

This wasn’t cynicism. It was a working theory of intelligence. A worldview shaped by systems thinking, fast iteration, and feedback loops. They trust outputs only as far as they can validate them—and that includes their own.

This is not a generation raised to believe they’re right. It’s a generation raised to test their assumptions. Repeatedly.

They Are Rewriting the Internet

When asked to imagine what the internet will look like in ten years, one of them didn’t hesitate.

“You won’t need the internet. You’ll just talk to your own AI assistant. Like Jarvis. It will do everything for you.”

Not a prediction. A prototype. They’re already building around this idea—browserless agents, custom assistants, interfaces built on prompts, not clicks.

Where older generations grew up navigating websites, this generation is replacing that cognitive framework entirely. They’re not refining the internet—they’re redesigning it.

And what they imagine feels far less like a user interface and far more like an extension of themselves.

They’re Building Games as Funnels and Writing Code as Culture

One of the more surprising moments came when one shared a recent project: a simple browser game tied to a current event. Fast-paced, made in Cursor, deployed in a day.

What made it interesting wasn’t the gameplay. It was the logic behind it.

“After people play the game, I ask them to enter their email to be added to the leaderboard. That’s how I grow the list for my newsletter.”

It was a loop: event > game > lead capture > distribution > repeat.

It wasn’t a startup. It wasn’t even a product. It was a funnel disguised as fun, and a signal of how deeply embedded systems thinking has become in how they build—even when the projects feel light, fast, or playful.
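
The server side of that loop is small enough to fit on a napkin. Here is a sketch of the lead-capture endpoint under stated assumptions: Express with an in-memory array stands in for whatever the real build uses (likely a hosted database such as Supabase plus an email provider), and every route and field name is hypothetical.

```typescript
// Sketch of the lead-capture half of the loop: game > leaderboard > newsletter list.
import express from "express";

type Entry = { email: string; score: number };
const leaderboard: Entry[] = []; // stand-in for a real table; doubles as the mailing list

const app = express();
app.use(express.json());

// The game posts the player's score plus their email to join the leaderboard.
app.post("/leaderboard", (req, res) => {
  const { email, score } = req.body as Entry;
  if (!email || typeof score !== "number") {
    return res.status(400).json({ error: "email and numeric score required" });
  }
  leaderboard.push({ email, score });
  leaderboard.sort((a, b) => b.score - a.score);
  res.json({ rank: leaderboard.findIndex((e) => e.email === email) + 1 });
});

// The game fetches the top ten to render after each round.
app.get("/leaderboard", (_req, res) => {
  res.json(leaderboard.slice(0, 10));
});

app.listen(3000, () => console.log("leaderboard listening on :3000"));
```

Swap the array for a database table, pipe the emails into a newsletter tool, and the entire funnel is a weekend project.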

This Is Already Happening. Now What?

These moments add up to a clear truth: the next version of the internet isn’t being debated on panels or whiteboarded in boardrooms. It’s being built by a generation that doesn’t see the old structures as sacred.

They are building agents, not apps.
They are deploying experiences, not websites.
They are moving through rapid, recursive loops of experimentation and iteration.
And they are doing it without waiting for a job title, an invite, or a budget line.

That’s not something to fear. That’s something to learn from.

If we want to understand where the internet is going, we have to look outside the usual circles. The next generation is already inside the machine, modifying the blueprint, rewriting the rules—and doing it faster than we think.

They aren’t just using AI. They’re shaping it.
They aren’t just consuming the internet. They’re rebuilding it.
And if we’re paying attention, we can meet them there.

We run events every week. If you want to participate, inquire about membership here. If you want to keep up with the perspectives that we hear, you can subscribe to our weekly newsletter.

Editor’s Note: One of the most common references we hear in all of our events is due for a take-down. Our co-founder Toby does his best work when he’s fired up. Check it out and let us know if he is leading you into a 'trough of disillusionment' or a 'plateau of provocations.'

The Gartner Hype Cycle has become the go-to cliché in tech. Not only does this lead to boring, useless perspectives, but it also relies on an outdated research methodology that makes no sense.

For the uninitiated, the Gartner Hype Cycle is a 30-year-old research method that tracks tech adoption over time. Despite its name, the hype cycle is an undulating line (not a cycle) that tracks emerging ideas from the so-called “innovation trigger” through the “peak of inflated expectations,” down into the “trough of disillusionment,” and up the “slope of enlightenment” until it reaches the “plateau of productivity.”

If this sounds like the narrative arc of a hero’s journey, that is because the hype cycle is based on fiction. You can read about its origins from its creator here. It was designed to help organizations time and calibrate their investment in developing technology. The key idea: early adopters should expect a dip in enthusiasm — the so-called trough of disillusionment — and, with enough patience and capital, the technology will eventually climb to the plateau of productivity.

To be clear, I’m not here to take down Gartner. They successfully leveraged an unprovable narrative arc as a brilliant marketing tool for their services. It’s like a technology horoscope that sounds right 65% of the time. It served its purpose, but now we’re dealing with something bigger.

AI transformation is bigger than hype. It is a super trend. If we plot it on a linear curve, we are obscuring much more interesting considerations.

To navigate AI transformation, leaders need a framework that embraces uncertainty and complexity. I’ve spent the past year interviewing dozens of AI leaders and innovation experts. Their thoughts and observations fit into three states of AI transformation: Possible, Potential and Proven.

Here’s how to envision these three states, along with the essential provocation leaders should ask themselves in each:

Toby Daniels

First State:

Things That Are Possible

This is AI’s realm of imagination, where ideas spark but haven’t yet materialized into practical use cases. Theoretical concepts like Artificial General Intelligence (AGI) exist here. Think HAL from 2001: A Space Odyssey. AGI would have human-like cognitive ability.

OpenAI’s focus on AGI reflects the company’s ambition to tackle problems that are decades, if not centuries, away from being solved. And IBM’s neuromorphic computing explorations into brain-inspired chips like TrueNorth aim to mimic human cognition. These chips promise transformative computing capabilities but remain highly experimental. In the First State, research papers and experimental algorithms dominate the landscape, and progress is measured in breakthroughs, not revenue.

If you’re investing here, are you betting on the future or indulging in a fantasy?

Second State:

Things That Have Potential

Here, AI technologies are emerging from academic journals and proof-of-concept stages into early-market experimentation. They’ve demonstrated value but haven’t yet reached a point of reliability or scalability. This is where buzzwords often overshadow results, and visionary leaders must decide how to allocate resources to support fragile ideas.

Tesla’s Full Self-Driving vehicles exemplify this potential. They function in controlled scenarios but remain far from delivering consistent, regulatory-approved results.

Stripe’s AI-powered fraud detection, which flags and mitigates fraud in online payments, also lives in the Second State. While effective, it requires continuous refinement to adapt to new threats and remain reliable at scale.

Are you willing to nurture fragile, early-stage innovations, or are you only here for the immediate return on investment?

Third State:

Things That Are Proven

This is AI’s gold rush — the domain of scaled, reliable technologies delivering measurable value. Companies here have turned AI from a speculative bet into a fundamental driver of their operations and profits.

Amazon’s AI-driven recommendation algorithms are foundational to the company’s success, influencing customer purchases and optimizing logistics.

Siemens’ predictive maintenance systems in manufacturing ensure operational efficiency, reducing downtime and saving billions annually. And UPS’ AI-driven On-Road Integrated Optimization and Navigation system (ORION) optimizes delivery routes in real time, saving millions in fuel and time.

Will you leverage these for the present while building for the future? Or will you let comfort become your cage?

Three Challenges AI Leaders Must Accept

Being an AI leader capable of handling the three states of AI transformation isn’t about frameworks, acronyms or decks. It’s about who you are amid ambiguity, pressure and doubt.

The most productive conversations I’ve had over the past year have involved the three components below. If you drive AI transformation in your enterprise, ask yourself:

1. Can you embrace being wrong?

Leading AI means making calls with incomplete information. You will fail. The question isn’t whether you’ll stumble — it’s whether you’ll learn fast enough to stay in the race. When was the last time you admitted a mistake to your team? If you can’t remember, you’re already in trouble.

2. Are you ready to rethink your identity?

You’re not a decision-maker; you’re a decision-shaper. Your role isn’t to control outcomes — it’s to create environments where great outcomes emerge. How often do you let your team’s experiments challenge your instincts?

3. Can you manage fear — yours and theirs?

AI is a pressure cooker. It amplifies anxieties about jobs, ethics and the future. You can’t outsource courage to a playbook. Leadership means stepping into those conversations, not avoiding them. Have you addressed your team’s fears about AI — or have you assumed their silence means support?

This moment requires breaking out of regimented ways of thinking. Like classical music, classical leadership thrives on precision and control. What’s needed is jazz leadership, which thrives on responsiveness and improvisation. AI’s pace means leaders must constantly adapt to new rhythms and riff on emerging opportunities.

Expecting AI to follow the classical path of the Gartner Hype Cycle will only lead you into a cacophony of failure.

We run events every week. If you want to participate, inquire about membership here. If you want to keep up with the perspectives that we hear, you can subscribe to our weekly newsletter.

AI doesn't understand a damn thing.

Editor’s Note: A year ago, we talked about the rise of the so-called EQ internet, and now we are excited to share a perspective from a founder who is actually building the damn thing.

AI has a problem: it can sound intelligent, but it doesn’t understand a damn thing, not even with so-called “reasoning.” We’ve spent years marveling at the fluency of Large Language Models (LLMs), watching them churn out perfectly structured paragraphs, draft polite emails, and even mimic human warmth. But here’s the reality—LLMs are linguistic illusionists, dressing up statistical predictions as understanding. And that’s where the hype stops.

Max Weidemann

The Great AI Deception

People keep asking, "When will LLMs become empathetic?" The answer is simple: never. Not as long as we’re relying on black-box models trained to predict words, not meaning. LLMs don’t feel, they don’t reason (unless you think humans reason by telling themselves to think step-by-step, meticulously going through every step in their thoughts for 30 seconds, and then forming a response), and they certainly don’t care. They spit out responses based on probability, not insight. You can ask them for life advice, and they’ll generate something that sounds right—but without any real understanding of your situation, your motives, or your emotional state.

Let’s be clear: AI-generated empathy isn’t empathy. It’s a script. It’s a formula. And the moment you need real understanding, real nuance, real depth, these systems crumble. That’s because empathy isn’t about mirroring the right words—it’s about knowing why those words matter.

The Missing Layer: Real Emotional Intelligence

If we want AI to be truly useful in high-stakes environments—hiring, leadership, relationships, mental health—it needs more than linguistic gymnastics. It needs real emotional intelligence (EQ). That doesn’t mean programming in "compassionate" responses or tweaking outputs to sound more human. It means AI must be able to interpret personality, motivation, psychological states, and behavior over time.

LLMs can’t do that. They don’t track human patterns, they don’t learn from long-term interactions, and they certainly don’t recognize why someone might be saying one thing while meaning another. That’s what EQ-driven AI solves. Not by generating better generic responses, but by tailoring interactions to the individual—based on psychology, not word probability.

Why This Matters Everywhere

Without EQ, AI is useless in the places where human understanding actually matters. HR tech powered by LLMs? That’s just glorified keyword matching, completely missing whether a candidate fits a team’s culture or will thrive in a company’s work environment. AI-powered therapy chatbots? They can parrot self-help advice, but they can’t detect when someone is on the verge of burnout or spiraling into a depressive episode. AI in customer service? Sure, it can say "We understand your frustration," but it doesn’t understand anything.

The world doesn’t need more artificially polite chatbots. It needs AI that actually understands people—that can read between the lines, identify underlying motivations, and adapt dynamically. Otherwise, we’re just building fancier parrots that sound good but know nothing.

The Future: AI That Gets You

The next wave of AI isn’t about making LLMs sound more human—it’s about making AI think more human. That means moving beyond black-box predictions and into explainable, psychology-based models that process human emotions, intent, and long-term behavioral patterns. It means AI that doesn’t just summarize data but can tell you why someone is likely to succeed in a role, why a team is struggling, why a customer is about to churn.

We run events every week. If you want to participate, inquire about membership here. If you want to keep up with the perspectives that we hear, you can subscribe to our weekly newsletter.

AI Transformation: Solve Small and Think Like a Founder

Don't fall for the big talk

Editor’s note: Our Co-Founder has developed this perspective about AI transformation after hearing countless talks about the so-called AI revolution. We think it’s such a good break from the conventional approach to AI adoption that we organized a summit around it.

Artificial Intelligence is sold to the C-suite as transformation at scale—a revolution in business, a redefinition of the workforce, a paradigm shift. Every AI keynote, whitepaper, and corporate summit emphasizes “epic transformation,” the kind that reshapes industries overnight.

But here’s the truth: AI transformation rarely happens in a single leap. Instead, it evolves through a series of incremental, often messy, small-scale shifts. And it’s in those smaller moves—often overlooked in corporate case studies—where AI’s true impact is being felt. This is the Solve Small approach: focusing on targeted, bottom-up AI interventions that remove inefficiencies while preserving the human touch where it matters most.

This is where AI transformation mirrors the way great founders run their companies. Conventional business wisdom says that scaling an organization requires distributing decision-making, adding layers of management, and diffusing control. Yet, the most effective founder-led companies—like Apple, Airbnb, Shopify, and Nvidia—reject this model. Instead, they remain deeply involved in the details that matter, ensuring that speed, adaptability, and clarity drive their organization forward. AI transformation requires the same approach: high-touch, iterative, and deeply embedded within the business.

Toby Daniels

Founder, ON_Discourse, former Chief Innovation Officer, Adweek, Founder and Chair, Social Media Week

The Problem With Epic AI

The way we talk about AI inside boardrooms is broken. The discourse is full of sweeping, cinematic narratives—AI will “reinvent how we work,” “unlock human potential,” and “create limitless efficiency.” Yet, this kind of hype obscures the real work required to integrate AI successfully.

Consider the C-suite executive who leaves an AI conference with visions of radical automation, only to return to an organization struggling with basic data hygiene. Or the startup founder promising a fully autonomous AI-powered workflow, only to realize that employees don’t trust AI-generated insights. The gap between expectation and execution is vast because the AI discourse favors spectacle over substance.

This is why AI transformation should be approached like a founder running their company—not through bureaucratic committees and abstract strategies, but through direct involvement, rapid iteration, and a relentless focus on solving small, meaningful problems.

Thinking Like a Founder: AI Transformation in Small, Impactful Steps

The best founder-led companies thrive because they embrace hands-on decision-making and fast, iterative improvements. AI adoption should follow a similar model. Here’s what that looks like in practice:

1.

Solve Small: Incremental Change That Compounds Over Time

Great founders don’t overhaul their entire organization overnight; they make continuous, strategic adjustments. AI transformation should follow the same principle. The most effective AI-driven businesses treat AI like compounding interest—small investments that build on each other:

  • A sales team starts with AI-assisted meeting transcriptions, then layers in automated CRM updates, and later integrates predictive sales forecasting.
  • A manufacturing plant implements AI for maintenance logs, extends it to predictive downtime prevention, and eventually integrates it into supply chain optimization.

Like a founder iterating on product development, AI transformation isn’t about flipping a switch—it’s about stacking small improvements until they create something larger than the sum of their parts.

2.

AI as a Bottom-Up, Ground-Level Initiative

The best ideas don’t always come from leadership—they emerge from people closest to the work. Founder-led organizations like Nvidia empower employees at every level to share insights directly with leadership. AI adoption should work the same way:

  • A call center rep starts using ChatGPT to summarize support tickets before management even considers AI integration.
  • A junior designer leverages AI-generated layouts to speed up work, improving both quality and output.
  • A coder uses AI-assisted debugging not because leadership mandated it, but because it’s simply faster and more efficient.

AI initiatives should mirror the Solve Small model—where leadership listens, learns, and scales what works, rather than imposing AI from the top down.

3.

AI as a Fast, Iterative Process

Founder-led companies don’t rely on long planning cycles. Airbnb’s Brian Chesky eliminated unnecessary layers of management and engaged directly with product teams to make faster, better decisions. AI transformation should follow the same principle:

  • A legal team pilots AI contract review with one clause at a time rather than automating the entire process at once.
  • A retail company A/B tests AI-generated product descriptions for a subset of SKUs before rolling it out across the catalog.
  • A logistics firm implements AI-driven route optimization for a single delivery region before expanding nationwide.

Successful AI adoption moves at the pace of iteration, not perfection.

4.

AI as Local, Not Just Enterprise-Wide

Not every AI innovation needs massive cloud infrastructure. The most impactful AI-driven improvements happen at the local level—on an individual’s laptop, phone, or department-specific system:

  • A doctor using AI for voice-to-text medical notes on their own device, rather than a hospital-wide AI integration.
  • A journalist using AI summarization locally for research without relying on centralized editorial AI mandates.
  • A salesperson using an AI-powered meeting assistant that operates on their phone, rather than waiting for IT to implement a corporate-wide AI tool.

5.

AI as Specific, Not Broad

Great founders don’t try to do everything at once. They focus on solving one problem exceptionally well before expanding. AI transformation should be approached the same way:

  • AI for one type of document scanning (e.g., invoices) works better than trying to automate all document types at once.
  • One language model per department (e.g., legal vs. marketing) avoids generic, diluted results.
  • AI that refines a single metric (e.g., reducing customer service handle time) often outperforms AI designed to “optimize” an entire workflow.

6.

AI as an Invisible, Seamless Part of Work

Founder-led companies prioritize clarity—teams work best when they know exactly what to focus on. AI should operate the same way: it should be so seamlessly integrated that it disappears into the workflow.

  • AI-powered email filters reduce spam and prioritize important messages.
  • AI-driven search ranking surfaces better results without users thinking about it.
  • AI-enhanced writing suggestions feel like part of the workflow, not a separate tool.

The Future of AI Is Solve Small—And Founder-Driven

Big AI transformation stories will always dominate headlines, but in reality, the organizations that win will be the ones that think like great founders—staying hands-on, moving fast, and solving small, again and again, until the transformation is undeniable.

If business leaders want to “go big” on AI, they should start by solving small—and staying directly involved every step of the way.

Interested in attending the Summit? Learn more and request an invitation here.

Editor’s Note: We loved this perspective from Chris Perry, one of our most literary members. It frames the current technical disruptions with a critical historical context. In addition to writing great essays, Perry literally wrote the book on AI Agents. We can’t recommend it highly enough.

The sledgehammer fell in 1992. I was twenty-two, fresh out of college, and returning to my father's world. His graphic arts studio was a place I'd known growing up but never understood until I worked there with him.

Everything had changed by the time I arrived; we just hadn’t recognized it yet. The device of destruction wasn't physical. It was digital, invisible, and ruthless. The sledgehammer was Photoshop.

Until then, my Dad was a force in town. To understand him was to understand American business before it became “virtual.” He was a social animal with ambition unbound by caution or vision. His handshakes could hurt, his laughter echoed, and his stories stretched but never broke. He ran his studio as he carried himself. It was a hothouse where people who made things happen—and made things—were gods.

For two decades, they were at the center of the advertising world. In Detroit, art met industry more than anywhere else. His crew knew the essential in between—how to make a car shine on the page in ways it never quite did on the showroom floor.

His designers, drawers, typesetters, and camera techs were craft workers. It was fitting that they worked in a creative factory with a particular feel and scent that reflected the times. The chemicals could strip paint. The smell of paper emanated fresh from the cutter. The constant cigarette smoke hung like clouds around the ceiling lights. When the automation came, the air changed.

The hammer dropped, erasing everything we knew, including the feel. It happened one pixel at a time.

Chris Perry

Gradually, Then Suddenly

As the saying goes, Photoshop’s impact hit gradually, then suddenly. Steve Jobs introduced the Macintosh in 1984 with his famous Super Bowl commercial. It featured an actual sledgehammer thrown at conformity. Jobs didn't say then that his bicycle for the mind would become a wrecking ball for certain kinds of creative work—my Dad's era of creative work.

The Mac led to new software, most notably Photoshop. The effect wasn't immediate but gained momentum by the early 1990s. Jobs' beige computer boxes started showing up on more desktops in the workplace. Once new software was loaded into them, the creative rhythms changed. The click-click-click of mouse buttons began to drown out the scratch of pencils and the squeak of Pentel marker tips.

My Dad and his band (the present company included) didn't adapt fast or fundamentally enough. Revenue and margins shrank as clients took the work in-house. We doubled down on what we knew, only accelerating our demise. A new technology, and those who knew how to use it, dismantled the business in about 24 months.

We should have seen the hammer coming because it was already there. The lesson: Not reacting to a technological wave until it breaks over you is more than just a business failure—it can be an enduring, personal failure. I promised myself never to be caught so ill-prepared again.

Magical Automated Workflows

Photoshop turned 35 last week. If you were to imagine a single icon representing a creative transformation, it would be the "Magic Wand." One-click. That's all it took. A click and similar colors were selected as if by sorcery. Same with a click to alter the composition of an image. Ditto for envisioning variations of a scene.

What had once required careful technique and training was now available to anyone with access to the software and the patience to play and learn with it.

The magic wasn't just in what the wand could do but in what it represented. The transition from physical to digital craft offered efficiency.

Ironically, Photoshop—and the efficient workflows it led to—also came from family. Here is a bit of backstory on how it came to be.

Thomas and John Knoll grew up in a house that valued art and technology. They stood at the crossroads of two worlds, uniquely positioned to bridge them.

In 1987, Thomas Knoll wanted to display grayscale images on his Mac's black-and-white monitor. It was a practical problem with what seemed like a limited solution. His brother John, working at Industrial Light & Magic, saw further. He convinced Thomas to expand the program to handle color on the new Macintosh II. What began as a personal project caught the attention of the industry's power players.

A capability previously reserved for mainframes could now run on a PC. Adobe recognized the potential immediately, securing distribution rights in 1988 and releasing Photoshop 1.0 for Macintosh in 1990.

Photoshop didn't transform creative work in isolation. PageMaker, released by Aldus in 1985, opened the door to what would become known as desktop publishing. Photoshop and PageMaker encoded creative techniques and integrated workflows into software that creatives and producers could use directly. Those in the studio world who didn’t adapt to augmentation and changing workflows were permanently displaced.

The Magic Fades Without Imagination

Decades later, a much bigger automation wave is building. Intelligent software is reshaping all creative and knowledge work, echoing what happened in our studio.

Some automation parallels are striking. Both represent shifts from manual to digital creation and spark similar existential questions about the value of work and the identities of those who do it. With desktop publishing, page designers and typesetters rightly feared obsolescence. Today, anyone who produces knowledge, research, or creative work naturally expresses the same concerns about generative AI.

There are also critical differences between automation then and now.

Desktop publishing tools were extensions of human work. They replicated technical aspects but required direct human guidance for every decision. Generative AI tools can generate work with minimal direction, shifting human expertise from execution to curation.

Desktop tools required understanding design principles, but generative AI can produce seemingly decent output without the user knowing the underlying fundamentals.

Perhaps most significantly, the desktop publishing revolution unfolded over a decade while generative AI's capabilities are evolving at an incredible speed.

The importance of judgment, discernment, and taste in delivering commercial-grade work remains unchanged, whether we’re talking about Photoshop thirty years ago or OpenAI’s latest reasoning model today.

Consider Photoshop's meaning as a metaphor for the current moment. It automated specific, repeatable, known tasks and made technical processes faster and more accessible. However, it could not replicate the mystery of creativity or imagination, which no software has yet managed to do.

And therein lies a twist. Looking back, what reads like a family business failure doesn’t tell the whole story.

Having lived through the destruction of our business, I took a path that shows what becomes possible when a new technology destroys and creates simultaneously.

Alongside highly inventive colleagues and clients, I’ve helped create and grow new businesses built on mobile computing, e-commerce, weblogs, social networks, digital content, app development, community management, digital video, and social intelligence.

We capitalized on these tech breakthroughs not merely by understanding their specifications or original use cases but by seeing what they could lead to—by tapping into our creative capacity to imagine and bring new possibilities to life. AI is the next frontier on which to build.

Yes, AIs can encode what has been and suggest probabilities for the future. They can analyze patterns from the past with astonishing accuracy. But they cannot predict how we'll ride the next wave.

The hammer will fall, but what emerges isn’t simply destruction or dead ends. There can be a lot of light in and at the end of a transformation tunnel.

Neither my Dad nor I fully understood it in a moment of failure. It’s a reminder—then and now—that it’s hard to read the label of the jar you’re in. You have to see it from the outside.

We run events every week. If you want to participate, inquire about membership here. If you want to keep up with the perspectives that we hear, you can subscribe to our weekly newsletter.

Are we auto-tuning ourselves into obscurity?

Editor’s Note: Henrik has a knack for provocations. After all, he coined the term ‘donkeycorns.’ In this post, he reflects on the unintended consequences of AI polishing. Do you agree with him?

Remember when auto-tune in music was subtle, a hidden trick to perfect a vocal track? Then came T-Pain and the era where auto-tune became not just visible but celebrated—a feature, not a bug. The same pattern has played out across social media: Instagram filters, touched-up LinkedIn headshots, AI-enhanced profile pictures. We've moved from hiding our digital enhancements to flaunting them.

If you use AI to polish your LinkedIn profile, it will suggest improvements to your bio, enhance your profile picture, and help craft the perfect humble brag about your recent accomplishments. The result is objectively "better"—more professional, more engaging, more likely to attract opportunities. But does it give you an edge when everyone can access the same tools?

What happens when perfection becomes commoditized? When anyone can project an idealized version of themselves? As AI makes perfect self-presentation available to everyone, the value of that perfection plummets. When anyone can generate an idealized AI headshot or have their writing polished by ChatGPT, what becomes scarce—and therefore valuable—is authenticity.

This creates a fascinating paradox: we begin manufacturing imperfection. Using costly signals to demonstrate a lack of costly signals. It wouldn't be the first time. British aristocrats historically showed their status through deliberately shabby clothing (which had to be the right kind of shabby). There's an inverse relationship between the cost of a designer handbag and the visibility of its brand mark. The ultimate flex is not needing to flex at all.

Henrik Werdelin

Perfection and Intimacy

In a world where anyone can present as perfect, imperfection becomes the new premium—but it can't be just any imperfection. It must be curated imperfection, the kind that signals authenticity without looking careless. A perfectly unpolished selfie. An AI-written post with just enough human awkwardness (or Danish spelling mistakes) left in. A bio that feels refreshingly unoptimized.

Our quest for perfection isn't new. Our drive for self-improvement and presenting our best selves is highly adaptive. We know intuitively that polishing our presentation can open doors and create opportunities. There's an evolutionary logic to this impulse; after all, we want to be attractive to those we wish to attract.

But we also know, bone-deep, that being truly seen and accepted for who we are—messy, imperfect, human—is what allows us to form genuine connections. Vulnerability creates intimacy. The things we try hardest to hide—our struggles, fears, and insecurities—are precisely what connect us to others. When someone trusts us with their vulnerabilities, we feel chosen, and it's only when we share our own that we feel truly known.

AI brings this ancient tension between wanting to impress and connect into sharp relief. When we can present a perfectly polished version of ourselves, we're forced to ask: What are we optimizing for? Do we want to be admired or understood? How do these choices shape who we become?

I've built an AI system to track what I eat and give me feedback. I've tried other calorie-tracking mechanisms but found I tended not to report what I wasn't proud of. That doesn't happen with this one because the AI doesn't judge if I overindulge. On the other hand, it doesn't care. At all. So I still send meal photos to my human personal trainer. In the "attention economy," AI can replicate the mechanics of attention, but not the meaning of it.

This dynamic plays out across our digital landscape. LinkedIn is likely full of posts written by ChatGPT, which get posted unread and then copy-pasted unread into ChatGPT, which produces a thoughtful comment that gets posted unread. Yet people still avidly read the AI-written comments on their AI-written posts. Why? Perhaps because even artificial attention scratches a very real itch for recognition.

Perhaps the interesting question isn't whether AI will increase isolation or intimacy, but how it will transform our understanding of connection itself. Just as social media didn't replace friendship but changed how we think about it, AI may redefine how we express our need to know and be known. The truly interesting developments will come when we stop using AI merely to make ourselves look good and start discovering what new forms of connection become possible because of it. So what about you? What will you choose to keep imperfect, and where will you auto-tune yourself?

We run events every week. If you want to participate, inquire about membership here. If you want to keep up with the perspectives that we hear, you can subscribe to our weekly newsletter.

Don't Listen to Us, Listen to Them

Announcing our first round of summit speakers

Editor’s note: After 18 months and hundreds of events, conversations, and activations, we curated a group of executives who have a real story to share about practical AI transformation. These are the people who are solving small for the enterprise.

No one comes to an ON_Discourse event to hear me or anyone on my team speak. We are provocateurs, not pontificators. We leave that to the real experts: the executives who are building, leading, and doing the hard things that will move markets. This type of expert is hard to find in the AI era.

These experts are driving meaningful AI adoption at the enterprise level. To do this, they abandoned epic and hyperbolic AI theories in favor of practical, immediate investments that improve business outcomes today. We call that solving small, and we organized a summit around their stories.

And on March 26, we are going to provoke them into telling their story in a new way. I am excited to share our first round of confirmed speakers for the Solve Small ON_Discourse Summit on AI Transformation.

Setting the stage with the Solve Small mindset—why small, tactical shifts lead to big impact.

Dan Gardner

Co-Founder and Executive Chairman, Code and Theory, ON_Discourse

The rise of Agentic Managers and what they mean for the future of management, leadership, and productivity.

Katherine von Jan

Founder, CEO, Tough Day

How a global marketing team is integrating AI, not just experimenting with it.

Don McGuire

CMO, Qualcomm Incorporated

The AI guardrails every business needs—and why the entire C-Suite needs to get on-board.

Mark Howard

President, COO, TIME

In the coming days and weeks we will announce more provocateurs, agitators, builders and makers who are driving enterprise-level transformation from the bottom up.

What to expect at The ON_Discourse Summit

This is not another AI conference filled with high-level platitudes. ON_Discourse is designed for those leading AI transformation inside the enterprise—across functions, across teams, and across the C-suite.

  • Sharp, provocation-driven keynotes that move beyond theory and into action.
  • Small-group discussions designed to generate practical, real-world strategies.
  • A cross-functional approach that goes beyond AI as a tech initiative to AI as a business transformation tool.

Why Solve Small?

Big AI transformation stories dominate the headlines, but the most meaningful change happens at a smaller scale:

  • Small is implementable—today, not next year.
  • Small is iterative—it can fail, adapt, and evolve.
  • Small is tangible—it moves beyond theory into action.
  • Small is powerful—because when compounded, it leads to massive transformation.
