Co-Produced with AI

The Promise, The Threat, and The Mirror

Toby Daniels

ON_Discourse Editorial Team

Editor’s Note: During Cannes Lions, we partnered with the team from LTX Studio to host a debate on AI’s role in the creative process. LTX Studio is an AI-powered creative co-pilot, an all-in-one platform that thinks like a creative and moves like a producer. Ideate, create, and deliver high-quality video content with full creative control.

When we sat down at Cannes Lions with LTX Studio, we didn’t promise easy answers. We promised friction.

Our event, Co-Produced with AI, wasn’t a panel, a pitch, or a product showcase. It was a test. For LTX. For AI’s role in creativity. For all of us.

Because the future doesn’t announce itself with certainty. It shows up as tension, contradiction, and a good provocation.

Creativity at the Speed of Light…or the Speed of Thought?

We opened with a question that no brand, agency, or platform can ignore: Was creativity ever meant to be efficient?

AI gives us the gift of speed: faster outputs, faster clarity, faster everything. But at what cost? In the room, there was real tension. Does speed strip away the uncertainty, the wandering, the procrastination that lets taste mature and instinct sharpen?

Ido Cohen from LTX believes working with AI as a creative partner is less about efficiency and more about exploration:

“I recreate Ariana Grande videos in LTX Studio. I never get the result I want. But it opens new gates. I see possibilities I didn’t see before. That’s the point. The generation isn’t the end, it’s the start.”

Another participant argued that speed might be the antithesis of creativity:

“If speed becomes the creative edge, what happens to deliberation, serendipity, or even procrastination?”

Corbett Drummey from LTX reframed the entire provocation: “You have to take the time to edit, and then edit again. Without editing, everything AI produces ends up being average.”

Discover more discourse directly in your inbox.

Sign up for the ON_Discourse Newsletter.

SUBSCRIBE

Taste: The Last Human Skill?

Taste became our battleground. Can AI generate taste? Can it learn taste? Can it fake it well enough that it doesn’t matter? One participant captured the humanist stance:

“Taste is exposure. It’s obsession. It’s lived experience. AI can mimic it but not possess it.”

But not everyone agreed. If taste evolves, and if machines can help shape that evolution, maybe we’re clinging to an old definition. If taste lives in the eye of the audience, does it even matter whether it’s human or machine-derived?

The Risk of Risklessness

Our final provocation hit the gut: Do you have the guts to follow your gut?

When AI can simulate every outcome, predict every reaction, and de-risk every idea, what’s left of creative courage? Are we building tools that support risk, or tools that eliminate the need for it? As one participant put it bluntly:

“Democracy won’t drive guts. Tools that promise efficiency don’t reward risk.”

And yet, another voice reframed the threat:

“Generative AI doesn’t end the process—it starts it. It’s not a threat to your gut. It’s a mirror to it.”

LTX Studio: The Co-Pilot, Not the Pilot

For LTX Studio, this was the test they wanted. Could they hold space for these tensions and still demonstrate a platform that amplifies, rather than flattens, human creativity?

They did. The room walked away seeing LTX not as a button that generates content, but as a creative co-pilot that forces new decisions, surfaces hidden instincts, and invites taste to show up on the page.

We run events every week. If you want to participate, inquire about membership here. If you want to keep up with the perspectives that we hear, you can subscribe to our weekly newsletter.

A Year In Discourse

Getting Comfortable with Uncertainty

Editor’s Note: We provoked our co-founder to get introspective about 2024. Unsurprisingly, he turned it back to the discourse. We think it’s a good reflection of the sentiments we keep hearing in our events.

It’s hard for me to fully take stock of the year we’ve just had. You probably feel the same. For me, 2024 was about being in a constant state of interrogation.

Together with my team, and with Matt Chmiel at my side, I’ve hosted summits for Fortune 100 brands, countless group chats, in-person roundtable discussions, and podcast conversations, and I’ve personally interviewed hundreds of business executives, startup entrepreneurs, technology experts, and investors. I’ve listened, transcribed, distilled, and synthesized. We’ve published multiple reports and over 100 articles.

During all of this I have attempted to embody the values we established when we first started ON_Discourse: Provoke, Listen, and Change. It’s not always easy. Groupthink is the antithesis of these values. People are so sure of themselves. They are also mostly wrong. We all are. Especially about the future, and almost certainly when it comes to AI.

What I have learned, what I am certain of, and what I believe we must carry forward into 2025 is this: the ability to provoke new ways of thinking and adapt to ambiguity is no longer optional. It is the foundation of modern leadership.

Toby Daniels

AI: The Mirror We Didn’t Expect

When we asked our members earlier in the year if they would implant a neurochip to eliminate mistakes, the responses revealed far more about humanity than technology. One CEO’s words resonated with me deeply: “What if our mistakes are what make us human?”

Throughout 2024, AI forced us to question everything—creativity, empathy, work itself. SaaS companies watched traditional models erode as AI introduced per-seat chaos. Meanwhile, leaders marveled at AI tools that seemed to wield emotional intelligence, leaving us both amazed and unsettled.

One member shared a provocation I can’t shake: “AI can make us more emotionally intelligent—if we allow it.” Yet this year made me less certain than ever. Should we let AI shape our humanity, or must we shape it first?

Spatial Computing: The Future or Another Hype Cycle?

When my cofounder Dan Gardner shared the provocation, “Spatial is not the new smartphone; it’s the next internet,” during a summit we held for a Fortune 100 brand, it sparked a visceral, fascinating reaction. Over the course of the summit we debated whether spatial computing’s promise was transformative or just pattern-matching old narratives onto new tech. Remember, at the start of the year we wrote about Vision Pro, and by November Apple had announced it was winding down manufacturing of the device. But Meta also announced Orion, its mixed-reality glasses, which were almost universally well received. In one year we’ve gone from thinking we understood the future, to having serious doubts, to feeling almost certain again. We’re basically wrong, most of the time.

This is the tension we love. The difference between defining and exploring is always palpable. Spatial computing isn’t just a technology; it’s a challenge to how we see and name our future. What if the struggle to define it is the point?

The Content Paradox

At the start of the year, I made the claim that content had been commoditized by AI. But something deeper emerged: a yearning not for content, but for connection. In our closed group chats, we noticed a trend toward trusting tastemakers over algorithmic discovery. One of our members admitted that they sometimes just want to turn on a FAST channel and watch whatever is playing. Fatigue with algorithmically driven recommendations is real, and so is decision fatigue.

“If content is endless, what we seek is not more of it but something we can trust—a human touch amidst the firehose.”

Technology Meets Emotion

This year, technology blurred the line between utility and intimacy. At another of our enterprise summits, one exploring AI and the connected home, an attendee shared how empathic AI and digital twins could transform our homes into emotional ecosystems. But these developments also raised harder questions: Should tech meet emotional needs? Or are some things better left untouched?

One leader put it plainly: “Tech has historically failed to serve emotional needs. That is changing.” Whether we are ready for this shift remains uncertain.

2024’s True Gift: Uncertainty

As the year ends, I find myself drawn less to the answers and more to the spaces where questions thrive. ON_Discourse has become a community not of solutions but of shared exploration.

One member described it perfectly: “This is where curiosity meets rigor.” Another offered a simpler truth: “This is where we admit what we don’t know.”

I don’t know what 2025 will bring, but I know this: Wrestling with uncertainty is where we grow. Together, we will keep asking, keep listening, and keep discovering. Because the questions themselves are the point.

Evaluating AI Effectiveness

Do what you love and let AI do the rest.

Matt Chmiel

Toby Daniels

Editor’s Note: This is a dispatch Toby wrote from his recent trip to Web Summit in Lisbon.

“When you deploy AI in your business, you have two fundamental paths: you can either enhance the things people love doing or eliminate the things they hate doing. This distinction might sound basic, but it’s crucial.”

This is what Nicholas Durkin, the CTO of Harness, an AI-delivery platform, said during a roundtable discussion that Dell and NVIDIA hosted at Web Summit in Lisbon.

Take AI code generation, for example. Developers generally love writing code—it’s their craft and their passion. But when AI tools like code generators were introduced, there was pushback. In fact, recent DORA metrics showed that developers using AI code generation tools were less efficient than they were before adopting them. Why? Because these tools inadvertently disrupted the part of their job they enjoy most—writing code.

Durkin went on: “It’s like telling a chef, ‘We’ll handle the cooking for you,’ but leaving them with all the prep and cleaning instead.” Chefs thrive on the act of cooking; they don’t want to lose that joy. Conversely, if you use AI to handle the worst parts of the job—like prep work or cleanup—you empower the chef to focus on what they love. This approach doesn’t just maintain their passion; it enhances their ability to excel.


I spent time with Nick after the roundtable and went much deeper into this topic with him. He outlined a model that he uses with his clients, weighing three metrics when evaluating AI effectiveness:

  1. Efficiency – Does it make processes faster?
  2. Reliability – Is the output consistent and of high quality?
  3. User Experience – Does it make people feel good about their work?

But the most critical factor is alignment with people’s passions. If your AI diminishes the best parts of someone’s job, you’ll face resistance. If it tackles the worst parts, people will embrace it. Focus on “love” and “hate.” Build AI for things people love to do but can’t due to limitations, or for things they can do but don’t want to because it’s boring or repetitive. That’s where AI can make the biggest impact.
