Event Preview:

Toby Daniels
Founder, ON_Discourse

On Wednesday, August 9, ON_Discourse hosts ‘Good Artists Copy, Great AIs Steal,’ a private dinner for Premier Members and specially invited guests in the entertainment, media, and tech industries.

We chose entertainment as our subject because we are at a critical juncture. 

AI’s potential impact in entertainment is just the tip of the spear. It has swept through almost every industry, and this year we have seen these powerful new tools begin to redefine how we think about creativity, ownership, attribution, and distribution. Even the economics of the business are being completely reexamined.

At ON_Discourse, we believe new perspectives can only be found through discourse and disagreement. Our mission is to build a new kind of media company founded on this idea and focused on providing an exceptional level of value to our members: through perspective, not just content, and through relationships, not just connections.

In the coming days we will publish a series of articles that build upon the Good Artists Copy, Great AIs Steal theme and explore the topics from a number of different perspectives. We will also announce future member events in the coming weeks.

Do You

Even Know

What

It Means

To be↓↓↓

Dan Gardner
Co-Founder & Exec Chair, Code and Theory
Co-Founder, ON_Discourse

Creativity isn’t the skill of execution; it’s the process of thinking before doing.

I get it, AI makes a creative person feel uncomfortable. Facing this intersection of creativity and artificial intelligence (AI) might cause a ripple of discomfort, particularly if you’re someone who has dedicated their life to honing creative abilities. It’s understandably disconcerting to contemplate the idea of an AI system challenging your unique capacity for creativity – a quality you’ve always attributed to your personal skillset.

CREATIVE

AI has made me think a lot about creativity recently. I co-founded and help operate a creative agency that employs over a thousand people, all of whom identify as “creative.” Throughout my tenure as a leader in this creative agency, I am often asked, “When did you first see yourself as a creative?” The question always strikes me as odd, as I believe that I have been innately creative all my life. From a young age, I indulged in painting and drawing, eventually developing a fascination with photography, all pursuits traditionally deemed “creative.” When I reflect on those early days of education, and even what transpires in my children’s schools today, I recognize the same fundamental premise at work.

Children who display skills in areas like drawing, painting, writing, or performing are typically labeled as “creative.” Schools, given sufficient resources, will nurture these abilities. Conversely, children who lack such skills are deemed “not creative” and steered towards the acquisition of more practical, “non-creative” skills.

This dichotomy perpetuates itself in the professional world, with creative agencies and even the entertainment industries distinctly separating the “creatives” from the rest. So it’s understandable why creatives may fear new technology: their entire self-concept, built on their unique skills, feels threatened.

In the context of AI, many have started to express apprehension, suggesting that this technology could undermine creativity. I hear many arguments against its use. Even the writers’ union strike includes demands to restrict the technology’s use as a way for writers to protect themselves.

It is my belief that creativity is a skill, but not a skill of the practitioner; it is a skill of thinking. AI becomes a tool that enables thinking in new and profound ways. Digital photography didn’t kill the discipline of photography just because you can take photos on your iPhone, dodge and burn in Photoshop, or use Instagram filters instead of a darkroom; likewise, these new AI tools are enablers of new kinds of creativity. But it is a creativity only a few people will be lucky enough to participate in. It will fundamentally change the way we think of early education and the role of creativity in business and entertainment.

This is the new reality that AI will force us to come to terms with: not everyone is as creative as they thought. What people deemed creative, the skill of doing something, is becoming a commodity. Creativity will go from the 1% of people who think they are creative to a new reality of maybe just 0.01% of people who actually are creative.

Just like traditional artists: only a very select few get to make successful careers from making art. There is no entitlement to the career. And just as many young adults graduate from art school every year but sadly cannot make a career out of their skill set, the same will be true of legacy degrees.

Andy Warhol was famous for having an entire factory executing his concepts, but he was the brain behind each unique idea. Sadly, we can’t all be Andy Warhol.  And the new factory is AI, not people.

Demanding that the creative industries limit the use of AI is misguided. Not only is it virtually impossible to contain this technological advancement, but it’s also shortsighted. It’s akin to bookkeepers resisting QuickBooks for fear of obsolescence, or coal miners protesting clean-energy innovations. Imagine if we had halted the industrial revolution due to fears about job losses.

However, there is a counterargument. Creatives don’t protest against the actual innovation of AI; their objection lies in the idea that AI is a thief. It’s not the automation of the tooling; it’s the data the AI steals from.

But as the saying goes...

Good

Artists Copy

Great

AIs Steal

The advent of AI has shaken up traditional notions of creativity and its value. Many fear that AI is essentially “stealing” creativity, a unique attribute that should be fairly compensated. The belief is that if you use a creative output to generate something new, the original creator needs to be fairly compensated. After all, if someone else profits from your original idea, shouldn’t you get a share? This seems reasonable and hard to dispute.

But what makes creativity unique and therefore valuable? Picasso offered a profound answer when questioned about the worth of his art.

When told, “But you did that in thirty seconds.”

Picasso replied, “No, it has taken me forty years to do that.”

This implies that the value lies in the journey, not just the end product. 

However, Picasso also famously said,

Good Artists Copy, Great Artists Steal

He was implying that the best creativity actually is just old creative ideas revived in a new form. Does this mean no creation is entirely original but rather a derivative of something else? If so, what does “derivative” mean in the context of creativity and human cognition?

Does the person who creates a masterpiece in what seems like thirty seconds, with each stroke backed by their life’s experience, get full ownership of that new thing? In the age of AI, that’s just a nice little anecdote. Now we’ve got machines, bereft of any life experience, churning out 'creative' output at the speed of light.

For centuries, humans have patted themselves on the back for their ability to take historical ideas, toss them around, and present them as something 'unique' and 'original'. Conveniently, we ignore where these ideas came from, lauding the result as claimable and unique. 

Consider Quentin Tarantino, who is hailed as a creative genius. He openly acknowledges the influences that shape his work. His creative process involves drawing from the past to mold his future ideas. Should he have paid the film noir greats royalties because they influenced his style? If instead he had entered a generative AI prompt, “in the style of film noir,” would that suddenly require a different payment for his creative influence?

Perhaps genuine creativity involves reshaping and recontextualizing historical ideas into something unrecognizable from the source. The human mind naturally (and sometimes consciously) does this. But as the origins are typically hidden, the end product is labeled as unique, inventive, original, and therefore, claimable. So, the creator retains a perpetual claim to any benefits derived from it.

But what happens when AI mimics this process of reshuffling and reconstructing ideas to produce something new? If the process is algorithmic rather than instinctual, does it strip the output of its creativity because its origins are more apparent? Is it not considered theft when a human does it, but when an AI does it, it is?

Wait… but…

What happens when AI mimics this process of reshuffling and reconstructing ideas to produce something new? If the process is algorithmic rather than instinctual, does it strip the output of its creativity because its origins are more apparent? Is it not considered theft when a human does it, but when an AI does it, it is?

Good Artists Copy, Great Artists AND AIs Steal.

Could it be that Picasso was wrong? Or perhaps our understanding of creativity and ownership needs reconsideration?

In the end, the great AI invasion has forced us to reassess our holier-than-thou understanding of creativity and ownership. It’s high time we stopped hiding behind romantic notions and accepted that both human imagination and AI innovation are here to co-exist, whether we like it or not.

And this old romantic notion of creativity and ownership assumes that entertainment in the future will be like today. A model where someone thinks of an idea and the idea is executed and then distributed to the masses. Maybe tomorrow, it’s the viewer that dictates what will be created, not the creator. Is this the end of the creator economy?


This is potentially the advent of a new form of entertainment…


The Future of

Entertainment:

Do we even know what we're asking for?

It's

Anticipatory

and

Ephemeral

What happens if only one prompt ever has to be said: “Give me more value”?

Remember when being the first to come up with an idea was the big deal? Everyone hailing the “genius” who thought of something new? We used to think that creativity was something valuable and special, the thing that kept us entertained. 

For generations, we’ve held creativity on this high pedestal. We’ve marveled at the genius of the innovators, the artists, the disruptors. We’ve believed that creativity – the ability to generate something truly original and new – is a uniquely human trait. But let’s be real: do we even need that shiny, fresh-out-of-the-box creativity anymore? Or are we just craving the illusion of something “new and improved” to keep us entertained?

In the world of Web 2.0, personalization means leveraging location, device, and intent to tailor an experience. You could see it in Netflix’s recommendations, Amazon’s suggested products, or targeted ads. Of course, it’s all backed by those good old algorithmic engines that some may argue lack creativity. They’re just doing their job, providing value based on perceived needs.

But now, we’re venturing into a whole new territory. A world where we can prompt a system and voila – out comes something fresh, something creative. But what if these prompts weren’t manually entered? What if they were behaviorally driven, shifting and adapting to our ever-changing needs and desires? What if generative AI could whip up something personalized on the fly? It would be more than personalized, it’d be anticipatory.

Can our behaviors make us creative? Or are we just wandering in an ever-evolving maze of our own creations, no longer needing to come up with anything new? Or are we basically there now?

Imagine taking a scriptwriting class. You’d quickly become acquainted with the familiar pattern found in nearly every movie: protagonists, antagonists, story arcs, resolutions, and so forth. This formula can be identified in almost any story.

Consider Disney’s method of reusing animation, resulting in various movies with shared scenes, such as ‘Winnie the Pooh’ and ‘The Jungle Book.’ Reflect on the choice to reshoot Samuel L. Jackson’s line in ‘Snakes on a Plane’ because of its anticipated impact on audiences. Many of the top hits on Fandango right now are sequels, remakes, or stories spun from established franchises. Or look at two recent TV hits, ‘Yellowstone’ and ‘Succession,’ which weave strikingly similar tales, only tailored to different demographics through congruent themes.

So, what are we heading towards? A future where entertainment is tailored so precisely that it’s practically reading our minds and serving up ephemeral delights?

What does that mean for our requests in the future? Would we be reduced to uttering one prompt: “Give me more value”?

With AI and LLMs, we’re entering an era of 'machine creativity'. These systems can process and analyze massive amounts of data, draw from a vast pool of existing content, and generate responses tailored to individual user needs. They don’t just mimic human instinct; they go a step further by making data-driven decisions that can predict and cater to our needs with astounding accuracy.

Is there room for disruptive human creativity in this new landscape? Perhaps. But as LLMs continue to improve and evolve, these instances will become increasingly rare, and more importantly, they may not be necessary. After all, if a machine can fulfill our needs and desires based on our own behavior and preferences, do we need the occasional disruptive idea?

As we stand on the cusp of this new era, we may need to reassess our long-held notions about creativity. Is creativity really about originality, or is it about delivering value in the most effective and satisfying way possible? Is our pursuit of creativity overrated, particularly when AI systems are capable of delivering more value with greater efficiency?

Perhaps, in the end, we’ll find that 'Give me more value' is the most creative prompt we could ever ask for. It’s a directive that has the potential to render traditional creativity redundant, replacing it with a more accurate, efficient, and user-centric approach to satisfaction and fulfillment. And who knows? We may find that this approach fulfills us in ways we never thought possible.

As we navigate this transition, I’m not sure if any of this will be true, but one thing remains certain: the paradigms of entertainment and creativity are shifting. How content is crafted, delivered, and consumed might be starkly different in the future than it is today.

 

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know

More

Good Artists Copy Great AIs Steal Recap

We Are Algorithm-ing Ourselves Into a Monoculture

Would You Let Netflix Read Your Mind?

Good Artists Copy, Great AIs Steal

How Artists Turn AI Into Gold

A Brush with AI: The Copyright Fight For Digital Creativity

HOW ARTISTS
TURN AI 
INTO GOLD 

Instead of fighting AI,
turn it into an
artist’s marketplace.

Stephen Parker
Creative Director at Waymark, a Detroit-based video creation company

Generative AI has sent a shockwave through Hollywood as creatives and studios debate its impact on the entertainment industry.

For studio owners and AI evangelists, generative AI has the potential to be an industry-disrupting tool for streamlining creative processes and getting projects out the door faster than ever previously possible. Many creatives and artists, however, argue AI technologies pose an existential threat to their livelihoods. 

AI will have far-reaching impacts on creative markets as AI systems like DALL-E 2 and Midjourney refine their products and advance their models, laying the groundwork for new AI systems and applications we’ve yet to fully realize.

While none of us has a clear picture of what this will look like for the entertainment industry, it’s evident a new model for protecting IP is needed to ensure creators can continue benefiting from their human output alongside this emerging innovation.

AI licensing could offer a solution.

Generative AI in entertainment is an industry-defining debate with far-reaching implications that could impact everything from contract negotiations to the way we interpret original art in the future.

Already, there are several lawsuits underway alleging that generative AI systems were unlawfully trained using various authors’ work without their permission. Depending on how generative AI systems are regulated in the U.S. or abroad, we may have a better idea of whether those lawsuits hold water.

In the meantime, creatives have more power than they may realize in the current Wild West of emerging generative AI technologies — perhaps even through a platform that supports continued or alternate world-building for their existing stories and projects.


An untapped wealth of IP

Plenty of creatives already allow their personalities, concepts, and writing to be licensed through brands and ad partnerships (Ryan Reynolds is one such ubiquitous celebrity who excels at image marketing).

These deals are executed under very specific contractual terms beneficial to both parties, and most seasoned actors have a considerable degree of say about what they do on camera. So why should it work any differently with AI?

The debate over generative AI’s use in entertainment production arrives at a poignant moment for the industry; the Writers Guild of America strike began in May and a strike by the actors’ union SAG-AFTRA followed shortly thereafter in July.

And make no mistake: Studios are already AI future-focused. SAG-AFTRA maintains that studios wish to use actor likenesses at will and without compensating those individuals (the Alliance of Motion Picture and Television Producers denies this characterization of its AI proposal). 

I’m not pitching a dystopian “Joan Is Awful” scenario wherein artists are exploited and their digital avatars are able to be used by content-greedy streamers or studios at will, as SAG-AFTRA has alleged was proposed by AMPTP.

But should a screenwriter, actor, or director be looking for new opportunities to capitalize on their oeuvre, they’d be wise to consider an AI framework for licensing. 

I’m proposing a reasonably straightforward concept: a hybrid creative-licensing platform that allows artists to license their likeness, aesthetic, or concepts for use as input for outside projects, creating an ethical — and importantly, lucrative — funnel by which all parties benefit and no copyright infringement or plagiarism of existing works occurs.

Part of the reason artists are so miffed with systems like ChatGPT and Midjourney is they are alleged to have used work and ideas without the express permission of the original authors.

But imagine that an author did allow their ideas to be used as part of a new and entirely unique AI output, with terms protecting the degree of use and how the output can be shared, for how long, and with explicit distribution royalties in place to compensate original authors whose work was used as a prompt. 

One way that this could potentially manifest is through a platform that allows brands, studios, actors, and writers to license their creative output to niche communities like fandoms.

With years between iterative games, books, TV shows, and film series, such a platform would find tremendous success with communities of artists and creatives interested in worldbuilding for purposes outside of commercial marketing.

Harry Potter fans, for instance, would almost certainly pay to expand narratives for their favorite characters, creating entire backstories, sub-narratives, and alternative timelines.

And should such a platform bill itself as a creative community rather than one meant to generate monetizable output, the licensing would be far more appealing to artists looking to capitalize on specific fan followings.

Make no mistake, such a model could be incredibly profitable for all involved parties, including media companies, established industry heavyweights, and emerging creators with significant social followings — specifically where it relates to world-building within existing franchising.

An AI licensing platform could easily filter for specific motifs or genres, and sophisticated terms could provide expansive protections to any creative willing to license their work or likeness through the service.

I envision a platform capable of filtering by specific tags, allowing paying subscribers access to rich data sets from their favorite artists and creatives.  
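To make the platform concept above concrete, here is a minimal sketch, in Python, of how licensing records with tag filtering might be represented. Every name, field, and value here is invented for illustration; this is a thought experiment, not a description of any real service.

```python
from dataclasses import dataclass

@dataclass
class License:
    """Hypothetical terms under which an artist's work may be used as AI input."""
    artist: str
    tags: set[str]          # motifs or genres the licensed corpus covers
    allowed_uses: set[str]  # e.g. {"fan-fiction", "worldbuilding"}
    royalty_rate: float     # share of distribution revenue owed per use
    expires_year: int       # term after which the license lapses

def find_licenses(catalog: list[License], tag: str, use: str) -> list[License]:
    """Filter the catalog for licenses matching a motif and a permitted use."""
    return [lic for lic in catalog if tag in lic.tags and use in lic.allowed_uses]

# A toy catalog standing in for the platform's licensed data sets.
catalog = [
    License("writer_a", {"noir", "thriller"}, {"worldbuilding"}, 0.05, 2030),
    License("writer_b", {"fantasy"}, {"fan-fiction", "worldbuilding"}, 0.08, 2028),
]

matches = find_licenses(catalog, "fantasy", "fan-fiction")
print([m.artist for m in matches])  # → ['writer_b']
```

The point of the sketch is that the terms (permitted uses, royalty rate, expiry) travel with the licensed material, so a subscriber’s query can only ever surface work whose author has opted in for that exact use.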

There are several benefits to third-party licensing that could even the playing field for the entertainment industry at large. For one, actors, screenwriters, and directors could set their own terms and decide how, and for what, their work is legally permitted to be used as input.

Studios with over a century of IP collecting dust, like Disney, could potentially license that material to allow a new generation of creatives to make it fresh again. Or maybe even your normal, everyday franchise enthusiasts could potentially use the service to generate a final season of a canceled show they adored. The possibilities for use are virtually limitless.

Why pay for what's already free?

While conversations about how we navigate our AI future are of paramount importance, it’s also true that generative AI systems are still very much in their infancy.

You have to be fairly technically adept today to create something of high quality with the current suite of widely available generative AI tools. Even short-form projects developed with AI technologies require a lot of work, a lot of training, and a lot of frameworks both technical and conceptual. 

AI technology for developing high-caliber moviemaking is simply not at a plug-and-play stage. And the easier and more accessible the tool, the greater the potential for the product to generate money.

Everybody has a niche, and when they’re able to dial into that niche, that’s when the slot machine starts to ping. 

If you’re wondering why people wouldn’t just use a DALL-E 2-type model to do this on their own, that’s easy: for artists and creators interested in working within a specific artistic motif, the output from a licensed platform would be light-years better and more specific.

If you’re licensing from the creators themselves, you’re going to get a richer body of prompts because you won’t have to sift through a randomized heap of garbage pulled from the internet — which can (and does!) lead to very bizarre interpretations of written prompts by generative AI models.  

Generative AI is still very much an emerging and primitive technology where video is involved, and it is unlikely to soon replace human beings entirely in the ideation, development, and output of fully realized creative works such as short films or scripts, which still require a significant degree of human input.

We’ve also yet to see how copyright protections will work once policymakers begin regulating AI. A number of policy proposals may require generative AI models to disclose how they’ve trained their systems, again raising questions about copyright infringement and how creative works can or cannot be used in the generation of AI output.

At the same time, a creative-licensing platform like the one I’ve proposed may offer a solution from which everyone benefits. And most importantly, it gives back to artists and creators the creative control over their intellectual property while they’re laughing all the way to the bank.


A Brush

With AI

The Copyright

Fight

For Digital

Creativity

The U.S. Copyright Office
needs to adapt to the AI Future.

Generative AI presents a tremendous opportunity to unlock human creativity in ways that never could have been imagined when the concept of copyright emerged.

But the U.S. copyright regime has become an obstacle to this new era of innovation, and not because of what the law actually says.

The U.S. Copyright Office is contorting decades of precedent to impose arbitrary rules on when and how creators seeking copyright protections can use AI tools in their work. If it doesn’t get with the times, the consequences won’t just be limited to artists like me.

In 2022, my painting Théâtre d’Opéra Spatial won a Colorado State Fair award for digital art. I decided to register it with the USCO, even though I knew the office might shoot down the request because I produced it using Midjourney AI.

Jason M. Allen

President and CEO of Art Incarnate, a company specializing in luxury A.I. fine art prints. His creation “Théâtre d’Opéra Spatial” won first place in a Fine Art competition, sparking a global controversy about A.I. in art. 

That’s because the painting had already attracted massive controversy. Even though it was always labeled as an AI-assisted artwork, and the judges confirmed it was a fair decision, critics accused me of lying, art theft, and worse.

What I didn’t expect was for the USCO to write off the entire submission with a dismissive, one-page response: “We have decided that we cannot register this copyright claim because the deposit does not contain any human authorship.” It was hard to believe the nation’s copyright authority would issue a decision on such a consequential issue—who owns the rights to AI art?—with so little substance.

It wasn’t until Tamara Pester Schklar, my intellectual property lawyer, and I pushed back in a request for reconsideration that we received any elaboration. The USCO did admit one mistake. The extensive editing and modification I did to the original AI-generated work was in fact eligible for copyright registration. The USCO also indirectly confirmed that its decision was not based on a finding that AI-generated art infringes on others’ copyrights.

The AI cannot read my mind and steal my ideas for itself.

But it doubled down on denying a copyright to the entire work with an argument that, if accepted, would be disastrous: not only am I not the author of Théâtre d’Opéra Spatial, it is “clear that it was Midjourney—not Mr. Allen—that produced this image.”

That’s just not true. The painting wasn’t spit out by Midjourney at random; it reflected my artistic process of iterative testing, conceptualization, and refining outputs until I found an image that translated my vision into reality. But more importantly, each of those steps was the result of careful and deliberate instructions entered by a human. The AI cannot read my mind and steal my ideas for itself.

Courts have consistently updated their interpretations of copyright to reflect new types of technological innovation (and to challenge biases against them). For example, in 1884, the Supreme Court ruled against a company which had distributed unauthorized lithographs of photographer Napoleon Sarony’s portrait of Oscar Wilde, shooting down the firm’s claim that photography is a mere mechanical process not involving an author.


In other words, the court found Sarony’s camera was a tool through which the photographer translated his mental conception into an artistic work. AI generators like Midjourney might rely on complicated algorithms instead of light focused through a lens onto a film coated in light-sensitive chemicals, but it’s a difference of degree, not kind.

There is no incomprehensible magic going on; Midjourney’s servers don’t have a consciousness of their own. They run diffusion models calibrated using a training set of millions of other images. Diffusion models use deep learning to deconstruct data from the training set, add truly random Gaussian noise, and then attempt to reconstruct the original data. The end result is a model tuned to generate entirely original images.
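The noising half of that process can be sketched in a few lines. Below is a toy forward-diffusion step in Python with NumPy, using an invented noise schedule; real systems like Midjourney operate at vastly larger scale with learned denoising networks, so this is only an illustration of the mechanism, not their implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_diffuse(x0, t, betas):
    """Forward process: blend the clean sample x0 with Gaussian noise.

    Uses the closed form q(x_t | x_0) = sqrt(a_bar)*x0 + sqrt(1 - a_bar)*noise,
    where a_bar is the cumulative product of (1 - beta) up to step t.
    """
    alpha_bar = np.prod(1.0 - betas[: t + 1])
    noise = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise
    return xt, noise, alpha_bar

# A tiny "image": an 8x8 grayscale gradient standing in for training data.
x0 = np.linspace(-1.0, 1.0, 64).reshape(8, 8)

# A short, made-up noise schedule; real models use hundreds or thousands of steps.
betas = np.linspace(1e-4, 0.2, 50)

# Early in the process the sample is still mostly signal...
xt_early, _, abar_early = forward_diffuse(x0, 1, betas)
# ...by the final step it is almost pure Gaussian noise.
xt_late, _, abar_late = forward_diffuse(x0, 49, betas)
```

Training teaches a model to run this process in reverse, predicting the injected noise at each step; generation then starts from pure noise and denoises toward an image that never existed in the training set.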

As we told the USCO in our second request for reconsideration, tools like Midjourney are interactive systems that by definition require humans to code, train, and run them. They’re cameras with extra steps. To put it another way, the USCO is effectively arguing the camera is now so complicated it’s also the photographer!

U.S. copyright law is clear that non-humans cannot author or own the rights to art in any meaningful legal sense. Monkeys are indisputably more self-aware than a large language model. Yet when one picks up a Kodak in the jungle and clicks a button, it doesn’t get the rights to any accidental selfie.

Even the Copyright Office’s decision that my modifications to the raw image in Photoshop are copyrightable as a derivative work isn’t free of haphazard reasoning that should alarm any artist. While the USCO conceded human authorship of the edits, it found the “appreciable amount of AI-generated material” in Théâtre d’Opéra Spatial required limiting the copyright to “exclude the AI-generated components of the work.”

But it hasn’t clarified what constitutes an “appreciable” degree of AI content, even though it seems to draw distinctions between an author’s creative input and the tools they use.

No one can make those decisions without interrogating a creator’s methods and processes. Copyright Act precedent clearly limits a court or reviewing agency’s review to perception of the final product, not how or why it was designed.

Théâtre d'Opéra Spatial - Jason M. Allen / Midjourney - The Colorado State Fair 2022

Do we really want the office applying subjective determinations as to what parts of the artistic process are worthy of protection? Consider the rise of AI-assisted features like Photoshop’s generative fill, which uses machine learning to create and modify elements of an image, and the mess that would result if the Copyright Office continued to apply the same capricious standards.

This isn’t theoretical. On similar grounds of non-human authorship, the USCO denied that I could copyright versions of the painting upscaled via Gigapixel AI, a tool that simply enhances pre-existing details in an image without introducing any original elements. Similar tools like Photoshop filters are already in widespread use. Any remaining contrast between non-AI and AI-powered features of imaging editing programs will inevitably disappear as their developers introduce more powerful features.

This fight is not about me, and it’s not just about art. Already, my company Art Incarnate has been forced to come up with creative solutions for releasing fine art prints, like taking photographs instead so they can be protected as derivative works. Other workarounds like open source licensing can only do so much.



Businesses and entrepreneurs are rapidly adopting generative AI in other fields ranging from entertainment and media to publishing. At some point, though, the novelty will wear off and they will be forced to confront practical commercial challenges. Without assurances that they will be able to safeguard their intellectual property, or that they won’t be forced to jump through IP loopholes to prevent competitors from copying their work, they may be hesitant to continue investing in AI technologies.

Nor is this fight, as critics argue, a vehicle to cheapen the output of traditional artists. We are fighting to extend ownership protections to all creatives who utilize AI in their work, helping keep their livelihoods intact.


Will AI Enhance or Destroy the Business of Media?

Matthew Chmiel
Head of Discourse

The largely agreeable July 28 virtual event sparked more introspection than discourse.

At our inaugural virtual event, we invited three guest experts to discuss the intersection of AI and digital media.

Nearly 30 of our ON_Discourse members joined us that day and were provoked by Dan Gardner and Michael Treff. I spoke to Dan before the event to get a sense of his goals for the discussion. “Everyone in our field expects AI to be a massive disruption, I was hoping to get past the hype and dig into the details.”

From my perspective, the discourse was light, almost too pleasant and agreeable to qualify as deep discourse, but the good news is that the conversation isn’t over yet. Check out some of our key takeaways and let me know if anything you see here makes you want to push this further.

As always, the pull-quotes are not and will never be attributed.

Provocations Used

  • AI will destroy all digital distribution models
  • Fake news is all news
  • LLMO (Large Language Model Optimization) is the new SEO

Discourse Rating

  • Agreeable — there were no obvious and direct disagreements in the group. Let’s turn up the heat next time! As always, send me a note and we’ll find a way to get your follow-up arguments into the mix.

Recap

This is going to destroy the
internet as we know it


Today’s internet is scraped and organized and ranked by Google. Those results are still, for the short term, listed as links that send users to owned and operated web pages. The introduction of chat-based interfaces, where LLMs process the information from the same web pages and then directly deliver that information, without links, is going to flip the paradigm and ruin the internet.  It will homogenize the sources of information in the eyes of audiences.

The key takeaway here is the irrelevance of owned and operated pages – pretty soon, it seems, the need for websites will fade away – as long as the information is available and presented in the chat query.

— Provocation – AI will destroy all digital distribution models

—— what did you think?

Or…

It’s putting agency back
in the hands of the user

The evolution of prompting LLMs is a three-way interaction among an individual, the LLM, *and* the community that is training the LLM. Publishers will have to think in new ways — they can’t just ship some content and stop thinking about it. The essence of this interaction model is that the content gets shaped by the prompts. This is empowering audiences in new ways.

Forget that nonsense about irrelevant web pages: this new interface and behavior (prompting and querying) are going to reveal new needs among users that will deepen their relationship with the true sources of this information. The fact is, the distribution model for digital media has been fundamentally broken ever since Web 2.0. This might actually fix it!

— Provocation – AI will destroy all digital distribution models

—— what did you think?

But….

AI needs an attribution protocol

Several members and guests focused on a specific software requirement that would address questions of reliability and trust. The brands, voices, and processes that delivered the information must persist, in some way, in the final delivery of generated information. In other words, if the LLM is scraping an article from the New York Times, that brand and even that author must be clearly articulated in the answer. There is reason to believe this type of protocol can be enforced because publishers have the leverage: LLMs need good source material to stay relevant, and that puts power back in the hands of the publishers.
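No such protocol exists today, so treat this as a thought experiment: the requirement members described could be sketched as an answer object that carries its sources alongside the generated text. All names, fields, and URLs here are illustrative, not a proposed standard:

```python
# Illustrative sketch of an attribution-carrying answer: the generated
# text travels with the publisher, author, and URL of each source the
# model drew on, so the brand persists in the final delivery.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Source:
    publisher: str  # e.g. "The New York Times"
    author: str
    url: str


@dataclass
class AttributedAnswer:
    text: str
    sources: list[Source] = field(default_factory=list)

    def render(self) -> str:
        """Append a visible attribution line to the generated text."""
        credits = "; ".join(f"{s.author}, {s.publisher}" for s in self.sources)
        return f"{self.text}\n\nSources: {credits}" if credits else self.text


answer = AttributedAnswer(
    "Example answer text synthesized by the model.",
    [Source("The New York Times", "A. Reporter", "https://example.com/article")],
)
print(answer.render())
```

The point of the sketch is only that attribution becomes a first-class part of the payload rather than something buried in generative text.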

— Provocation – Fake news is all news

—— what did you think?

On the other hand…

IT SUCKS

That’s the current state of audience trust in publisher platforms. Will LLMs really make it better? Anecdotal audience research shows that Apple News readers assume Apple News is a news publication and that the articles they are consuming come from Apple News (not the brands and reporters that distribute through the platform). When you remove all links and bury attribution in generative text, you remove any chance of establishing a meaningful relationship between the audience and the brand that created the content.

— Provocation – AI will destroy all digital distribution models

—— what did you think?

… but wait…

Young people are savvy about finding trustworthy sources in the new internet

They have a radar for authenticity, which is the antidote to the synthetic content they’ve been receiving all their lives. They find authenticity in some brands, but mainly in influencers online. This isn’t always good, though, because not all influencers are trustworthy. On top of all that, we don’t know whether the AI itself will contain biases that influence its answers.

— Provocation – Fake news is all news

—— what did you think?

… and…

AI is just software - just like everything else on the internet

SEO emerged as a discipline after some engineers reverse-engineered the ranking algorithm. The same kind of reverse-engineering is going to happen with LLMs – even though they are exponentially more complex, it will eventually happen. And when it does, there will be more attributions embedded in LLM answers. This will drive up the fees that publishers can extract from LLMs because, unlike the internet at large, LLMs reward quality source material: “shitposting will not get you anywhere on ChatGPT.”

— Provocation – LLMO is the new SEO

—— what did you think?

… finally

The Web is undefeated

Many of our members and guests have been working in digital media for decades and they have read (or written) many pieces about the impending demise of the internet – and still it persists.

— Provocation – LLMO is the new SEO

—— what did you think?

Fediverse Deserves

Your Notice,

Even If

You’re Not

Using It

Ernie Smith
Editor of Tedium, a twice-weekly internet history newsletter, and a frequent contributor to Vice’s Motherboard.

Mastodon is known as a Twitter alternative for the technical, but its underlying protocol could help solve some of social media’s biggest headaches.

You might find it a bit uncomfortable to build your social presence on the digital frontier, which is perhaps why Meta’s Threads or Bluesky look a lot more attractive to brands than the fediverse—a loose connection of federated networks, most notably Mastodon, built on an open-source communications protocol called ActivityPub. Even LinkedIn might feel safer in comparison.

The social media pros I’ve talked to have suggested it feels like chaos, and not in a good way. That probably explains why brands haven’t made the leap, unless their audiences are suitably technical. (My Linux jokes do really well there.)

Even though I’m a fan, I’ll be the first to admit that the fediverse doesn’t have the fit and finish of a network built for mass consumption. Still, with a little help from Twitter and Reddit’s recent troubles, it has attracted roughly 10 million total users and nearly 2 million monthly active users, according to FediDB.

While that pales in comparison to the estimated 100 million users on Threads, it’s well above Bluesky’s 1 million users and Post.news’ 440,000. (Threads, of course, benefited from Instagram’s existing network effects.)

That may not be enough to move your needle. But ignoring the potential of ActivityPub entirely is a mistake because services like it often shape the corporate world. It could be a way to control your brand’s digital destiny.

Most social networks follow a hub-and-spoke model, where users pull information from one centralized resource. Platforms on the fediverse run on servers that can talk to any other server on the network.

The effect is similar to early social networks like GeoCities, which relied on interest-based communities. Mastodon ups the ante by encouraging robust local servers with distinct local timelines that can still talk to the outside world. If you’re into anime, you can join a server with a thriving anime community—but you don’t always have to focus on anime. The real power of the fediverse, though, is that these servers can scale into something the size of Twitter while relying on a network of individual hosts who share the server bill and resources.

This has benefits from a moderation standpoint. Dealing with trolls on other social networks is a bit of a crapshoot, but joining smaller servers on Mastodon or competing platforms like Firefish allows you to tailor your experience accordingly.

Prefer free speech over heavy moderation? You can join a server like that. Want to join a more closed-in community instead? You can join a server using software with tighter controls, like Hometown, or join a Reddit-style community hosted on Lemmy.

Just want to use something that feels like Twitter? Hop on a larger server, like Mastodon.social. And if the trolls show up, server mods can block both them and their server—and so can end users.

And if you find that one server isn’t a fit for you, you can take your identity and your followers—but not, in most cases, your posts—to a new server with you.

All this can confuse folks used to centralized networks (i.e. most of us), but it’s a throwback to earlier internet distribution models. The fediverse model evokes Usenet, for example. Another common comparison is XMPP, an open-source protocol commonly used as an alternative to AOL Instant Messenger. And the account naming conventions closely follow the cues of email.
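Those email-style handles resolve through a standard HTTP lookup called WebFinger. A minimal Python sketch of how a client builds the lookup URL for a handle (the account name is illustrative; a real client would then fetch the URL and read the ActivityPub actor link from the JSON response):

```python
# Resolve a fediverse handle to its WebFinger lookup URL.
# WebFinger (RFC 7033) is how ActivityPub servers discover accounts.
from urllib.parse import quote


def webfinger_url(handle: str) -> str:
    """Build the WebFinger query URL for a handle like '@user@example.social'."""
    user, _, server = handle.lstrip("@").partition("@")
    if not user or not server:
        raise ValueError(f"not a fediverse handle: {handle!r}")
    resource = quote(f"acct:{user}@{server}", safe=":@")
    return f"https://{server}/.well-known/webfinger?resource={resource}"


# A client would then GET this URL (e.g. with urllib.request.urlopen)
# and follow the "self" link in the response to the actor document.
print(webfinger_url("@alice@mastodon.social"))
```

This is the plumbing that lets any server on the network find any account on any other server, with no central directory.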

To put it all another way, ActivityPub is open architecture, like RSS. That kind of plumbing will become more valuable on the internet over time, even if it never reaches the scale of Twitter, X, or whatever Elon’s calling it today.

That puts it closer to IRC or old-school mailing lists, which are still used in many pockets, even if they aren’t quite mainstream. These old networks can still have commercial value—XMPP, once known as Jabber, has evolved into an essential communication protocol for the internet of things.

Unlike those, however, ActivityPub is riding a wave of developer buzz, which makes it likely that later generations of apps will support it natively. It has formal support from the World Wide Web Consortium. And it can plug into many mediums: publish content once and it spreads across many platforms, which makes supporting multiple networks with your content much easier.

Admittedly, some challenges could dampen its uptake. While Meta is publicly interested in connecting Threads to Mastodon, many existing Mastodon users are understandably concerned that it might ruin the network—and there’s a chance that could scare Meta off. (Threads already has a huge user base, after all.) But others see this as a way to strengthen the value of existing networks—existing platforms like Tumblr, Flickr, Flipboard, and Medium are also interested in joining the fediverse.

If the fediverse does find a way to get past its cultural challenges with commercialism, it could solve a pair of problems that often slow down major brands online: building an audience and building trust.

Every time a new social network appears, brands waste time convincing their biggest advocates to follow them on the hot new thing, with no guarantee it will be anything more than a flash in the pan. That uncertainty is why many brands have continued to focus on building traditional email lists.

Plus, there’s the whole factor of impersonation and verification, which Elon Musk has muddied the waters on, but remains a major problem for large companies. The fediverse has a much better solution than many proprietary networks: You could self-host your account and attach it to your domain, just like you might host a content hub, and have that account plug into the rest of the fediverse.

A social presence hosted on a first-party website could naturally carry a level of brand association that a checkmark next to a username might lack these days. (If you don’t want to use your domain, Mastodon also has a very easy form of self-verification that works quite well, no $8 surcharge needed.)
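For the curious, that self-verification works through reciprocal links: your Mastodon profile lists your site, and your site links back with a rel="me" attribute, at which point the server marks the link verified. A rough Python sketch of the check, using only the standard library (the URLs are placeholders):

```python
# Mastodon's link verification: your profile links to your site, and the
# site links back with rel="me". This sketches the check a server performs
# once it has fetched the linked page's HTML.
from html.parser import HTMLParser


class RelMeFinder(HTMLParser):
    """Collect href values of <a>/<link> tags carrying rel="me"."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag in ("a", "link") and "me" in (a.get("rel") or "").split():
            self.links.append(a.get("href"))


def verifies(page_html: str, profile_url: str) -> bool:
    """True if the page links back to the profile with rel="me"."""
    finder = RelMeFinder()
    finder.feed(page_html)
    return profile_url in finder.links


page = '<a rel="me" href="https://mastodon.social/@alice">Mastodon</a>'
print(verifies(page, "https://mastodon.social/@alice"))  # True
```

Because the proof lives on a domain you control, no platform fee or checkmark program is involved.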

You could transfer your existing followers to another ActivityPub-compatible server, potentially speeding up ramp-up time to the next new thing.

Some examples of what this could look like are already in the wild—the European Commission, for example, has 85,000 followers on an account on its own dedicated server.

If Threads and other networks plug into the fediverse, it’s not out of the realm of possibility that large companies could build connections with their fans this way while avoiding some of the inherent risks that traditionally come with new networks.

The idea of “owning” your audience has been a somewhat foreign concept in the social media era. The fediverse could finally make it possible.

What does it mean to take your following with you?

It means you can move from one fediverse server to another without losing the people who followed you on the old server, as long as the two servers haven’t restricted connections with each other. Most fediverse servers can speak to each other, except in rare cases when they prefer not to.

What happens if your following isn't on the new server?

In those rare cases where a server denies a connection to another, and your followers are based on that server, you won’t be able to reach them.

What determines whether your content comes with you?

While your followers can follow you to a new server, your posts stay behind on your previous server, though they remain accessible as long as that server stays online. As stated earlier, so long as the two servers can speak to each other, your content remains reachable from the new server even after you move.

Can you join two servers? What happens if you are very interested in two servers?

You have to choose a home server, so you can only be primarily on one. You can still see content on the other server, and its members can see yours, as long as the two servers allow connections with each other.

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know

Good Artists Copy,

Great AIs Steal.

You are invited to:

Exploring the

forces of

disruption

in entertainment,

media and tech

August 9th, 6pm-10pm
Water Mill, Hamptons, NY

Set in a modern-styled barn nestled within the farm fields of Bridgehampton, ON_Entertainment invites you to participate in Good Artists Copy, Great AIs Steal, an evening driven by deep provocations that serve to unlock new perspectives in the business of entertainment.

Entertainment executives, technology founders, artists, entrepreneurs, and investors will come together for a captivating evening of off-the-record conversations, enhanced by the smooth allure of Komos Tequila and the perfect accompaniment of caviar.

To top it all off, an exquisite dinner awaits, artfully paired with the delightful and organically farmed Avaline wines, courtesy of founders Cameron Diaz and Katherine Power. It promises to be an unforgettable gathering of visionaries, indulging in both fine company and delectable flavors.


RSVP to dharika@ondiscourse.com 

Please reach out for travel and accommodation recommendations if you need them.

WHAT IS

ON_Discourse is a new membership media company
focused on the business of technology, prioritizing
expert-driven discourse to drive perspectives.

WHO?

Designed for executives, brands, celebrities, entrepreneurs, and business leaders at the intersection of technology and entertainment

YOUR
HOSTS

Dan Gardner
Co-Founder & Exec Chair,
Code and Theory
Co-Founder, ON_Discourse
Toby Daniels
Co-Founder, ON_Discourse
Brandon Ralph
Founder, The Unquantifiable

With thanks

to our partners

Threads:

Who cares?

Dan Gardner
Co-Founder & Exec Chair,
Code and Theory
Co-Founder, ON_Discourse

All the talk about Threads would make you think it could be as potentially transformative as ChatGPT. 

The headlines read: “100 million sign-ups in 5 days.” Wow, this must be important! 

Podcasts are endlessly talking about it. Media trades can’t stop writing about it. Creative advertising agencies are already the experts on how to use it. Aren’t we lucky?

Let’s put the 100 million sign-ups into perspective. Justin Bieber still personally has more followers on Twitter than the entire Threads platform.

100 million is still a small percentage of total Instagram or even Facebook users, despite the frictionless way you can sign up. If you advertise a free service to over 2 billion people, is a 5% sign-up rate a success? Why is the media jumping all over this like it’s an indication of success or a change in behavior?

And most importantly, why would you use it if you care about text-based social networks and already have a proven platform that delivers? Is a dislike for Elon Musk the main driving force for change? And so they go to Meta?!

WHAT A WASTE

OF TIME

Is this the new Apple vs. Microsoft passion war of the ‘80s and ‘90s? Doesn’t seem like it. At least with the Apple vs. Microsoft brand war there were product attributes or design tastes that were clearly different and better.

Threads has no clear behavior-driven product experiences that are new or superior. YouTube popularized community-generated content, Facebook allowed you to connect in new ways, WhatsApp allowed you to communicate in new ways, Twitter allowed you to discover content in new ways, Snap created ephemeral communication and sharing, TikTok created a new mobile-first, faster way to share and consume content, Pinterest allowed you to be inspired in a new way, and even more recently ChatGPT allowed you to create in new ways. Threads has no clear proposition that is different.

This feels like the media is talking to themselves. It’s meaningless to most people. 

And for advertisers, it’s meaningless to them as well. At least for now.

I can reach more people with Bieber or the Super Bowl if I wanted mass advertising with no targeting.

Will it pick up steam? Maybe. I’m not making a prediction either way. But right now it’s much ado about nothing. Just something to speculate and talk about. 

Big business is not doing anything differently because Threads exists today, but the mainstream and trade media would have you think otherwise.

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know

You're invited to:

Will AI Enhance or Destroy the Business of Media?

July 28th, 2023 
Virtual Event

12:00 pm- 1:00 pm ET

As an ON_Discourse member, you are invited to join a discussion that aims to deepen our understanding of AI’s impact on the business of media.

We will be joined by Taylor Lorenz, former tech reporter for the New York Times and now a columnist at the Washington Post; Edmund Lee, trust editor for the New York Times; and Trei Brundrett, co-founder of Vox Media.

Gain insights into the future of AI-driven journalism, personalized content, disruptive distribution models, and the convergence of traditional and digital formats that set the stage for a media revolution.


RSVP to dharika@ondiscourse.com

Special

Guests

Taylor Lorenz
Tech Columnist,
Washington Post
Trei Brundrett
Co-Founder & Senior Advisor,
Vox Media
Edmund Lee
Trust Editor,
The New York Times

Why
         Attend?

We are Surrounded
by Fake-experts
_______Lacking Depth
in their Thinking

Ideas in our Industry are
_______Trapped within
Conventional
Boundaries

The People
_______in our Industry
Often Think
_______the Same

Unintended Consequences
_______in Tech Lead to
Costly Mistakes
_______in Business

Perspective
              is Everything.

ON_Discourse is a new membership media company
focused on the business of technology, prioritizing
expert-driven discourse to drive perspectives.

AI is Not a Public Utility

Anthony DeRosa
Head of Content and Product,
ON_Discourse
or
Should Bored Apes
Be in Charge of AI?

There are a number of reasons why handing de facto ownership of AI to any one or two companies is not only unfeasible but also a non-starter from both a regulatory and an antitrust perspective. Unfeasible because open-source AI models are likely to allow those with modest means to run their own AI systems. The cost of entry gets lower by the day. There’s no way to put that genie back in the bottle.

Charging someone to obtain an exorbitant “AI license” in the realm of several billion dollars seems clean and easy to execute, but how could it possibly be enforced? How would you find every AI system? What if they just ran the servers overseas?

Pushing AI innovation outside of the United States is exactly what adversaries would like to see happen. If you were to somehow find a way to limit the technology here at an infrastructure level, it would simply flourish in places with interests against our own. It would be a massive win for China if we were to abort an American-based AI innovation economy in the womb.

It’s not even clear what “controlling” AI would mean, since no one company can possibly own the means to utilize AI. It’s like trying to regulate the ability to use a calculator. You can’t put a limited number of companies in charge of “AI infrastructure” because there’s no reasonable way to limit someone’s ability to put the basic pieces in place to build one.

Thinking of AI as a public utility is incoherent. The defining characteristic of a public utility is that its delivery lines, the means of distribution, would not benefit from being built by multiple parties. Unlike utilities such as phone, power, and water, AI has no finite source and no limited number of ways to deliver it. There are many ways AI can be built for different purposes, and having a few companies do so is not a common good. Making that comparison is a misunderstanding of what AI is and how it works.

Putting government-controlled monopolies in charge of AI would create a conflict of interest for those parties, leading to, among other things, a privacy nightmare and a likely Snowden-like event in our future that reveals a vast surveillance state.

One might argue that we should at least limit large-scale AI infrastructure. As unworkable as that may seem, let’s interrogate that argument with the idea that Apple would “control” that business. Apple has a solid record of protecting consumer privacy, pushing back on law enforcement and government requests to access phone data. That trust would be shattered once the company became an extension of the U.S. government by way of its granted AI monopoly, and its market dominance would likely plummet. It would be a bad deal not only for Apple but for consumers as well.

Some of the most potentially useful forms of AI are found in private LLMs, which have more refined, domain-specific, accurate data within a smaller area of focus. Doctors, scientists, and other specialists benefit greatly from this bespoke form of AI. Putting AI in the hands of one or two large companies would stifle innovation in this area. For those reasons alone, the idea is unlikely to survive regulatory and antitrust scrutiny.

If we want to put safety around AI, there's a better and more realistic way to approach it.

The best way to deal with AI risks is through reasonable regulation. Independent researchers can document the potential risks, and laws can hold AI developers to account. It can happen at both the state and federal levels. Several states are already forming legislation based on the “AI Bill of Rights,” and other efforts are happening worldwide. Handing over control of AI to a few companies isn’t feasible, doesn’t make good business sense, and wouldn’t necessarily prevent the calamities it was intended to mitigate. Instead, we will need to be clear-eyed about the specific risks and meet them head-on, rather than expecting them to disappear because a few massive tech companies are in control of the technology.

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know

MORE
ON_AI