For real customer insights, ask fake people

Matt Chmiel
Head of Discourse

This article is part of The Intelligently Artificial Issue, which combines two big stories in consumer tech: AI and CES.

Read more from the issue:

Are businesses even asking the right AI questions?

Should we ignore the hardware?

From the editor: Before the launch of the Intelligently Artificial Issue, we invited Peter Smart, the global CXO of Fantasy, to give a demo of a new AI-powered audience research tool the company calls Synthetic Humans. This article is a distillation of the discourse from that event.  

Digital product design does not happen in a vacuum. Designers, product owners, marketing teams, and business stakeholders all have extensive conversations with customers before, during, and after designs are shipped. This process is time-consuming and expensive, and it feeds a thriving user research industry; consumer brands pay a premium for access to real people from target audience segments to record reactions and develop concepts. The vendors and design teams then plot that feedback into thousands of slide deck pages across the land. The testers get paid, the vendor gets paid, the design staff gets approval, and the designs ultimately ship.

Here’s the thing about all of this testing: what if it’s fake? What if real people are the problem?

Real people are too human to be reliable. They lie, they cut corners, and their attention wanes. They’re in it for the money, which obscures their true opinions as they are not invested in the experience. They resist change with red-hot passion before they embrace and ultimately celebrate it. They are not useful testers.

The proliferation of user research as a design process is responsible for standardized and conventional design practices online. It is hard to produce a differentiated design when we try to meet people where they say they are.

Put bluntly, real people are a waste of time and money.

Can AI fix this?

Fantasy believes that the solution to this human problem of qualitative testing is to use AI to develop a new, scalable audience research ecosystem built on synthetic humans.

A synthetic human is a digital representation of a person, built using an LLM that converts a massive amount of real survey data into a realistic, responsive persona. Think of it as a digital shell of a human cobbled together from thousands of psychographic data points.

Prompting a synthetic human should give you a realistic response. As a result, if you train a synthetic human to deliver feedback and reactions to developing ideas, you should get actionable audience data. These modern-day AI-generated avatars are much more powerful than a chatbot because they generate and sustain their own memories.

We are not talking about Alexa or Siri here. A synthetic human initiated with a preliminary dataset (age, demographics, location, income, job, and so on) can determine, without any other prompt, that “she” has two daughters, aged 5 and 3. These daughters have names and go to a certain school. Their teachers have names and each daughter has a favorite subject or cuddle toy.

If you don’t interact with this synthetic human for six months and then prompt “her” again, these daughters would still be in “her” mind, as would the teachers and the school. In the intervening time, the children might have celebrated a birthday, or entered the next grade, all aspects that get folded into the profile and leveraged for realistic responses. As a result, “her” opinions about your developing ideas can feel more reliable.
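
Fantasy has not published how Synthetic Humans is built, so the following is only a rough, illustrative sketch of the mechanic being described: seed a persona from survey-style data, then fold every generated detail back into the next prompt so the life story stays consistent over time. The `call_llm` stand-in, the `SyntheticHuman` class, and the prompt wording are all assumptions, not the vendor's implementation.

```python
from dataclasses import dataclass, field

def call_llm(prompt: str) -> str:
    """Placeholder for a chat-completion call to whichever LLM backs the tool."""
    raise NotImplementedError("wire this to your LLM provider of choice")

@dataclass
class SyntheticHuman:
    """A hypothetical persona seeded from survey-style demographic data."""
    profile: dict                                        # age, location, income, job, etc.
    memories: list[str] = field(default_factory=list)    # generated, persistent life details

    def ask(self, question: str) -> str:
        # Fold the seed profile and all accumulated memories into every prompt,
        # so details invented earlier (children's names, schools, birthdays)
        # stay consistent across sessions, even months apart.
        prompt = (
            f"You are a person with this profile: {self.profile}.\n"
            f"Facts you have previously established about your life: {self.memories}.\n"
            f"Stay consistent with those facts. Question: {question}"
        )
        answer = call_llm(prompt)
        self.memories.append(f"Q: {question} A: {answer}")  # extend the life story
        return answer

# Usage: seed with demographics, then probe for concept feedback.
jane = SyntheticHuman(profile={"age": 34, "city": "Columbus", "job": "nurse", "income": "$72k"})
# jane.ask("What do you think of a grocery app that plans meals for your kids?")
```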

Organizations can train these humans to react to developing concepts, or brainstorm new concepts outright. They can also leverage their generative memory capabilities to help organizations overcome embedded workflow obstacles, like stubborn stakeholders.

Let’s say an organization knows that “Bob” in audience development has a reputation for capricious feedback that often causes a production bottleneck. The organization can train a synthetic human to anticipate Bob’s feedback and brainstorm ways to work around the bottleneck.

Here’s another example. Imagine prompting two contradictory synthetic humans (one is aggressive and the other is conservative) to collectively brainstorm an idea over the weekend so that you can arrive on Monday to a fresh batch of thinking. These two personalities are not just coming up with ideas; they are reacting to each other’s ideas, giving feedback, rejecting suggestions, and building on top of promising sparks.
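
As a sketch of what that weekend session might look like under the hood, here is one hypothetical way to run two opposed personas against each other in turns. Again, the `call_llm` placeholder, the persona instructions, and the loop structure are assumptions for illustration, not a description of Fantasy's product.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for whatever LLM API actually powers the synthetic humans."""
    raise NotImplementedError

def weekend_brainstorm(brief: str, rounds: int = 10) -> list[str]:
    """Hypothetical sketch: two opposed personas trade ideas and critiques in turns."""
    personas = {
        "aggressive": "You are bold and risk-seeking. Push every idea further.",
        "conservative": "You are cautious and pragmatic. Flag risks, keep what works.",
    }
    transcript: list[str] = []
    last_message = f"The brief: {brief}. Propose an opening idea."
    for i in range(rounds):
        speaker = "aggressive" if i % 2 == 0 else "conservative"
        prompt = (
            f"{personas[speaker]}\n"
            f"Conversation so far: {transcript[-6:]}\n"   # keep only recent context
            f"React to this, reject or build on it, then add one new idea:\n{last_message}"
        )
        last_message = call_llm(prompt)
        transcript.append(f"{speaker}: {last_message}")
    return transcript  # the 'fresh batch of thinking' waiting on Monday morning
```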

What’s the catch?

There is always a catch. And at ON_Discourse, we lean into the questions that hide underneath the inspiring claims of innovative technology. There is no denying the potential of synthetic humans. They are a direct response to the biggest issues that plague the audience research industry today. Synthetic humans can stay focused, can offer candid feedback, and can be scaled to deliver deeper insights at a lower cost. These are good things. But there are gaps in the capabilities of these tools. Our virtual discourse on November 30 unpacked some of those gaps, and with them the limitations of synthetic humans for audience research.

Synthetic humans cannot predict the future. They are locked in the snow globe of their initial configuration. Their generated memories cannot incorporate the development of novel technology or cultural revolutions. As a result, we should not expect this kind of tool to unlock perspectives for new developments. This is notable, given that we are living in an era of rapid, unpredictable change. What humans think about specific disruptions will have to come from other sources.

Synthetic humans do not access deeply human emotional states. They do not grieve. They do not get irate. They do not get horny or goofy, and they do not long after something that is just out of reach. These powerful emotions provide the source material for some of our most inspiring technical and creative accomplishments. Our guests pressed on this point with real-world examples of powerful emotional moments. There are limits to what we can expect an avatar to create – we cannot prompt a bot to dig deeper. Synthetic humans are calibrated to maintain a level set of emotions.

The issues we explored regarding synthetic humans speak more to the role of audience research than to the capabilities of this tool. The collated test results that are plotted on slide decks represent an unintentional hand-off of creative thinking to the masses. Forward-thinking organizations are going to recognize the value of synthetic research for solving the achievable problems they face in design and product development. And they will leave the big thinking to the people who still run their businesses with their heads, their hearts, and their real human teams.

  • Hosted in partnership with Stagwell, ON_CES is taking the discourse straight to the convention floor.
  • We’re going to dedicate our unique process to unpacking and distilling the bold exhibition claims that make this the world’s biggest consumer technology convention.
  • A central theme of this issue will be the promise and implications of AI in consumer tech. What do the products on display represent for short and long term consumer trends? How do we distinguish between artificial hype and intelligent opportunities?


  • ON_CES will include the launch of The Intelligently Artificial Issue, which will provide deep analysis, plus provocation-driven discourse on the most urgent and important topics related to AI and business.

CES Can Be Fixed With DISCOURSE

Toby Daniels

Founder, ON_Discourse, former Chief Innovation Officer, Adweek, Founder and Chair, Social Media Week

Our co-founder Toby Daniels is a veteran of CES and has taken over our CES planning meetings with hot takes drawn from his ample experience of the show. We thought we should give him a pen to write a mini-confessional about the world’s biggest consumer tech conference.
—ON_D

CES is not new to me. I’ve been attending the event for over 15 years, walking the crowded halls, networking at one event after another, and seeing countless over-hyped tech unveilings. I have seen the curved TV screens, and they are still not going to be a thing. I believe in the value of this event and yet, after all of this time with it, can confidently tell you how it can be fixed.


First, the primary problem: CES is loud, confusing, crowded, and extremely lonely. I am not alone in making this diagnosis; I have had countless conversations with fellow convention-goers and tech executives who report feeling disoriented and isolated (especially during loud, crowded networking events). This problem creates the conditions that lead to the second, most commonly understood issue with CES.


The secondary problem of CES is groupthink. It is an echo chamber with familiar faces and conventional ideas wrapped in flashy tech.  In this mode, agreement is chosen over conflict, and innovation is nothing but an empty vessel of conventional ideas.

CES is often touted as “a beacon for leaders in business and technology,” where the future meets today’s realities. While this paints a picture of innovation and forward-thinking, it often masks the event’s superficial nature. CES, in all its glory, can sometimes be more about the display than the depth of conversation. We can change that.

True perspective, I’ve learned, comes from heated debates, uncomfortable questions, and the willingness to listen to opposing viewpoints.


This year we are bringing discourse and community to CES. 

The discipline of discourse is a forcing function that enables us to provoke, argue, challenge, and listen – not just to reply, but to understand and consider. It’s through these authentic engagements that we can break free from the cycle of redundancy and uncover truly groundbreaking ideas and new perspectives.

At CES this year the ON_Discourse team will provide an experience for its members that will serve as the singular reason to attend the show in January. We will deliver this in three ways:


Curation:

  • A guided experience, including a kick-off briefing event, a discourse-driven tour of the convention floor, and invitations to a carefully curated list of events.

Connection:

  • Members will be organized into “Teams”, small groups who attend sessions together, join dinners, attend parties, and experience the event as a single unit.

Conversation:

  • The discipline of discourse is at the heart of everything we do. When applied to conversations at CES, we ensure that we follow the three pillars: Provoke, Listen, Change.


As we move towards CES 2024, I feel a renewed sense of purpose. Our approach is different – we’re not just there to observe; we’re there to engage, to disrupt the status quo of conversations. We’re setting up to ensure our members experience CES not as a showcase of gadgets, but as a forum of intelligent, meaningful dialogue.

I am hopeful that with our concerted effort, this CES will mark a turning point: a shift from superficial tech displays to rich, meaningful exchanges of ideas. Our next Issue, “Intelligently Artificial,” captures this essence perfectly. It’s not just about the technology; it’s about the intelligence behind it – the thoughts, the debates, the discourse.

Toby Daniels

Co-Founder, ON_Discourse



Discover More Discourse

USA will never win the WORLD CUP (unless the system changes)

Dennis Crowley

Technology entrepreneur working at the intersection of the real world & digital world. His work focuses on creating things that make everyday life feel a little more fun and playful.

What’s holding the United States back from becoming a soccer superpower like the rest of the world?

A few years ago, frustrated at seeing the US Men’s National Team (USMNT) struggle to qualify for the World Cup, I remember thinking to myself, “what can we, as fans, do to make sure the USMNT wins a World Cup in our lifetime?”

My experiences as the founder and chairman of Stockade FC (a semi-pro team in the Hudson Valley) and the co-founder of Street FC (“SoulCycle, but for pickup soccer”) have given me front-row seats to the shortcomings of our nation’s approach to the beautiful game.

First and foremost, here in the US, Major League Soccer (MLS) operates as a closed system. Teams pay exorbitant fees to join a top-flight league that never threatens relegation, while clubs in lower-level leagues are denied the opportunity for promotion, regardless of their performance on the field. This starkly contrasts with the open, merit-based systems seen in Europe and almost everywhere else in the world, which drives competition, growth, and investment (not to mention excitement and drama for fans worldwide).


The lack of a merit-based promotion and relegation system in the US stifles the hyper-competitive environment that is crucial for developing both top-tier talent and compelling narratives. This has led to a US soccer ecosystem that hinders investment in both club infrastructure and youth development at the lower levels, which is vital for nurturing homegrown talent and growing fans of the game.

Why does this matter? In the absence of a hyper-competitive domestic league, we are failing to produce world-class players and attract the best talent from abroad while they’re still in their prime. It’s an open secret that top American players flee to European leagues as soon as they hit their teens, while the best players in the world look to wind down their careers in the MLS. The closed nature of our leagues has created a comfortable, risk-averse culture that is the antithesis to the spirit of the game worldwide.

Creating a European-style, open-league system in the US that benefits owners, fans, and players alike would be challenging, but not impossible. We would need a vision, a plan, and a timeline from United States Soccer Federation (USSF) leadership. Unfortunately, there seems to be a reluctance to formally lay out such a plan, as it would disrupt the status quo (specifically, MLS owners who invested millions in their clubs, but who never “signed up” for relegation). In short, MLS investments have taken priority over creating a cohesive US soccer ecosystem with healthy lower-level leagues.

Meanwhile, in Europe, football folklore is fueled by the possibility of any club from any league achieving a meteoric rise through the ranks. These stories captivate fans and embody the very essence of sport—hope, ambition, and the reward for hard work. Unfortunately, the structure of the US Soccer ecosystem denies this opportunity and prevents these Cinderella-esque stories that all sports fans love (see: NCAA March Madness).


The current US system offers little incentive for soccer entrepreneurs to invest in the lower levels of domestic soccer. With no “pot of gold” for club owners to chase in the US (such as revenue sharing from sponsorships and broadcast rights that come with promotion), the financial prospects for investments in lower-level clubs are bleak compared to the potential return on investments in foreign clubs, where even an obscure lower-level club can rise through the ranks and multiply in value. This is why you see the Ryan Reynoldses of the world investing in lower-level soccer infrastructure abroad (in open systems), but not here in the US (our closed system).

I founded Stockade FC after asking myself the question “what can we do to help the USMNT win a World Cup in our lifetime?” My answer: “Support local soccer.” For me, this meant putting my entrepreneurial skills to work in creating a club from scratch in the Hudson Valley of New York and creating a blueprint for other clubs inspired to do the same. This has certainly made an impact – creating clubs, players, fans, inspiring youth, etc. – but not enough to move the needle on a national scale.



For US Soccer to evolve, there are a dozen changes that need to be made – from creating an open system of promotion and relegation to making youth soccer more affordable, to making soccer more accessible in cities by converting basketball courts into dual-sport courts (spoiler: put a goal under that net!), to elevating the US Open Cup to the same level as the NCAA basketball tournament.

What’s next for soccer in the US? As much as I would love to see the change start from the bottom up (with the lower-level leagues self-organizing), I really think the most impactful change will come from a well-articulated vision of how to turn our closed system into an open system from the new leadership at US Soccer. The timing is right – the USSF has a new CEO and the World Cup will be hosted across 11 American cities in 2026. There is a palpable buzz around soccer in the US (thanks to everyone from Lionel Messi to Ted Lasso), but only if we channel this energy into transformative action can we hope to create a domestic soccer ecosystem as dynamic and exciting as those that thrill fans across the globe.

Can insurgent leagues capture market share from the NFL?

This topic kept coming up in our various events: the NFL is God. And God is immune to all the forces that are challenging the other incumbent leagues like the NBA and MLB. What makes the NFL so powerful? Is it a better TV experience? Is it a better sport? The rest of the world would argue against that. (And they probably want the word football back).


The NFL is built on initial scarcity. It started with two games broadcast one day a week in the autumn. Then came Monday Night Football, then Sunday Night Football, then Thursday Night Football. Now we have Sunday Ticket and the Red Zone channel. All of that football has fed fantasy football leagues and online gambling, and all of that engagement is padded with endless expert analysis that fills in the gaps between the snaps. Is this ecosystem too strong to be disrupted?

This question unlocked a lot of thinking.

What does a league need to thrive? How can an old sport evolve and find new audiences? Can a team of insurgent leagues take down the mighty NFL?

Toby Daniels
Co-Founder, ON_Discourse

Would YOU Let Netflix Read Your Mind?

Search and discovery will be replaced by AI, which will anticipate your needs and provide hyper-personalized entertainment experiences.

Humans have a thirst for convenience and personalization. We want quick, tailored content across many platforms, such as streaming music, TV, and film. We hate choice and being made to think. In the future, our entertainment preferences will not just be catered to but anticipated by AI, rendering search, discovery, and even the prompting of AI chatbots obsolete.

TL;DR:

  • The future of entertainment may be defined by Anticipation On-Demand Entertainment, where AI will anticipate and deliver personalized content based on consumer behavior and preferences.
  • AI’s ability to process massive amounts of data in real time allows it to learn from and predict user behavior, taking personalization to new levels.
  • In this future, AI could automatically select music or shows aligning with the user’s current mood, conversations, or global trends, without any manual intervention.
  • This highly tailored entertainment experience could transform the way we engage with platforms, making entertainment consumption more efficient and immersive.
  • Anticipatory AI might also influence content creation, with shows created on the spot based on the user’s unmet needs and desires.
  • With continuous improvements in machine learning algorithms and UI design, the accuracy of such anticipation is expected to increase.
  • This could potentially lead to the unbundling of content, raising questions about the need for platforms like Netflix or Hulu.

We call this Anticipation On-Demand Entertainment. Catchy, right?


AI’s ascent in entertainment personalization is not surprising, considering its inherent ability to process vast data volumes and generate real-time insights. AI uses sophisticated algorithms to analyze consumer behavior patterns, learn from them, and then make data-driven predictions about what a user might like next. We’ve seen the early stages in algorithms that suggest songs, films, or series based on the user’s past preferences. However, the concept of Anticipation On-Demand Entertainment takes it to a whole new level, and this will be just the beginning of something even more profound.

Anticipation On-Demand imagines a future where AI knows what we want to consume before we even realize it ourselves. Imagine coming home and there’s a new show in your queue that doesn’t just align with your taste and preferences but has been generated according to your current mental state or even your subconscious.

The same can be applied to music. Based on your historical listening patterns, the AI might anticipate that you need a lively playlist to kickstart your Monday morning or soothing instrumental music to help you focus on work. A potential scenario is your AI seamlessly changing the music in the background as you transition from a morning jog to a work-from-home setup, catering to your fluctuating moods and activities without requiring manual intervention.
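
To make “behaviorally driven prompting” concrete, here is a minimal, purely hypothetical sketch of the idea: contextual signals become the prompt, so the user never types anything. The signal names, the `anticipate` function, and the `call_llm` stand-in are all assumptions for illustration, not any vendor’s API.

```python
from datetime import datetime

def call_llm(prompt: str) -> str:
    """Stand-in for a generative model that returns a playlist or show concept."""
    raise NotImplementedError

def anticipate(signals: dict) -> str:
    """Hypothetical anticipation loop: behavioral signals become the prompt."""
    prompt = (
        "Generate a short content recommendation for a listener who:\n"
        f"- local time is {signals['time']}\n"
        f"- just finished activity: {signals['activity']}\n"
        f"- recent listening history: {signals['history']}\n"
        f"- inferred mood: {signals['mood']}\n"
        "Return one playlist or show idea, no questions asked."
    )
    return call_llm(prompt)

# e.g. the jog-to-desk transition described above, with no manual intervention:
morning = {
    "time": datetime.now().strftime("%A %H:%M"),
    "activity": "morning jog ended 5 minutes ago",
    "history": ["lo-fi focus mixes", "indie rock"],
    "mood": "energized, settling in to work",
}
# anticipate(morning)  -> perhaps a calm instrumental focus playlist
```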

This level of automated personalization will change our engagement with entertainment platforms, making the experience more immersive and effortless. Users no longer have to spend time searching or discovering new content; instead, the AI does it for them. The user experience becomes more streamlined, and entertainment consumption becomes more efficient. The advantage is twofold: consumers receive a hyper-personalized experience, while entertainment providers increase customer satisfaction and engagement.

But why will we even need these providers? Could this result in another unbundling of content as the need for Netflix or Hulu becomes obsolete? 

Moreover, with the anticipated improvements in machine learning algorithms and user interface design, the accuracy and timeliness of such anticipation will only increase. The AI will continuously learn from our changing preferences, making real-time adjustments to deliver content that aligns with our current mood, recent conversations, or global trends.

We need to think beyond curation, though: anticipatory AI will also work its way into the creation process. Your shows will be created for you, on demand and in real time. The story plays out according to your unmet needs and desires. The next scene you watch hasn’t even been created yet.

What does this mean for format and length? What is an episode or a series even in this new reality? Why would a show need to end?  

The possibilities offered by Anticipation On-Demand Entertainment are exciting. AI, with its ability to analyze, learn, and predict, can bring unprecedented personalization to our entertainment experiences. It can transform ‘on-demand’ from simply ‘what we want’ to ‘what we want before we know we want it.’ It is the dawn of a new era where AI doesn’t just respond to our demands but anticipates them, making our digital experiences smoother, more efficient, and truly personalized.

TL;DR takeaways:

  • AI’s evolution may lead to Anticipation On-Demand Entertainment, where your preferences are predicted and met even before you realize them.
  • This advanced level of personalization could redefine our interaction with entertainment platforms, making the user experience more immersive and effortless.
  • Anticipatory AI could also influence content creation, leading to shows tailored to the viewer’s current mood, desires, or subconscious thoughts.
  • As machine learning algorithms and UI design continue to improve, anticipatory AI’s precision and relevance will increase.
  • This evolution might lead to the unbundling of content, questioning the need for existing entertainment platforms like Netflix or Hulu.

OR:

  • While Anticipation On-Demand Entertainment seems revolutionary, it raises serious concerns about privacy, as AI would need to access extensive personal data to make accurate predictions.
  • Overreliance on AI might eliminate the joy of discovery, reducing the opportunity for users to explore diverse content beyond their regular preferences.
  • The idea of AI creating shows in real-time raises questions about the quality and creativity of such content compared to human-produced content.
  • Not all users may appreciate such a high level of personalization, as it could make their experiences feel manipulated or artificial.
  • The argument assumes continuous improvements in AI algorithms and UI designs. However, technological progress might face obstacles, slowing the pace of such evolution.
  • The possibility of unbundling content also brings up issues related to copyright laws and how artists would be compensated for their work.

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know

Event Preview:

Toby Daniels
Founder, ON_Discourse

On Wednesday, August 9, ON_Discourse hosts ‘Good Artists Copy, Great AIs Steal,’ a private dinner for Premier Members and specially invited guests in the entertainment, media, and tech industries.

We chose entertainment as our subject because we are at a critical juncture. 

AI’s potential impact in entertainment is just the tip of the spear. It has swept through almost every industry, and this year we have seen these new, powerful tools begin to redefine how we think about creativity, ownership, attribution, and distribution. Even the economics of the business are being completely reexamined.

At ON_Discourse, we believe new perspectives can only be found through discourse and disagreement. Our mission is to build a new kind of media company founded on this idea, focused on providing an exceptional level of value to our members through perspective, not just content, and through relationships, not just connections.

In the coming days we will publish a series of articles that build upon the Good Artists Copy, Great AIs Steal theme and explore the topics from a number of different perspectives. We will also announce future member events in the coming weeks.

Do You Even Know What It Means To Be Creative?

Dan Gardner
Co-Founder & Exec Chair, Code and Theory
Co-Founder, ON_Discourse

Creativity isn’t the skill itself; it’s the thinking that happens before the doing.

I get it: AI makes a creative person feel uncomfortable. Facing this intersection of creativity and artificial intelligence might cause a ripple of discomfort, particularly if you’re someone who has dedicated their life to honing creative abilities. It’s understandably disconcerting to contemplate an AI system challenging your unique capacity for creativity – a quality you’ve always attributed to your personal skillset.


AI has made me think a lot about creativity recently. I co-founded and help operate a creative agency that employs over a thousand people, all of whom identify as “creative”. Throughout my tenure as a leader in this creative agency, I am often asked, “When did you first see yourself as a creative?” The question always strikes me as odd, as I believe that I have been innately creative all my life. From a young age, I indulged in painting and drawing, eventually developing a fascination with photography – all pursuits traditionally deemed “creative”. When I reflect on those early days of education, and even what transpires with my children’s schools today, I recognize this fundamental premise.

Children who display skills in areas like drawing, painting, writing, or performing are typically labeled as “creative”. Schools, given sufficient resources, will nurture these abilities. Conversely, children who lack such skills are deemed “not creative” and steered towards the acquisition of more practical “non-creative” skills. 

This dichotomy perpetuates itself in the professional world, with creative agencies or even in entertainment industries distinctly separating the “creatives” from the rest. So, it’s understandable why creatives may fear new technology: their entire self-concept, built on their unique skills, feels threatened.  

In the context of AI, many have started to express apprehension, suggesting that this technology could undermine creativity. I hear many arguments against its use. Even the writers’ union strike includes demands to restrict the technology’s use as a form of self-protection.

It is my belief that creativity is a skill, but not a skill of the practitioner; it is a skill of thinking. AI becomes a tool that enables thinking in new and profound ways. Just as digital photography didn’t kill the discipline of photography (you can take photos on your iPhone, dodge and burn in Photoshop, or swap the darkroom for Instagram filters), these new AI tools enable new kinds of creativity. But it is a creativity only a few people will be lucky enough to participate in. It will fundamentally change the way we think about early education and the role of creativity in business and entertainment.

This is the new reality that AI will force us to come to terms with: not everyone is as creative as they thought. What people deemed creative, the skill of doing something, is becoming a commodity. Creativity will go from the 1% of people who think they are creative to a new reality of maybe just 0.01% of people who actually are creative.

It is just like traditional artists: only a very select few make successful careers from making art. There is no entitlement to the career. And just as many young adults graduate from art schools every year yet sadly cannot make a career out of their skill set, the same will be true of legacy degrees.

Andy Warhol was famous for having an entire factory executing his concepts, but he was the brain behind each unique idea. Sadly, we can’t all be Andy Warhol.  And the new factory is AI, not people.

Demanding that the creative industries limit the use of AI is misguided. Not only is it virtually impossible to contain this technological advancement, but it’s also shortsighted. It’s akin to bookkeepers resisting QuickBooks due to fear of obsolescence, or coal miners protesting clean energy innovations. Imagine if we had halted the industrial revolution due to fears about job losses.

However, there is a counterargument. Creatives don’t protest against the actual innovation of AI; their objection lies in the idea that AI is a thief. It’s not about the automation of the tooling but the data AI steals from.

But as the saying goes…

Good Artists Copy, Great AIs Steal.

The advent of AI has shaken up traditional notions of creativity and its value. Many fear that AI is essentially “stealing” creativity, a unique attribute that should be fairly compensated. The belief is that if you use a creative output to generate something new the original idea needs to be fairly compensated. After all, if someone else profits from your original idea, shouldn’t you get a share? This seems reasonable and hard to dispute. 

But what makes creativity unique and therefore valuable? Picasso offered a profound answer when questioned about the worth of his art.

When told, “But You Did That in Thirty Seconds.”

Picasso replied, “No, It Has Taken Me Forty Years To Do That.”

This implies that the value lies in the journey, not just the end product. 

However, Picasso also famously said,

Good Artists Copy, Great Artists Steal

He was implying that the best creativity actually is just old creative ideas revived in a new form. Does this mean no creation is entirely original but rather a derivative of something else? If so, what does “derivative” mean in the context of creativity and human cognition?

Does the person who creates a masterpiece in what seems like thirty seconds, with each stroke backed by their life’s experience, get full ownership of that new thing? In the age of AI, that’s just a nice little anecdote. Now we’ve got machines, bereft of any life experience, churning out ‘creative’ output at the speed of light.

For centuries, humans have patted themselves on the back for their ability to take historical ideas, toss them around, and present them as something ‘unique’ and ‘original’. Conveniently, we ignore where these ideas came from, lauding the result as claimable and unique. 

Consider Quentin Tarantino, who is hailed as a creative genius. He openly acknowledges the influences that shape his work. His creative process involves drawing from the past to mold his future ideas. Should he have paid the Film Noir greats some royalties because they influenced his style? If he entered a generative AI prompt instead, “in the style of Film Noir,” does all of a sudden that require a different payment for his creative influence?

Perhaps genuine creativity involves reshaping and recontextualizing historical ideas into something unrecognizable from the source. The human mind naturally (and sometimes consciously) does this. But as the origins are typically hidden, the end product is labeled as unique, inventive, original, and therefore, claimable. So, the creator retains a perpetual claim to any benefits derived from it.

But what happens when AI mimics this process of reshuffling and reconstructing ideas to produce something new? If the process is algorithmic rather than instinctual, does it strip the output of its creativity because its origins are more apparent? Is it not considered theft when a human does it, but when an AI does it, it is?

Wait… but…


Good Artists Copy, Great Artists AND AIs Steal.

Could it be that Picasso was wrong? Or perhaps our understanding of creativity and ownership needs reconsideration?

In the end, the great AI invasion has forced us to reassess our holier-than-thou understanding of creativity and ownership. It’s high time we stopped hiding behind romantic notions and accepted that both human imagination and AI innovation are here to co-exist, whether we like it or not.

And this old romantic notion of creativity and ownership assumes that entertainment in the future will be like today. A model where someone thinks of an idea and the idea is executed and then distributed to the masses. Maybe tomorrow, it’s the viewer that dictates what will be created, not the creator. Is this the end of the creator economy?

This is potentially the advent of a new form of entertainment…


The Future of Entertainment: Do we even know what we’re asking for?

It’s Anticipatory and Ephemeral

What happens if only one prompt ever has to be said: “Give me more value”

Remember when being the first to come up with an idea was the big deal? Everyone hailing the “genius” who thought of something new? We used to think that creativity was something valuable and special, the thing that kept us entertained. 

For generations, we’ve held creativity on this high pedestal. We’ve marveled at the genius of the innovators, the artists, the disruptors. We’ve believed that creativity – the ability to generate something truly original and new – is a uniquely human trait. But let’s be real: do we even need that shiny, fresh-out-of-the-box creativity anymore? Or are we just craving the illusion of something “new and improved” to keep us entertained?

In the world of Web 2.0, personalization means leveraging location, device, and intent to tailor an experience. You could see it in Netflix’s recommendations, Amazon’s suggested products, or targeted ads. Of course, it’s all backed by those good old algorithmic engines that some may argue lack creativity. They’re just doing their job, providing value based on perceived needs.

But now, we’re venturing into a whole new territory. A world where we can prompt a system and voila – out comes something fresh, something creative. But what if these prompts weren’t manually entered? What if they were behaviorally driven, shifting and adapting to our ever-changing needs and desires? What if generative AI could whip up something personalized on the fly? It would be more than personalized, it’d be anticipatory.

Can our behaviors make us creative? Or are we just wandering in an ever-evolving maze of our own creations, no longer needing to come up with anything new? Or are we basically there now?

Imagine taking a scriptwriting class. You’d quickly become acquainted with the familiar pattern found in nearly every movie: protagonists, antagonists, story arcs, resolutions, and so forth. This formula can be identified in almost any story.

Consider Disney’s method of reusing animation, resulting in various movies with shared scenes, such as ‘Winnie the Pooh’ and ‘Jungle Book.’ Reflect on the choice to reshoot Samuel L. Jackson’s line in ‘Snakes on a Plane’ because of its anticipated impact on audiences. Many of the top hits on Fandango currently are sequels, remakes, or stories spun from established franchises. Or look at how two recent TV hits, ‘Yellowstone’ and ‘Succession,’ weave strikingly similar tales, tailored to different demographics through congruent themes.

So, what are we heading towards? A future where entertainment is tailored so precisely that it’s practically reading our minds and serving up ephemeral delights?

What does that mean for our requests in the future? Would we be reduced to uttering one prompt: “Give me more value”? 

With AI and LLMs, we’re entering an era of ‘machine creativity’. These systems can process and analyze massive amounts of data, draw from a vast pool of existing content, and generate responses tailored to individual user needs. They don’t just mimic human instinct; they go a step further by making data-driven decisions that can predict and cater to our needs with astounding accuracy.

Is there room for disruptive human creativity in this new landscape? Perhaps. But as LLMs continue to improve and evolve, these instances will become increasingly rare, and more importantly, they may not be necessary. After all, if a machine can fulfill our needs and desires based on our own behavior and preferences, do we need the occasional disruptive idea?

As we stand on the cusp of this new era, we may need to reassess our long-held notions about creativity. Is creativity really about originality, or is it about delivering value in the most effective and satisfying way possible? Is our pursuit of creativity overrated, particularly when AI systems are capable of delivering more value with greater efficiency?

Perhaps, in the end, we’ll find that ‘Give me more value’ is the most creative prompt we could ever ask for. It’s a directive that has the potential to render traditional creativity redundant, replacing it with a more accurate, efficient, and user-centric approach to satisfaction and fulfillment. And who knows? We may find that this approach fulfills us in ways we never thought possible.

As we navigate this transition, I’m not sure if any of this will be true, but one thing remains certain: the paradigms of entertainment and creativity are shifting. How content is crafted, delivered, and consumed might be starkly different in the future than it is today.

 

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know

HOW ARTISTS TURN AI INTO GOLD

Instead of fighting AI, turn it into an artist’s marketplace.

Stephen Parker
Creative Director at Waymark, a Detroit-based video creation company

Generative AI has sent a shockwave through Hollywood as creatives and studios debate its impact on the entertainment industry.

For studio owners and AI evangelists, generative AI has the potential to be an industry-disrupting tool for streamlining creative processes and getting projects out the door faster than ever previously possible. Many creatives and artists, however, argue AI technologies pose an existential threat to their livelihoods. 

AI will have far-reaching impacts on creative markets as AI systems like DALL-E 2 and Midjourney refine their products and advance their models, laying the groundwork for new AI systems and applications we’ve yet to fully realize.

While none of us has a clear picture of what this will look like for the entertainment industry, it’s evident a new model for protecting IP is needed to ensure creators can continue benefiting from their human output alongside this emerging innovation.

AI licensing could offer a solution.

Generative AI in entertainment is an industry-defining debate with far-reaching implications that could impact everything from contract negotiations to the way we interpret original art in the future.

Already, there are several lawsuits underway alleging that generative AI systems were unlawfully trained using various authors’ work without their permission. Depending on how generative AI systems are regulated in the U.S. or abroad, we may have a better idea of whether those lawsuits hold water.

In the meantime, creatives have more power than they may realize in the current Wild West of emerging generative AI technologies — perhaps even through a platform that supports continued or alternate world-building for their existing stories and projects.

An untapped wealth of IP

Plenty of creatives already allow their personalities, concepts, and writing to be licensed through brands and ad partnerships (Ryan Reynolds is one such ubiquitous celebrity who excels at image marketing).

These deals are executed under very specific contractual terms beneficial to both parties, and most seasoned actors have a considerable degree of say about what they do on camera. So why should it work any differently with AI?

The debate over generative AI’s use in entertainment production arrives at a poignant moment for the industry; the Writers Guild of America strike began in May and a strike by the actors’ union SAG-AFTRA followed shortly thereafter in July.

And make no mistake: Studios are already AI future-focused. SAG-AFTRA maintains that studios wish to use actor likenesses at will and without compensating those individuals (the Alliance of Motion Picture and Television Producers denies this characterization of its AI proposal). 

I’m not pitching a dystopian “Joan Is Awful” scenario wherein artists are exploited and their digital avatars can be used at will by content-greedy streamers or studios, as SAG-AFTRA has alleged was proposed by AMPTP.

But should a screenwriter, actor, or director be looking for new opportunities to capitalize on their oeuvre, they’d be wise to consider an AI framework for licensing. 

I’m proposing a reasonably straightforward concept: a hybrid creative-licensing platform that allows artists to license their likeness, aesthetic, or concepts for use as input for outside projects, creating an ethical — and importantly, lucrative — funnel by which all parties benefit and no copyright infringement or plagiarism of existing works occurs.

Part of the reason artists are so miffed with systems like ChatGPT and Midjourney is that they are alleged to have used work and ideas without the express permission of the original authors.

But imagine that an author did allow their ideas to be used as part of a new and entirely unique AI output, with terms protecting the degree of use and how the output can be shared, for how long, and with explicit distribution royalties in place to compensate original authors whose work was used as a prompt. 
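To make the shape of such an agreement concrete, here is a minimal sketch of what those terms might look like as structured data. The field names and values are purely illustrative assumptions; no such standard exists today.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical, machine-readable license terms for AI use of a creator's work.
# Field names and values are illustrative assumptions, not an existing standard.

@dataclass
class AITrainingLicense:
    licensor: str                # the original author or rights holder
    licensed_material: str       # the work, likeness, or concept being licensed
    permitted_uses: list[str]    # e.g. ["prompt input", "fine-tuning"]
    distribution_scope: str      # e.g. "fan community only" or "commercial"
    expires: date                # how long the license runs
    royalty_rate: float          # share of revenue owed on distributed output

# Example agreement a platform might record for a single licensing deal.
example = AITrainingLicense(
    licensor="Original Author",
    licensed_material="Characters and world-building concepts",
    permitted_uses=["prompt input"],
    distribution_scope="fan community only",
    expires=date(2026, 1, 1),
    royalty_rate=0.05,
)
```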

One way that this could potentially manifest is through a platform that allows brands, studios, actors, and writers to license their creative output to niche communities like fandoms.

With years between installments of games, books, TV shows, and film series, such a platform would find tremendous success with communities of artists and creatives interested in worldbuilding for purposes outside of commercial marketing.

Harry Potter fans, for instance, would almost certainly pay to expand narratives for their favorite characters, creating entire backstories, sub-narratives, and alternative timelines.

And should such a platform bill itself as a creative community rather than one meant to generate monetizable output, the licensing would be far more appealing to artists looking to capitalize on specific fan followings.

Make no mistake, such a model could be incredibly profitable for all involved parties, including media companies, established industry heavyweights, and emerging creators with significant social followings — specifically where it relates to world-building within existing franchising.

An AI licensing platform could easily filter for specific motifs or genres, and sophisticated terms could provide expansive protections to any creative willing to license their work or likeness through the service.

I envision a platform capable of filtering by specific tags, allowing paying subscribers access to rich data sets from their favorite artists and creatives.  

There are several benefits to third-party licensing that could even the playing field for the entertainment industry at large. For one, actors, screenwriters, and directors could set their own terms and decide how, and for what, their work is legally permitted to be used as input.

Studios with over a century of IP collecting dust, like Disney, could license that material to allow a new generation of creatives to make it fresh again. Or everyday franchise enthusiasts could use the service to generate a final season of a canceled show they adored. The possibilities are virtually limitless.

Why pay for what’s already free?

While conversations about how we navigate our AI future are of paramount importance, it’s also true that generative AI systems are still very much in their infancy.

You have to be fairly technically adept today to create something of high quality with the current suite of widely available generative AI tools. Even short-form projects developed with AI technologies require a lot of work, a lot of training, and a lot of frameworks both technical and conceptual. 

AI technology for high-caliber moviemaking is simply not at a plug-and-play stage. And the easier and more accessible the tool becomes, the greater its potential to generate money.

Everybody has a niche, and when they’re able to dial into that niche, that’s when the slot machine starts to ping. 

If you’re wondering why people wouldn’t just use a DALL-E 2-type model to do this on their own, that’s easy: for artists and creators interested in working within a specific artistic motif, the output from a platform licensed directly from the creators would be light-years better and more specific.

If you’re licensing from the creators themselves, you’re going to get a richer body of prompts because you won’t have to sift through a randomized heap of garbage pulled from the internet — which can (and does!) lead to very bizarre interpretations of written prompts by generative AI models.  

Generative AI is still very much an emerging and primitive technology where video is involved, and it’s unlikely to soon replace human beings entirely in the ideation, development, and output of fully realized creative works (e.g., short films or scripts) without a significant degree of human input.

We’ve also yet to see how copyright protections will work once policymakers begin regulating AI. A number of policy proposals may require generative AI companies to disclose how their models were trained, again raising questions about copyright infringement and how creative works can or cannot be used in the generation of AI output.

At the same time, a creative-licensing platform like the one I’ve proposed may offer a solution from which everyone benefits. Most importantly, it gives artists and creators back creative control over their intellectual property while they laugh all the way to the bank.

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know

A Brush With AI
The Copyright Fight
For Digital Creativity

The U.S. Copyright Office
needs to adapt to the AI Future.

Generative AI presents a tremendous opportunity to unlock human creativity in ways that never could have been imagined when the concept of copyright emerged.

But the U.S. copyright regime has become an obstacle to this new era of innovation—and not because of what the law actually says.

The U.S. Copyright Office is contorting decades of precedent to impose arbitrary rules on when and how creators seeking copyright protections can use AI tools in their work. If the office doesn’t get with the times, the consequences won’t just be limited to artists like me.

In 2022, my painting Théâtre d’Opéra Spatial won a Colorado State Fair award for digital art. I decided to register it with the USCO, even though I knew the office might shoot down the request because I produced it using Midjourney AI.

Jason M. Allen

President and CEO of Art Incarnate, a company specializing in luxury AI fine art prints. His creation “Théâtre d’Opéra Spatial” won first place in a Fine Art competition, sparking a global controversy about AI in art.

That’s because the painting had already attracted massive controversy. Even though it was always labeled as an AI-assisted artwork, and the judges confirmed it was a fair decision, critics accused me of lying, art theft, and worse.

What I didn’t expect was for the USCO to write off the entire submission with a dismissive, one-page response: “We have decided that we cannot register this copyright claim because the deposit does not contain any human authorship.” It was hard to believe the nation’s copyright authority would issue a decision on such a consequential issue—who owns the rights to AI art?—with so little substance.

It wasn’t until Tamara Pester Schklar, my intellectual property lawyer, and I pushed back in a request for reconsideration that we received any elaboration. The USCO did admit one mistake. The extensive editing and modification I did to the original AI-generated work was in fact eligible for copyright registration. The USCO also indirectly confirmed that its decision was not based on a finding that AI-generated art infringes on others’ copyrights.

The AI cannot read my mind and steal my ideas for itself.

But it doubled down on denying a copyright to the entire work with an argument that, if accepted, would be disastrous: not only am I not the author of Théâtre d’Opéra Spatial, it is “clear that it was Midjourney—not Mr. Allen—that produced this image.”

That’s just not true. The painting wasn’t spit out by Midjourney at random, but reflected my artistic process of iterative testing, conceptualization, and refining outputs until I found an image that translated my vision into reality. But more importantly, each of those steps was the result of careful and deliberate instructions entered by a human. The AI cannot read my mind and steal my ideas for itself.

Courts have consistently updated their interpretations of copyright to reflect new types of technological innovation (and to challenge biases against them). For example, in 1884, the Supreme Court ruled against a company which had distributed unauthorized lithographs of photographer Napoleon Sarony’s portrait of Oscar Wilde, shooting down the firm’s claim that photography is a mere mechanical process not involving an author.


In other words, the court found Sarony’s camera was a tool through which the photographer translated his mental conception into an artistic work. AI generators like Midjourney might rely on complicated algorithms instead of light focused through a lens onto a film coated in light-sensitive chemicals, but it’s a difference of degree, not kind.

There is no incomprehensible magic going on; Midjourney’s servers don’t have a consciousness of their own. They run diffusion models calibrated using a training set of millions of other images. Diffusion models use deep learning to deconstruct data from the training set, add truly random Gaussian noise, and then attempt to reconstruct the original data. The end result is a model tuned to generate entirely original images.
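To see how unmagical this is in practice, here is a minimal sketch of the forward “noising” step that diffusion training is built around, assuming a simplified linear blend between image and noise. It is illustrative only; production systems like Midjourney use far larger networks and more elaborate noise schedules.

```python
import numpy as np

def add_gaussian_noise(image: np.ndarray, t: float) -> np.ndarray:
    """Blend an image with Gaussian noise; t=0 is the clean image, t=1 is pure noise."""
    noise = np.random.randn(*image.shape)
    return (1.0 - t) * image + t * noise

# Training teaches a network to predict (and remove) that noise step by step.
# Generation runs the process in reverse: start from pure noise and denoise
# toward an image consistent with the text prompt.
```

The point is simply that every step is ordinary computation plus sampled randomness; no consciousness is required.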

As we told the USCO in our second request for reconsideration, tools like Midjourney are interactive systems that by definition require humans to code, train, and run them. They’re cameras with extra steps. To put it another way, the USCO is effectively arguing the camera is now so complicated it’s also the photographer!

U.S. copyright law is clear that non-humans cannot author or own the rights to art in any meaningful legal sense. Monkeys are indisputably more self-aware than a large language model. Yet when one picks up a Kodak in the jungle and clicks a button, it doesn’t get the rights to any accidental selfie.

Even the Copyright Office’s decision that my modifications to the raw image in Photoshop are copyrightable as a derivative work isn’t free of haphazard reasoning that should alarm any artist. While the USCO conceded human authorship of the edits, it found the “appreciable amount of AI-generated material” in Théâtre d’Opéra Spatial required limiting the copyright to “exclude the AI-generated components of the work.”

But it hasn’t clarified what constitutes an “appreciable” degree of AI content, even though it seems to draw distinctions between an author’s creative input and the tools they use.

No one can make those decisions without interrogating a creator’s methods and processes. Copyright Act precedent clearly limits a court’s or reviewing agency’s analysis to perception of the final product, not how or why it was designed.

Théâtre d’Opéra Spatial – Jason M. Allen / Midjourney – The Colorado State Fair 2022

Do we really want the office applying subjective determinations as to what parts of the artistic process are worthy of protection? Consider the rise of AI-assisted features like Photoshop’s generative fill, which uses machine learning to create and modify elements of an image, and the mess that would result if the Copyright Office continued to apply the same capricious standards.

This isn’t theoretical. On similar grounds of non-human authorship, the USCO denied that I could copyright versions of the painting upscaled via Gigapixel AI, a tool that simply enhances pre-existing details in an image without introducing any original elements. Similar tools, like Photoshop filters, are already in widespread use. Any remaining contrast between non-AI and AI-powered features of image-editing programs will inevitably disappear as their developers introduce more powerful features.

This fight is not about me, and it’s not just about art. Already, my company Art Incarnate has been forced to come up with creative solutions for releasing fine art prints, like photographing the works instead so those images can be protected as derivative works. Other workarounds, like open source licensing, can only do so much.


Businesses and entrepreneurs are rapidly adopting generative AI in other fields ranging from entertainment and media to publishing. At some point, though, the novelty will wear off and they will be forced to confront practical commercial challenges. Without assurances that they will be able to safeguard their intellectual property, or that they won’t be forced to jump through IP loopholes to prevent competitors from copying their work, they may be hesitant to continue investing in AI technologies.

Nor is this fight, as critics argue, a vehicle to cheapen the output of traditional artists. We are fighting to extend ownership protections to all creatives who utilize AI in their work, helping keep their livelihoods intact.

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know

Will AI Enhance or Destroy the Business of Media?

Matthew Chmiel
Head of Discourse

The largely agreeable July 28 virtual event sparked more introspection than discourse.

At our inaugural virtual event, we invited three guest experts to discuss the intersection of AI and digital media.

Nearly 30 of our ON_Discourse members joined us that day and were provoked by Dan Gardner and Michael Treff. I spoke to Dan before the event to get a sense of his goals for the discussion. “Everyone in our field expects AI to be a massive disruption, I was hoping to get past the hype and dig into the details.”

From my perspective, the discussion was light, almost too pleasant and agreeable to qualify as deep discourse, but the good news is that the conversation isn’t over yet. Check out some of our key takeaways and let me know if anything you see here makes you want to push this further.

As always, the pull-quotes are not and will never be attributed.

Provocations Used

  • AI will destroy all digital distribution models
  • Fake news is all news
  • LLMO (Large Language Model Optimization) is the new SEO

Discourse Rating

  • Agreeable — there were no obvious and direct disagreements in the group. Let’s turn up the heat next time! As always, send me a note and we’ll find a way to get your follow-up arguments into the mix.

Recap

This is going to destroy the
internet as we know it


Today’s internet is scraped and organized and ranked by Google. Those results are still, for the short term, listed as links that send users to owned and operated web pages. The introduction of chat-based interfaces, where LLMs process the information from the same web pages and then directly deliver that information, without links, is going to flip the paradigm and ruin the internet. It will homogenize the sources of information in the eyes of audiences.

The key takeaway here is the irrelevance of owned and operated pages – pretty soon, it seems, the need for websites will fade away – as long as the information is available and presented in the chat query.

— Provocation – AI will destroy all digital distribution models

—— what did you think?

Or…..

It’s putting agency back
in the hands of the user

The evolution of prompting LLMs is a three-way interaction between an individual, the LLM, and the community that is training the LLM. Publishers will have to think in new ways — they can’t just ship some content and stop thinking about it. The essence of this interaction model is that the content gets shaped by the prompts. This is empowering audiences in new ways.

Forget that nonsense about irrelevant web pages; this new interface and behavior (prompting and querying) are going to reveal new needs among users that will deepen their relationship with the true sources of this information. The fact is, the distribution model for digital media has been fundamentally broken ever since Web 2.0. This might actually fix it!

— Provocation – AI will destroy all digital distribution models

—— what did you think?

But….

AI needs an attribution protocol

Several members and guests focused on a specific software requirement that would address questions of reliability and trust. The brands, voices, and processes that delivered the information must persist, in some way, in the final delivery of generated information. In other words, if the LLM is scraping an article that comes from the New York Times, that brand and even that author must be clearly articulated in the answer. There is reason to believe this type of protocol can be administered because publishers have the leverage: LLMs need good source material to be relevant, and that puts the power back in the hands of the publishers.
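As a thought experiment, the attribution record attached to a generated answer might look something like the sketch below. The field names are illustrative assumptions; no such protocol currently exists as an industry standard.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical attribution record an LLM-generated answer could carry with it.
# Field names are illustrative assumptions, not an existing protocol.

@dataclass
class SourceAttribution:
    publisher: str   # e.g. "The New York Times"
    author: str      # the reporter or byline behind the source article
    url: str         # link back to the owned-and-operated page
    excerpt: str     # the passage the answer drew on

def attach_sources(answer: str, sources: list[SourceAttribution]) -> str:
    """Append a human- and machine-readable source block to a generated answer."""
    block = json.dumps([asdict(s) for s in sources], indent=2)
    return f"{answer}\n\nSources:\n{block}"
```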

— Provocation – Fake news is all news

—— what did you think?

On the other hand…

IT SUCKS

That’s the current state of audience trust in publisher platforms. Will LLMs really make it better? Anecdotal audience research shows that Apple News readers assume Apple News is itself a news publication and that the articles they are consuming come from Apple News, not from the brands and reporters who distribute through that platform. When you remove all links and bury attribution in generative text, you remove any chance of establishing a meaningful relationship between the audience and the brand that created the content.

— Provocation – AI will destroy all digital distribution models

—— what did you think?

… but wait…

Young people are savvy about finding trustworthy sources in the new internet

They have a radar for authenticity, which is the antidote to the synthetic content they’ve been receiving all their lives. They find authenticity from some brands, but mainly from influencers online. This isn’t always good, though, because not all influencers are trustworthy. On top of all that, we don’t know if the AI itself is going to contain bias that will influence its answers.

— Provocation – Fake news is all news

—— what did you think?

… and…

AI is just software – just like everything else on the internet

SEO emerged as a discipline after some engineers reverse-engineered the ranking algorithm. The same kind of reverse-engineering is going to happen with LLMs; even though they are exponentially more complex, it will eventually happen. And when it does, there will be more attributions embedded in LLM answers. This will drive up the fees that publishers can extract from LLMs, because good source material is one of the key requirements of an LLM; unlike on the internet at large, “shitposting will not get you anywhere on ChatGPT.”

— Provocation – LLMO is the new SEO

—— what did you think?

… finally

The Web is undefeated

Many of our members and guests have been working in digital media for decades and they have read (or written) many pieces about the impending demise of the internet – and still it persists.

— Provocation – LLMO is the new SEO

—— what did you think?