Monopoly or Oligopoly, Seriously? With an idea like that, we will soon be crowning Oligarchs.  

Dan Gardner
Co-Founder & Exec Chair, Code and Theory
Co-Founder, ON_Discourse

Larry Muller
ON_Discourse Co-Founder, COO of Code & Theory

or
Should Bored Apes
Be in Charge of AI

I do not trust Bored Apes, and they are not who should be regulating AI. But making Big Tech bigger by allowing them to be the regulators? That’s like wishing for bigger banks with more control. Ultimately, maybe not in the short term but in the long term, it ends badly. We already have a too-big-to-fail Big Tech problem; we don’t need to double down on it.

BIG TECH

When we think of Big Tech, Apple or Microsoft and trust don’t go together. I’m pretty sure that anytime either of them (or the others) has an opportunity to monopolize and create unfair advantages through its ecosystem, it does: Maps, browsers, messaging, the App Store, and so on. Ultimately, innovation gets stifled; the incentive to invent shrinks when your path can be crushed at any moment by the players who are the gatekeepers of distribution.

The smartest people in AI and Big Tech, including Sam Altman, have signed petitions and spoken to Congress about the need for regulation, but surprisingly, those smart people also don’t have any tangible solutions (or perhaps don’t actually want them). How do we expect them to regulate if they can’t even propose a solution?

Should not

Ultimately, the cat is out of the bag. There will be too many spin-offs in too many countries that can’t be regulated, all working on the advancement of AI. The best we can do is work on next-generation cybersecurity and design fail-safe switches that would allow systems to be shut down or disconnected should AI fall into a bad actor’s hands (or, worse, out of any human’s hands).

Every sector has regulation, so the government should continue to develop innovation-friendly regulation. Government, although not perfect, is not driven by profit; it is built to serve and protect. Regulation should remain in its hands, with elected officials who are held accountable to the people.

GET BIGGER

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know

MORE
ON_AI

Should Bored Apes Be in Charge of AI?

Larry Muller
ON_Discourse Co-Founder, COO of Code & Theory
Dan Gardner
Co-Founder & Exec Chair, Code and Theory
Co-Founder, ON_Discourse

Do you believe in free markets? I do. I know a lot of people do. Competition drives innovation and fairness to the consumer. Regulation is stifling. It slows growth and free enterprise and prevents innovation.

Maybe.

Or maybe not.

or
Big Tech Should 
Not Get Bigger

Who do you trust? Bored Apes? Sam Bankman-Fried? Kevin Rose? Facebook? TikTok? Should they all have an opportunity to participate in AI? Have they earned it? There are thousands of others I could put on the list. They are not necessarily “bad” enterprises or “bad” people (some are), just companies and people who have not earned our trust. Imagine the mess they would all create.

Two companies that have earned our trust are Apple and Microsoft. Over the past 30 to 40 years, they have performed through internet bubbles, financial crises, pandemics, recession, and inflation. They have satisfied the consumer in many ways and have not been accused of stealing (except maybe from each other). You may not agree with the 30% vig on the Apple App Store, but it’s far from stealing. It has created safety and standards that provide trust at scale for consumers.

These are two trusted vendors to the world. Many startups became rich doing business with both. As we complain about their control and the unfairness of their system they have helped change and enrich the world. Why wouldn’t we want them to be the stewards of the AI revolution? Can you think of better companies to safely lead us? 

They are like a better version of the cable companies of 30 years ago, when there was no competition but everything was priced fairly and pretty much everything worked. Yes, everyone “hates” the cable company, but the value exchange worked overall. Their service could always improve (and it has over the years), of course; nothing is perfect. People complained about rising costs, but it beats the cord-cutting world we have now, where we subscribe to multiple streaming services. Most people’s costs are higher now, with no unified bill and little understanding of what they are paying for.

Monopoly

Another example: the phone companies of the ’60s and ’70s, or the utilities. Boring, regulated, and licensed. One choice, no confusion about value, no scammy deals, no discounts rolling off and leaving you billed at three times the rate. Choice is not always better.

There is a lot of talk about AI regulation. You even have Sam Altman going to Congress and acknowledging the need. Some have argued that he has openly asked Congress for it because he knows the government has no ability to deliver it under the current spectrum of thinking. AI’s speed of innovation is almost too fast for a governmental agency to stay relevant, with all the top-paying jobs going to the private sector. Government is typically inefficient in its approach and mired in challenges the private sector doesn’t face. So what is the balance between government control and regulation and the freedom of the capital market that has spawned such innovation? A license to Big Tech, similar to the cable, phone, and utility companies mentioned above. It’s not perfect, but it has worked.

or Oligopoly

Why not sell them a license to run the AI infrastructure and close the door behind them? Build the moat. Charge them each $500,000,000,000 (that’s 500 billion dollars) for 10 years and let them go to work. Still, regulate the shit out of them. Let Apple make its 30%, but they can police the AI models and keep us safe. They have proven they can operate at that scale with the App Store. Really, who else are you going to trust? They earned it; let them have it. And let them pay for it. And let them, hopefully, make money off of it. It’s a system that works.

We hear it’s not fair to other entrepreneurs. Tough shit. Let them make the killer apps. Let’s put the grown-ups in charge. Really, just imagine for a second if the free-for-all we experienced with Bored Apes and Moonbirds and Binance happened in AI, and the risks we would all be taking putting that trust into the hands of greedy amateurs. Someone will win; let’s predetermine that it is someone we can trust.

Clearly, more details need to be worked out. That is welcome, even the disagreement. But we’ve yet to see an actual solution better than this one, because history has shown us that a private market with regulation is the superior way to allow innovation while providing control, all while generating revenue for the government (through licensing) instead of spending taxpayer money on an inferior solution.



Boomers vs Juniors:
Easy choice for emerging tech

Larry Muller
ON_Discourse Co-Founder, COO of Code & Theory

It’s a clear answer. Juniors (new workplace entrants) have been using technology their whole lives. It’s second nature to them. Of course, they have a clear advantage. You build a team of young energetic workers, you get great creativity, teamwork, and the natural depth of skill that has been honed since birth. A company filled with these go-getters should certainly have a leg up.

On the flip side, you can piece together a team of retreads who are hanging on in the workplace. They have never seen this particular technology before, and all tech seems to be a struggle for them. They are stuck in old ways of thinking and fall back on their old ways of solving problems. A company filled with these people must be destined to fail.

BUT

The potential of generative AI changes all of that. Its best ability is executing tasks. It can write code. It can write copy. It can create imagery. Easier and faster. You don’t actually need technical proficiency to execute almost anything, because the technology reduces execution to simple semantic commands. And the advancement continues to compound, so it can execute more and more with proficiency and ease.

What can AI not do? Use Judgment.

So is AI changing the age-old story where the older you get the less relevant you get? Who will really provide outsize value in a workplace defined by increasing automation and the need for great judgment?

The Boomers

Because of their years of experience and perspective

They:

  • Are thrilled about the opportunity.
  • Understand that they don’t have all the answers.
  • Have adapted to new technology many times in the past.
  • Are eager to mentor others and be good teammates. They know that it’s not about themselves.
  • Understand that the clock is ticking. They’re not trying to conquer the world; they simply want to help the team win.
  • Know it’s not about what they get, it’s about what they give.
  • Are a proven hard-working commodity. Sure, they may need a nap, but they will be back refreshed.
  • Have a perspective that drives creativity, humility, and most importantly they have JUDGMENT.

The Juniors

Because of their lack of experience and perspective

They:

  • Think they know.
  • Think it’s about them.
  • Don’t have much of a clue about leadership and the importance of selflessness.
  • Are more worried about what they get, not what they give.
  • Don’t want feedback. They want to be right.
  • Have limited judgment, however, they believe their judgment must be right.


It’s Game Over for Volunteer-Driven Social Media

Reddit’s blackout issue points to a chasm between its ownership and its volunteer moderators. It also points to the fact that those moderators, like other prominent social media users, should be paid.

Ernie Smith
Ernie Smith is the editor of Tedium, a twice-weekly internet history newsletter, and a frequent contributor to Vice’s Motherboard.

There’s probably no stronger sign that social networks’ tendency to lean on the free work of their users is a losing strategy than the drama happening at Reddit.

You’ve probably heard about the saga that emerged after Reddit decided to begin charging for its API—for one, claims that popular app creators would be on the hook for bills in the tens of millions of dollars. But in many ways, the real story is how quickly the site’s own volunteer moderators turned on the network, leading to a blackout with lasting impact and a CEO who has responded defiantly to his users.

These moderators largely aren’t talking about the blackout in financial terms. But perhaps they should be. In a recent interview with NBC News, CEO Steve Huffman implied that the problem was that the leaders of these communities had too much power over how they were run, even though they run them on a volunteer basis.

The people who get there first get to stay there and pass it down to their descendants, and that is not democratic,

Huffman said in reference to long-time moderators of the platform.

Huffman’s comments hint at something that I have noticed from many social networks over the years: A failure to see what leaders of online communities do as worthy of compensation or an equity stake. It’s a structural issue, one that appears to have existed from the start, but has reentered the public conversation recently.

Those who have been watching closely, however, might have seen signs of this problem simmering beneath the surface. Since 2020, Reddit has been legally entangled with the founder of a popular subreddit, r/wallstreetbets. Jamie Rogozinski, along with other moderators, built the group to high-profile mainstream success, but when he took steps to commercialize the group—selling a WallStreetBets book and filing for a WallStreetBets trademark for merchandising, media, forums, and entertainment—Reddit booted him from the group, claiming what he was doing wasn’t allowed.

Reddit then formally opposed Rogozinski’s trademark filing on the grounds that it owned the community and that the trademark would create confusion in the market. Rogozinski sued Reddit in early 2023, claiming that Reddit’s terms of service, which it says Rogozinski violated, effectively make it possible for the company to steal the intellectual property of its users.

“My real issue stemmed from trying to claim ownership over my creation,” he wrote in an IAmA thread. “Reddit systematically takes intellectual property from its users by registering trademarks, and I posed a threat to this.”

In effect, the WallStreetBets creator—whose subreddit directly inspired a forthcoming Hollywood movie starring Paul Dano and Pete Davidson—is challenging the legality of a policy that, in Reddit’s view, favors the power of the crowd over the work of the individual creator.

Rogozinski’s legal action is part of a long legacy of lawsuits by volunteers who felt their relationship with a tech company had crossed the line into work.

What Reddit Has in Common With AOL

In the early boom years of the internet, between 1990 and 2000, America Online convinced thousands of volunteers to take on various support roles to help keep its landmark online service running.

At first, users received free accounts and access hours in exchange for this work, which developed out of the same sense of community spirit found in other online communities like The WELL and CompuServe, as well as later ones like Reddit.

These roles, like on Reddit, were initially seen as collaborative. But as AOL grew larger and more dominant, its valuation grew to massive numbers, and the program grew increasingly exploitative. That led to claims the company was running, as Wired put it, a “cyber-sweatshop.”

This led to lawsuits, both individual and class-action, that ultimately took a decade to resolve. By 2005, after years of negative press, AOL no longer had a volunteer program. A couple of years later, the company paid a $15 million settlement to its army of volunteers.

A more recent parallel also involves a company AOL at one point owned, The Huffington Post, which used a free contributor model for years. It, too, faced lawsuits over the matter—and it, too, shut down the model. When AOL bought HuffPost, observers pointed out the parallels between the news outlet and AOL’s own volunteer army.

There’s a cultural chasm that networks like AOL can cross when something turns from volunteering into working for free. With the recent conflict, Reddit likely crossed it. It could find itself in similar hot water with its moderators, especially if Huffman makes good on his threats to boot some of them out.

Paid Creators & Moderators, Not Volunteers

Generally, I am strongly in favor of social networks having models that support the contributors who play important roles in their networks, whether prominent creators, popular influencers, or moderators. Sadly, and frustratingly, many do not.

All commercial social networks should have tools that make creator monetization possible; those that don’t are failing at their job as social networks. When monetization is not baked into the network’s DNA, you gradually see problems like the ones Reddit is currently facing, where the goals of the user base come into opposition with what ownership wants. If moderators were getting some kind of direct financial support, the odds of this happening would decrease.

Compare this to YouTube, a platform that was founded just four months before Reddit, and has a thriving community, in part because creators are compensated based on the success they bring. 

You might think that YouTube and Reddit are apples and oranges, but when you break it down, the work of a YouTube channel operator is in many ways similar to that of a Reddit moderator: keeping a close eye on comments, setting an overall vibe, and developing content for the community to react to. The main difference is that Reddit moderators generally don’t make videos.

Now, Reddit could do things like share banner ad revenue with moderators, offer tipping functionality, or let moderators paywall content, but unlike many creator communities such as Substack or Patreon, they don’t. (But, as recent events have shown, they can still get fired.)

And yes, creators do care about this: When the Twitter-owned Vine refused to pay its largest creators and didn’t solve longstanding product issues, many of them went to YouTube—likely killing Vine in the process.


Future social networks may be less likely to make the mistake Reddit is making. Earlier this year, I talked to Nico Laqua, the CEO of a network called Picnic, a Reddit-meets-TikTok site that is growing in popularity with teenagers. He specifically cited the issues with r/wallstreetbets in choosing to build a model where moderators will have fractional ownership and revenue-sharing equity in the groups they build and support.

“We want to take the exact opposite approach—the YouTube approach, where we not only share advertising revenue with the communities but allow them to own and govern themselves in whatever way makes sense,” he told me.

If Picnic someday finds itself picking up some of Reddit’s users, this design structure could make all the difference.

When a social network refuses to compensate its best users, it can all too easily turn into an us-against-them fight—and it can poison the community for good.


Moish E. Peltz, Esq.
Partner, Falcon Rappaport & Berkman
Intellectual Property Law and AI: The Future of Athlete Branding

The rapid technological advancements in artificial intelligence are reshaping numerous industries, with sports and athlete branding emerging as a potential area of transformation. However, burgeoning AI technology raises compelling legal and intellectual property issues for athletes, law practitioners, and other stakeholders.

Traditionally, professional athletes have been able to commercialize and protect their name and brand. More recently, since the U.S. Supreme Court’s June 2021 ruling in NCAA v. Alston, even amateur collegiate athletes have increasingly benefitted from this same concept, after gaining the ability to monetize their name, image, and likeness (NIL). 

Now with the advent of powerful consumer AI tools, all athletes have the means to use their brand and likenesses to generate new revenue streams. Examining past experiences in licensing likeness can shed light on these opportunities and concerns. 

AI will

Consider the music industry. In April, artists Drake and The Weeknd suddenly had a new hit collab titled “Heart on My Sleeve.” The song had hundreds of thousands of streams on Spotify and Apple Music. But, as it turned out, the song was an AI-created knockoff. Neither of the artists had actually recorded the music.

In response, Universal Music Group, the artists’ record label, swiftly moved to have the song removed from major streaming platforms, and went a step further, asking the platforms to prevent AI companies from using its catalog to train generative AI tools. Another artist, Grimes, went the opposite way, moving to “open-source” her voice using AI tools and allowing fans to share 50% of the royalties generated through her newly created GrimesAI-1 voice generation platform.

A recent television commercial featuring an AI-generated young Charles Barkley demonstrates another potential use of AI-created content in advertising. Sports betting company FanDuel created an authorized version of Charles Barkley in his NBA prime to star in a television ad opposite the present-day retired player-turned-announcer, to hilarious effect. Here, the laws that govern the commercial use of an athlete’s likeness could apply, but new stipulations addressing AI applications would be crucial.

Transform

And then consider the video gaming industry, where athletes have licensed their images for use in popular games like FIFA and Madden for decades. Recent AI technology promises to let gamers interact with real-time generated avatars whose in-game voices and faces respond to a gamer’s input, such as questions.

Given that we have already seen popular social media personalities that are entirely robotic or virtual, such as Miquela or FNMeka, it is not difficult to imagine interacting in real time with your favorite athlete, perhaps even in a future version of FIFA or Madden.

Generative AI can already help create engaging avatars by simulating an athlete’s tone in social media posts or in interactive chatbots. The possibility of personalized fan engagement through these kinds of AI-generated means could bolster an athlete’s brand in unprecedented ways.

Athlete

This arrangement is potentially beneficial for many in the industry. Gaming companies can release more engrossing games. Gamers could get a more interactive experience of their favorite players, albeit an AI-generated version. Athletes and leagues stand to earn more revenues and establish a more personalized connection with their fans. 

However, as athletes enter this new era of branding, they will want to consider the legal implications, looking at the current landscape as a guidepost for licensing AI-generated use cases. For example, the extent to which AI-generated works are protected by US copyright law, if at all, is now in question, and many open legal questions remain. Such a world would require athletes, teams, leagues, agents, lawyers, and the industry writ large to reimagine already complex contractual agreements, necessitating legal adaptations to accommodate AI technologies.

Some bedrock principles still apply. AI’s potential impact on personal branding in sports could be colossal, but athletes must still take a proactive legal approach as they build, protect, and grow their brands, secure any relevant IP rights, and operate with a holistic, long-term outlook as they seek to monetize their NIL rights.

Branding

or
Did Tom Brady
Really Say That?
Tony Iliakostas
Adjunct professor of Entertainment Law and IP at New York Law School

Moreover, it’s crucial to consider potential legal risks or drawbacks to an athlete’s brand. The use of AI technology raises the possibility of diminishing brand value if not properly managed, leading to a watered-down fan experience. As AI tools become more sophisticated, bad actors could create unlicensed deepfakes of athletes, leading to misuse, misrepresentation, or outright fraud. These realities underscore the importance of robust IP laws that are adaptable to technological changes, and that athletes and their advisors know how and when to use legal remedies.

Athlete branding will evolve along with the next generation of AI technology, and so existing IP laws must evolve to address the complex challenges that might arise. While there is no doubt that AI holds exciting prospects for athletes, it also opens a potential Pandora’s box of legal issues. Athletes, alongside their advisors, lawyers, and agents, will need to navigate this uncharted territory with foresight and diligence, ensuring the protection of their likeness in the evolving landscape of AI technology.

This article is for informational purposes only and should not be construed as legal advice. Feel free to reach out to Moish Peltz, Intellectual Property Practice Group Chair at Falcon Rappaport & Berkman LLP, with specific questions at mpeltz@frblaw.com.



Media Companies Shouldn’t Reject Generative AI

They Should Build Their Own

Matthieu Mingasson
Head of Design Transformation at Code and Theory
or
Disclosing AI Use
in Reporting: It's Futile
Michael Nunez
Michael Nunez is the editorial director of VentureBeat, where he leads the coverage of artificial intelligence and enterprise data.

The responses from the media industry to the explosion of generative AI have been sharply divided. We’ve seen fear over potential job loss, dire warnings over the potential for AI-generated misinformation, and some tepid statements about its positive potential for the industry.

Smart publishers will realize that there’s enormous potential. By acting quickly and boldly, these companies can find new ways to drive value and monetize this disruptive technology.

Globally, the market for AI in media is anticipated to hit $99.48 billion by 2030. There’s potential across the newsroom, from improving and scaling workflows to content management and personalization.

It’s easy to view disruptive technologies, like LLMs, with distrust. Many organizations are uncertain about where to begin.

Rather than fear generative AI, media companies should approach it as a springboard for business innovation. Rather than reject generative AI models, they should build their own.

By building their own LLMs, media companies can chart their own course in a rapidly changing landscape and help create the future of brand experiences.

Train it yourself

Commonly used conversational AI services like ChatGPT and Google’s Bard have an incredible ability to mimic human language, assembling words using a technique called “word embedding,” which organizes words and sentences by their semantic proximity. This technique produces compelling, accurate responses when the subject matter is well represented in the LLM’s training corpus. But when concepts or information are missing from that corpus, GPTs fill the gap with fabricated, approximate answers that can be plainly wrong. We call those answers “hallucinations.”
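To make “semantic proximity” concrete, here is a minimal sketch of how embedding similarity works. The word vectors below are hand-invented toys (real embeddings have hundreds of dimensions learned from data); the words, values, and dimensionality are assumptions purely for illustration.

```python
import math

# Toy 3-dimensional word vectors; the values are invented for illustration.
# Real models learn these vectors so that related words end up nearby.
embeddings = {
    "king":   [0.90, 0.80, 0.10],
    "queen":  [0.88, 0.82, 0.15],
    "banana": [0.10, 0.20, 0.95],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: closer to 1.0 = more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Semantically related words sit closer together in embedding space.
related = cosine_similarity(embeddings["king"], embeddings["queen"])
unrelated = cosine_similarity(embeddings["king"], embeddings["banana"])
print(related > unrelated)  # True
```

A model generating text by chaining words through this space sounds fluent whether or not the underlying facts exist in its corpus, which is exactly why hallucinations read so convincingly.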

There are other drawbacks to these models. ChatGPT, for example, isn’t a dependable or exhaustive source. The system is trained on data up to 2021, which means that companies that rely on real-time information, like media outlets, will be working with outdated information. It is also trained on a range of internet text that can include biased data and misinformation, and filters aren’t yet robust enough to identify inappropriate content.

This is very much a challenge for media organizations, which deal in facts, real-time data, and news; it implies that every single word produced by a GPT must be verified by a human.

So in this context the question is: How can media companies leverage generative AI technology to accelerate content identification, production and distribution, while maintaining competitive advantage against search engines? 

At a high level, news platforms produce a range of content that can be mapped along a spectrum running from “pure fact” (weather, stocks, sports results) to “pure story” (political op-eds, interviews, criticism), with everything in between. Search engines long ago won the battle of delivering pure facts directly to your phone, so audiences often don’t even need to visit news websites for that information.

Publishers remain, for now, the true owners of interesting stories, authorship, passionate debate, and opinion. But the new generation of LLM/GPT-based search engines and conversational bots seems to be climbing up the information spectrum, from pure fact toward “human-sounding” stories, thanks to its ability to mimic human language.

Thus, a new competitive landscape is appearing, one in which search engines are no longer limited to delivering weather and stock prices.

In light of this new competitive landscape, media organizations cannot wait for the tectonic shift to happen. They must begin training with generative AI today, even while it is imperfect, unreliable, and untrustworthy: build muscle with simple AI-powered newsroom workflows, train their teams to use it, get smart on how to train their own LLMs, and automate content creation and distribution in lower-risk categories, all while keeping search engines at bay.

Using generative AI to customize your content

Like other forms of AI, LLMs can be adapted and customized to suit a specific domain and use case. The media industry is already built around creating and curating content for audiences. Generative AI is simply another tool to facilitate this.

Media companies can develop their own LLMs to augment their brand voice and enhance storytelling. But they have to focus on credibility and the authorship of their content. 

Media organizations need to embrace AI now so they can learn how to swim in shallow water. When the ocean comes, they will be ready.

Existing open-source LLMs already provide an advantage for companies looking to utilize generative AI. A media company can customize an existing foundational model – one where a great deal of development has already been achieved – by training the LLMs on proprietary, internal data in a secure environment. The result is a “fine-tuned” LLM that is purpose-built for the specific use case of the media organization.
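Conceptually, fine-tuning is just continued gradient training of an existing model on new, domain-specific data. The sketch below illustrates that idea at toy scale: a tiny logistic-regression classifier stands in for the foundation model, and randomly generated feature vectors stand in for a publisher’s proprietary text. The data, dimensions, and learning rate are invented for demonstration; a real fine-tune would start from open-source LLM weights rather than this stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained model's weights. In practice these would come
# from an open-source foundation LLM, not a random initialization.
w = rng.normal(size=4)

# Toy "proprietary" dataset: 64 feature vectors whose labels follow a
# hidden pattern, standing in for a publisher's in-house domain data.
X = rng.normal(size=(64, 4))
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = (X @ true_w > 0).astype(float)

def loss(w):
    """Cross-entropy of sigmoid predictions against the labels."""
    p = 1 / (1 + np.exp(-(X @ w)))
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

initial = loss(w)

# "Fine-tuning": continue gradient descent on the new domain data.
for _ in range(200):
    p = 1 / (1 + np.exp(-(X @ w)))
    grad = X.T @ (p - y) / len(y)
    w -= 0.5 * grad

final = loss(w)
print(final < initial)  # the model now fits the domain data better
```

The same principle scales up: the fine-tuned weights retain what the foundation model already learned while adapting to the organization’s own corpus, which is what makes a purpose-built, brand-specific LLM feasible without training from scratch.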

Creating Future Brand Experiences with Generative AI

Whether companies are ready or not, the future of the media revolves around generative AI.

Bloomberg is leveraging freely accessible, off-the-shelf AI techniques and applying them to its substantial repository of proprietary data. Bloomberg GPT, as its technology is dubbed, is built using the same foundational technology as OpenAI’s GPT.

The Bloomberg GPT model is trained on non-financial sources from across the web, like YouTube subtitles, combined with 100 billion words from datasets the firm has accumulated over twenty years.

This addition of Bloomberg’s unique training data improved performance and accuracy for financial tasks to such a degree that the company intends to integrate Bloomberg GPT within various services and features.

Media companies should take inspiration from early LLM and generative AI adopters when approaching their own generative AI strategies.

In this rapidly unfolding landscape, media companies can set a bold standard for industry innovation by iterating upon the same powerful generative AI tools that large enterprise companies are already capitalizing on.

News agencies can use generative AI for data analysis and for developing content according to user preferences and trends. Music production companies can use it for composition and mastering based on a user’s mood or preferred genre. The ways LLMs and generative AI could transform the future brand experience are abundant.

If organizations can navigate the pitfalls of generative AI – like questions around accuracy, bias, trust, authorship, data, and brand experience – they can position themselves for both scale and innovation.


Tony Iliakostas
Adjunct professor of Entertainment Law and IP at New York Law School
Did Tom Brady Really Say That?

During a stand-up comedy routine in late March 2023, former NFL MVP and seven-time Super Bowl champion Tom Brady addressed the question that had probably been on a lot of people’s minds since he finalized his divorce from his ex-wife Gisele Bundchen in October 2022:

“The answer is yes, I’m still having sex with supermodels. That’s never going to change.”

He went on to talk about his playing career, defecating, World War 3, and much more. But here’s the thing: Tom Brady never said these things.  

Tom Brady’s “comedy routine” is the product of Will Sasso and Chad Kultgen, the hosts of the podcast Dudesy, who used generative artificial intelligence to create a one-hour-long comedy routine that mimics the voice of NFL great Tom Brady.

This comedy routine was available as a perk on Patreon to Dudesy subscribers, but the bit quickly proliferated across the internet. Other podcasts, including the Pat McAfee Show, commented on how lifelike and realistic Tom Brady’s comedy routine sounded.

Unfortunately, this came at a cost; TMZ reported that Sasso and Kultgen received a cease-and-desist letter from Brady’s attorneys, alleging that Dudesy infringed on Tom Brady’s personality rights, namely the use of his voice, to create this comedy special.

According to Sasso and Kultgen, the same cease-and-desist letter threatened a personality rights infringement lawsuit and demanded that the fake comedy special be removed from all social media platforms and anywhere the comedy routine was disseminated. Sasso and Kultgen complied.

The Broader IP and Constitutional Implications of Generative AI Use Among Athletes

In April 2023, the German tabloid magazine Die Aktuelle published an interview with famed F1 racer Michael Schumacher. According to the tabloid, it was the first time Schumacher had publicly spoken to the media since suffering a near-fatal brain injury in December 2013 while skiing. In the article, Schumacher commented on the skiing accident that forced him to withdraw from public life. At the conclusion of the article, however, the magazine disclosed that the interview was, in fact, completely AI-generated. Schumacher’s family was extremely upset, and rightfully so, and planned to pursue legal action against the publication. The editor-in-chief of the magazine was fired.

Generative AI platforms like ChatGPT, Midjourney, and Dall-E have proliferated at an exponential rate. As of January 2023, OpenAI reported nearly 100 million users had signed up for ChatGPT since its public launch in November 2022, and Dall-E reached 1 million users after 2.5 months. According to Statista, the workplace adoption rate of generative AI in the United States averages 28% across Gen X, Millennials, and Gen Z. There’s also no denying how innovative generative AI technology truly is. You can go into ChatGPT and ask the system to write a poem about what a great basketball player Michael Jordan was, and the AI will do all the work for you. AI systems can generate artwork of things we could never conceive of in real life. But while the creative potential of generative AI is clear, there are also plenty of risks and unanswered questions, and a great deal of legal ambiguity stems from AI creation and use.

This isn’t the first time, though, that generative AI has gotten into hot water over an athlete’s personality rights.


Most legal issues involving generative AI stem from copyright law. In a recent decision, the US Copyright Office made it clear that AI-generated works, including AI-generated artwork, will not receive copyright protection because they lack the human authorship required under copyright law. But perhaps one of generative AI’s biggest foes is personality rights law.

Unlike copyright, trademark, patent, trade dress, and trade secret law, personality rights are the only area of intellectual property law not regulated at the federal level. Currently, only 25 states have adopted personality rights legislation, also known as the “right of publicity”; other states apply a common law standard. Regardless, the standard is the same across the board: personality rights cover the use of one’s name, image, and likeness for commercial purposes. If anyone uses someone’s name, image, or likeness commercially without that person’s consent, that qualifies as personality rights infringement. One of the cornerstone personality rights infringement cases is the iconic 9th Circuit decision in Midler v. Ford Motor Co.

In this case, Ford approached Bette Midler and her agent for permission to use her song “Do You Want to Dance” in a commercial campaign for Lincoln Mercury cars. Midler denied permission, so Ford instead hired her backup singer Ula Hedwig to sing the song like Bette Midler. The commercial went live, and Midler sued Ford, arguing that recreating her voice infringed her personality rights under the California Celebrities Act. The 9th Circuit sided with Midler, holding that the use of her voice (albeit a recreated version) to solicit car sales infringed her personality rights, and adding that Midler’s voice is as much a part of her identity as her actual name, image, and likeness.


Bringing this back to generative AI, it’s evident that using AI to recreate one’s personality rights, including one’s voice, can create serious legal issues. Cases like Midler v. Ford serve as a warning that using AI to imitate a celebrity or athlete is infringement-worthy activity if you are using that material for commercial purposes, like an advertisement. And then you have situations like the Brady AI comedy routine, where Tom Brady’s voice was recreated to make crude jokes. While some legal critics may argue that such activity infringes on Brady’s personality rights, many, myself included, would argue that it is freedom of expression protected under the First Amendment. Parodying a public figure is something the US Supreme Court has regarded as a fully protectable interest under the Constitution. Essentially, Dudesy’s AI-generated Tom Brady comedy routine is no different from SNL doing a parody of Donald Trump or Joe Biden if the whole point is to parody their personas.

But what about Michael Schumacher’s fake interview with a German tabloid magazine? I would lean toward saying that, in the United States, such behavior is defamatory. Defamation occurs when someone makes a false statement about someone else that damages their reputation; its reach is limited by the First Amendment, and public figures carry the added burden of proving actual malice, meaning the offender knew the statement was false or made it with reckless disregard for the truth. While the Schumacher interview would be governed by German law, it would be hard to argue that such behavior in the United States does not meet the threshold of defamation. Imagine if a newspaper or news outlet framed an AI-generated interview with an athlete as a real one that paints that athlete in a bad light. Not only is there an ethical dilemma, but such behavior triggers a cause of action for defamation.

It’s evident that using AI to recreate one’s personality rights, including one’s voice, can create serious legal issues.


Despite these risks, I think the innovation of generative AI throws the door wide open to creativity and new ideas. As it pertains to the sports industry, generative AI presents a plethora of opportunities. However, there are more questions than answers concerning artificial intelligence and its place in the legal field, and the lack of regulation at the local and federal levels doesn’t help matters. At the end of the day, we must regard the AI landscape as the Wild West: a poorly regulated terrain, but one where prior case law and existing statutory language give us a roadmap for regulating this budding industry. In the meantime, athletes should be cautious about working with any brands or entities using generative AI and should ensure that they are legally protected by way of a formal contract.



An NBA Player Surveys the AI Opportunity

Spencer Dinwiddie
NBA Athlete and Entrepreneur

The first line of my bio reads "NBA player," but I always tell people I’m a tech guy with a jumper. My fascination with emerging technology was the prime motivating factor for co-founding Calaxy, a new platform empowering Creators to connect with their fans and monetize their brands more easily than ever before.

That fascination with new tech is also why I jumped at the opportunity to invest in Genies and explore the worlds of AI, blockchain, and virtual and augmented realities.

Recently I fell down a rabbit hole thinking about the potential of AI-generated avatars and conversational bots and how they might help revolutionize fan interaction.

Before this technological era, a person couldn’t be in two places at once or appear to come back from the dead. But this technology — which seemed so novel and impossible to fathom — is now right at our fingertips.

The application of AI-generated avatars seems obvious now. It could allow me, or anyone else with so many demands on their time that they have to say no to things they’d like to be doing, to work in new ways. I could take video footage, photos of myself, words I’ve authored, and other content I’ve generated, and use these materials to help train AI models in the hope that they can extrapolate my personality and then appear and interact on my behalf, sort of like a digital clone that works for me.


Anyone can see the benefits of this. People who want to interact with me but otherwise wouldn’t have that access are suddenly able to. And from my end, I’m able to say yes to things that otherwise wouldn’t have fit into my schedule.

It’s a game-changer.

And because AI models can evolve, digital representations of my personality could learn and grow over time.

This digital version of me would also be more knowledgeable about the world because it would be digesting information at a rate impossible for humans to keep up with. So it would pretty quickly become a smarter and wittier version of myself, if that’s something you can imagine.

But, AI-generated avatars could have some potentially terrifying results as well. With this tech, deep fakes are a real concern, especially as the line between what’s real and what isn’t gets blurrier and blurrier over time. And what happens if my AI-generated avatar is interacting with a fan and says something horrible? The backlash for those words or actions will land at my feet and impact my brand, not some AI-generated version of me. The avatar doesn’t have a reputation to worry about, but I do.

We need to figure out how to use this technology responsibly and ethically.


There is some suggestion that blockchains and other immutable technologies could help mitigate some of these concerns because they can ensure credibility and verify the authenticity of content, but we’re not going to be able to put the genie back in the bottle, no pun intended.

Another challenge is the lack of regulatory clarity. AI is advancing so quickly that it’s tough to keep up with the rules. We need to figure out how to use this technology responsibly and ethically. It’s a learning process, and we have to be careful not to cross any boundaries that we’ll regret later.

And let’s not forget the bigger picture. AI, if not properly controlled, could become a serious threat to humanity. We’ve all seen the Terminator movies and I’m not trying to be John Connor, so we’re going to need to be cautious and establish safeguards to prevent any unintended consequences. We don’t want to unleash something that we can’t handle.

We just have to remember that we’re still in the early stages of generative AI. If this were a basketball game, there’d be 20 seconds left on the shot clock of the first possession of the first quarter. There’s a ton of game left to play and we’re just scratching the surface of what this technology will ultimately do. It’s an exciting time, but we need to approach it with an open mind. We’ll learn as we go, and it’s important to keep evaluating, adapting, and having conversations about the best ways to use this technology…like enabling me to be swimming at the beach in Cannes and on-stage delivering a keynote address simultaneously.



Is AI a Goldmine or a Landmine For Athlete Brands?

Toby Daniels
Co-Founder, ON_Discourse

You're a sports fan in 2028.

You log into your fitness app and a chat pops up -- it’s an AI workout assistant that looks and talks like your favorite NBA star. You input your training goals for this week and get back a personalized diet and training plan based on your needs.

You pop on your VR headset and fire up Madden 2028 -- you scroll through a roster of thousands of hyper-realistic AI-generated players from throughout time and create your team. You head onto the field.


Later, you turn on SportsCenter and watch as Stephen A. Smith debates an AI-generated young Charles Barkley. They’re arguing over who’s wearing the sharper suit, which you can buy using your Apple Cash directly from your headset.

AI – with its seemingly endless ability to create, analyze, and mimic – is transforming industries at a breakneck pace. Athletes are uniquely positioned to capitalize on this tech to reimagine sports branding. 

Athletes, bound by the constraints of time and resources, now have the potential to leverage their likeness and scale their brands in innovative ways to engage fans.

AI also poses considerable risk. It’s uncharted territory with pitfalls that range from unauthorized deepfakes to AI-generated communication that fans find inauthentic.

This new age might be a goldmine for athletes, but they’ll have to avoid the landmines first.

AI’s potential role in enhancing personal branding in sports cannot be overstated.

Already, AI-driven analytics can synthesize vast amounts of data about fans' behaviors and preferences, allowing athletes to tailor their brands to better target individuals. 

Personalized fan experiences, from AI-curated content to virtual meet-and-greets, are poised to redefine the fan-athlete dynamic, creating a stronger and more direct connection. Generative AI models could be trained to replicate an athlete’s voice and tone. These models could then be used to create unique content in the style of the athlete, which could then be targeted at fans who are most interested in the topic. 

The economies of scale enabled by automation, including aspects of content creation, also enable athletes with smaller followings to boost their brands and reach more fans. This shifts the power dynamics within the sports industry, placing control back into the hands of the athletes.

But these tools are not without risk; there is also the potential for AI to erode the value of an athlete’s brand. Automated content, while efficient, lacks the genuine human touch and authenticity that fans often seek. The uniformity resulting from AI processes might also lead to diluted brand identity, reducing differentiation and competitive edge.

Equally, the potential for misuse is significant.

Deepfakes, AI-manipulated images or videos that often appear authentic, present a risk even today. As the technology improves – and the ease of creating convincing synthetic media rapidly increases – public figures will have to reckon with the potential consequences of false narratives being planted by fake versions of themselves. The current legal framework, predominantly designed for a pre-AI era, struggles to tackle these novel challenges.

Image rights, the linchpin of an athlete’s brand, face an unprecedented threat with the advent of AI. The ability of AI to create lifelike digital personas of athletes, and use them in a myriad of contexts, raises complex issues surrounding consent and ownership. An entirely new framework for licensing athlete likenesses – and for objecting to the use of unlicensed, AI-generated likenesses – is needed. 

The Threat to Brand Ownership and Authenticity

Need for Regulation

Contracts and the legal framework need to evolve to address the challenges posed by AI, protecting athletes' image rights and preventing misuse. Transparency and ethical considerations must guide the deployment of AI in sports branding, ensuring it enhances rather than detracts from the athlete’s brand value.

The emerging age of AI offers a wealth of opportunity and a chance to redefine the athlete-fan relationship. In the delicate balance between scaling and protecting an athlete’s brand, AI represents both a goldmine and a landmine. As we chart this new terrain, the challenge lies in unlocking the promise of AI while safeguarding against its perils.

Are these fears going to stifle creativity, innovation, and commercial growth? Humans have proven themselves eager to jump at hyped, if unproven, tech that promises financial gain. Just look at crypto! Will athletes fall into the same trap and pivot too quickly to using AI to redefine their personal brands? Maybe? Probably.



Do You Want to Change Your MIND?

Dan Gardner
Co-Founder & Exec Chair, Code and Theory
Co-Founder, ON_Discourse

When was the last time you felt like you had your mind changed?

When you engaged in a discussion with someone you respect, someone with the experience to speak knowledgeably about a topic but a completely different point of view from your own?

Imagine a space where you would find yourself thinking differently about the most important decisions you make.

As the founder and chairman of a creative and technology-driven consulting business with almost 2,000 employees, I think creating space to be challenged and to be able to challenge others' perspectives is essential to business success.

So why a members-only media company?

We believe “discourse” has become a bad word in media. In practice, it’s been used as a form of trolling or to express negativity about someone’s opinion, instead of embracing curiosity about a different opinion and the understanding that can lead to more informed decisions. A trusted space to publish and engage in conversation with those who value discourse is exactly what business leaders need.

We believe that the world of business and technology today is filled with fake experts, often confusing the narrative and, ultimately, decision-making. At ON_Discourse, we bring together practitioners to share perspectives about the work they do day in and day out. They share an understanding that only comes from being on the frontlines of technology. And these people also possess the humility to admit what they know and don’t know. 

We believe a members-only media company relieves us of the problem of prioritizing the wrong KPIs that often plague modern media companies. We don’t want to be focused on the volume of content, clickbait, or a NASCAR-style approach to ads on a page. This economic model allows us to focus on the value of our mission.

ON_Discourse launch event "A Symphony of Disruption" at Fotografiska in New York City

We will provide content that makes you question and helps inform action. We avoid predictable platitudes, the 100th similar take that’s already been written about the latest hot topic, self-help, or motivational essays. 

Whenever we publish something or host a live discussion, we ask ourselves how the content will give decision-makers, with their busy schedules and mission-critical projects, information they can use to go directly into meetings and negotiations feeling informed and a step ahead.

These are our Values

  • Curiosity to go Deeper: We know that very few things are worthy of your time. We publish high-quality, high-impact work that elevates public discourse and provides our readers with unique insights. Volume is not our focus, value and quality are.
  • Diversity of Perspective: We champion intellectual honesty and the courage to acknowledge our own limitations in the pursuit of knowledge. We help our community challenge conventional thinking.
  • Empathy to Opposing Ideas: It's important that we're not pandering to our members and are open and honest in our approach to the subjects we cover. We should operate without fear or favor in analyzing and criticizing the issues we address. There can often be a "circle the wagons" approach in the technology space that we should avoid being part of. 
  • Disagreement is Encouraged: In order to offer true value, we need to be free of pressure from members or potential partners. We should be respectful and thoughtful in our approach: challenge ideas, not people. We expect that from our contributors as well.

We will deliver value through exclusive access to our digital content and exciting and engaging in-person events and experiences. We will bring together the best minds from the top levels of business and create opportunities for real discourse.

That’s the mission of ON_Discourse.

We’re excited to begin this journey with you.

You can explore our articles through our home page.
If you haven’t yet applied for membership you can do so here.