Media Companies Shouldn’t Reject Generative AI; They Should Build Their Own

Matthieu Mingasson
Head of Design Transformation at Code and Theory
or

Disclosing AI Use in Reporting: It’s Futile

Michael Nunez
Michael Nunez is the editorial director of VentureBeat, where he leads the coverage of artificial intelligence and enterprise data.

The responses from the media industry to the explosion of generative AI have been sharply divided. We’ve seen fear over potential job loss, dire warnings over the potential for AI-generated misinformation, and some tepid statements about its positive potential for the industry.

Smart publishers will realize that there’s enormous potential. By acting quickly and boldly, these companies can find new ways to drive value and monetize this disruptive technology.

Globally, the market for AI in media is projected to reach $99.48 billion by 2030. There’s potential across the newsroom, from improving and scaling workflows to content management and personalization.

It’s easy to view disruptive technologies, like LLMs, with distrust. Many organizations are uncertain about where to begin.

Rather than fear generative AI, media companies should approach it as a springboard for business innovation. Rather than reject generative AI, they should build their own.

By building their own LLMs, media companies can chart their own course in a rapidly changing landscape and help create the future of brand experiences.

Train it yourself

Widely used conversational AI services like ChatGPT and Google’s Bard have an incredible ability to mimic human language. They assemble words using a technique called “word embedding,” which organizes words and sentences by semantic proximity. This technique produces compelling, accurate responses when the subject matter is well represented in the LLM’s training data. But when concepts or information are missing from that corpus, these models fill the gap with fabricated, approximate answers that can be plainly wrong. We call those answers “hallucinations.”
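To make the idea concrete, here is a toy sketch of how word embeddings capture semantic proximity. The three-dimensional vectors are invented for illustration (real models use hundreds or thousands of dimensions), but cosine similarity is the standard way to measure how close two embeddings are:

```python
import math

# Toy 3-dimensional "embeddings" -- the vectors are invented for
# illustration; real models learn them from massive text corpora.
embeddings = {
    "reporter":   [0.90, 0.80, 0.10],
    "journalist": [0.88, 0.82, 0.12],
    "stock":      [0.10, 0.20, 0.95],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: near 1.0 means
    semantically close, near 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Semantically close words score high; unrelated words score low.
print(cosine_similarity(embeddings["reporter"], embeddings["journalist"]))
print(cosine_similarity(embeddings["reporter"], embeddings["stock"]))
```

Because the model only knows geometry, not facts, a prompt about something missing from the training data still lands *somewhere* in this space, which is why the resulting answer can sound plausible while being wrong.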

There are other drawbacks to these models. ChatGPT, for example, isn’t a dependable or exhaustive source. The system is trained on data up to 2021, which means that companies that rely on real-time information, like the media, will be working with outdated information. It is also trained on a range of internet text that can include biased data and misinformation, and filters aren’t yet robust enough to identify inappropriate content.

This is very much a challenge for media organizations that deal in facts, real-time data, and news, and it implies that every single word provided by a GPT must be verified by a human.

So in this context the question is: How can media companies leverage generative AI technology to accelerate content identification, production and distribution, while maintaining competitive advantage against search engines? 

At a high level, news platforms produce a range of content that can be mapped along a spectrum from “pure fact” (weather, stocks, sports results) to “pure stories” (political op-eds, interviews, critiques) and everything in between. Search engines long ago won the battle of distributing pure facts directly to your mobile device, so audiences often don’t even need to visit news websites for that information.

Publishers remain, for now, the true owners of interesting stories, authorship, passionate debate, and opinions. But the new generation of LLM/GPT-based search engines and conversational bots, thanks to their ability to mimic human language, seems to be climbing up the spectrum from pure fact toward “human-sounding” stories.

Thus, a new competitive landscape is emerging in which search engines are no longer limited to delivering weather and stock prices.

In light of this new competitive landscape, media organizations cannot wait for the tectonic shift to happen. They must begin training themselves on generative AI today, even if it’s imperfect, unreliable, and untrustworthy: build muscle with simple AI-powered newsroom workflows, train their teams to use it, get smart about training their own LLMs, and automate content creation and distribution in lower-risk categories, all while keeping search engines at bay.

Using generative AI to customize your content

Like other forms of AI, LLMs can be adapted and customized to suit a specific domain and use case. The media industry is already built around creating and curating content for audiences. Generative AI is simply another tool to facilitate this.

Media companies can develop their own LLMs to augment their brand voice and enhance storytelling. But they have to focus on credibility and the authorship of their content. 

Media organizations need to embrace AI now so they can learn how to swim in shallow water. When the ocean comes, they will be ready.

Existing open-source LLMs already provide an advantage for companies looking to utilize generative AI. A media company can customize an existing foundational model – one where a great deal of development has already been achieved – by training the LLMs on proprietary, internal data in a secure environment. The result is a “fine-tuned” LLM that is purpose-built for the specific use case of the media organization.
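As a rough illustration of the fine-tuning idea only, the sketch below stands in a toy bigram model for a real foundation model (no actual LLM framework is used): “pretraining” happens on generic text, then continued training on proprietary newsroom copy shifts the model’s predictions toward the house domain. All corpora here are invented:

```python
from collections import defaultdict

def train(model, text):
    """Update bigram counts in place -- 'continued training'."""
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1

def most_likely_next(model, word):
    """Return the most frequent follower of `word`, or None."""
    followers = model[word]
    return max(followers, key=followers.get) if followers else None

# 1. "Foundation" model trained on generic web text (toy corpus).
model = defaultdict(lambda: defaultdict(int))
train(model, "the market moved today and the weather was mild today")
base_prediction = most_likely_next(model, "the")  # reflects generic corpus

# 2. "Fine-tune": continue training on proprietary newsroom copy,
#    so predictions shift toward the organization's own domain.
train(model, "the newsroom filed the exclusive report and "
             "the editor approved the exclusive report")
tuned_prediction = most_likely_next(model, "the")

print(base_prediction, "->", tuned_prediction)
```

Real fine-tuning adjusts billions of weights rather than bigram counts, but the principle is the same: the foundation model supplies general language ability, and the proprietary data steers it toward the publisher’s domain.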

Creating Future Brand Experiences with Generative AI

Whether companies are ready or not, the future of the media revolves around generative AI.

Bloomberg is leveraging freely accessible, off-the-shelf AI techniques and applying them to its substantial repository of proprietary data. Bloomberg GPT, as its technology is dubbed, is built using the same foundational technology as OpenAI’s GPT.

The Bloomberg GPT model is trained on non-financial sources from across the web, like YouTube subtitles, combined with 100 billion words from financial datasets the company has accumulated over twenty years.

This addition of Bloomberg’s unique training data improved performance and accuracy for financial tasks to such a degree that the company intends to integrate Bloomberg GPT within various services and features.

Media companies should take inspiration from early LLM and generative AI adopters when approaching their own generative AI strategies.

In this rapidly unfolding landscape, media companies can set a bold standard for industry innovation by iterating upon the same powerful generative AI tools that large enterprise companies are already capitalizing on.

News agencies can use generative AI for data analysis and for developing content according to user preferences and trends. Music production companies can use it for composition and mastering based on a user’s mood or preferred genre. The opportunities for LLMs and generative AI to transform future brand experiences are abundant.

If organizations can navigate the pitfalls of generative AI – like questions around accuracy, bias, trust, authorship, data, and brand experience – they can position themselves for both scale and innovation.

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know

Tony Iliakostas
Adjunct professor of Entertainment Law and IP at New York Law School
Did Tom Brady Really Say That?

During a stand-up comedy routine in late March 2023, former NFL MVP and seven-time Super Bowl champion Tom Brady addressed the question that had probably been on a lot of people’s minds since he finalized his divorce from his ex-wife Gisele Bundchen in October 2022:

The answer is yes, I’m still having sex with supermodels. That’s never going to change.

He went on to talk about his playing career, defecating, World War 3, and much more. But here’s the thing: Tom Brady never said these things.  

Tom Brady’s “comedy routine” is the product of Will Sasso and Chad Kultgen, the hosts of the podcast Dudesy, who used generative artificial intelligence to create a one-hour-long comedy routine that mimics the voice of NFL great Tom Brady.

This comedy routine was available as a perk on Patreon to Dudesy subscribers, but the comedy bit quickly proliferated on the internet. Other podcasts, including the Pat McAfee Show, commented on how life-like and realistic Tom Brady’s comedy routine was.

Unfortunately, this came at a cost: TMZ reported that Sasso and Kultgen received a cease-and-desist letter from Brady’s attorneys alleging that Dudesy infringed on Tom Brady’s personality rights, namely the use of his voice, to create this comedy special.

According to Sasso and Kultgen, the same cease-and-desist letter threatened a personality rights infringement lawsuit and demanded that the fake comedy special be removed from all social media platforms and anywhere the comedy routine was disseminated. Sasso and Kultgen complied.

The Broader IP and Constitutional Implications of Generative AI Use Among Athletes

This isn’t the first time, though, that generative AI has gotten into hot water over an athlete’s personality rights.

In April 2023, the German tabloid magazine Die Aktuelle published an interview with famed F1 racer Michael Schumacher. According to the tabloid, this was the first time Michael Schumacher had publicly spoken to the media since suffering a near-fatal brain injury while skiing in December 2013. In the article, Schumacher commented on the skiing accident that led him to step away from F1 racing. At the conclusion of the article, however, the tabloid disclosed that the interview was, in fact, completely AI-generated. Michael Schumacher’s family was extremely upset, and rightfully so, and planned to pursue legal action against the publication. The editor in chief of the magazine was fired.

Generative AI platforms like ChatGPT, Midjourney, and Dall-E have proliferated at an exponential rate. As of January 2023, OpenAI reported nearly 100 million users had signed up for ChatGPT since its public launch in November 2022. Dall-E reached 1 million users after 2.5 months. According to Statista, the workplace adoption rate of generative AI in the United States averages 28% across Gen X, Millennials, and Gen Z. There’s also no denying how innovative generative AI technology truly is. You can go into ChatGPT and ask the system to write a poem about what a great basketball player Michael Jordan was, and the AI will do all the work for you. AI systems can generate artwork of things we could never conceive of in real life. But while the creative potential of generative AI is clear, there are also plenty of risks and unanswered questions. A great deal of legal ambiguity stems from AI creation and use.


Most legal issues involving generative AI stem from copyright law. In a recent decision, the US Copyright Office made clear that AI-generated works, including AI-generated artwork, will not receive copyright protection because they lack the human authorship that copyright requires. But perhaps one of generative AI’s biggest foes is personality rights law.

Unlike copyright, trademark, patent, trade dress, and trade secret law, personality rights are the only area of intellectual property law not regulated at the federal level. Currently, only 25 states have adopted personality rights legislation, also known as the “right of publicity”; other states follow a common law standard. Regardless, the standard is the same across the board: personality rights involve the use of one’s name, image, and likeness for commercial purposes. If anyone uses someone’s name, image, or likeness commercially without that person’s consent, that qualifies as personality rights infringement. One of the cornerstone personality rights infringement cases is the iconic 9th Circuit decision in Midler v. Ford Motor Co.

In this case, Ford approached Bette Midler and her agent for permission to use her song “Do You Want to Dance” in a commercial campaign for Lincoln Mercury cars. Midler denied permission, so Ford instead hired her backup singer Ula Hedwig to sing the song like Bette Midler. The commercial went live, and Midler sued Ford, arguing that recreating her voice was a direct infringement of her personality rights under the California Celebrities Act. The 9th Circuit sided with Midler, holding that the use of her voice (albeit a recreated version) to solicit car sales infringed her personality rights. The court further added that Midler’s voice is as much a part of her identity as her actual name, image, and likeness.


Bringing this back to generative AI, it’s evident that using AI to recreate one’s personality rights, including one’s voice, can create serious legal issues. Cases like Midler v. Ford serve as a warning that using AI to imitate a celebrity or athlete is infringement-worthy activity if the material is used for commercial purposes, like an advertisement. And then you have situations like the Brady AI comedy routine, where Tom Brady’s voice was recreated to make crude jokes. While some legal critics may argue that such activity infringes on Brady’s personality rights, a majority, myself included, would argue that it is freedom of expression protected under the First Amendment. Parodying a public figure is something the US Supreme Court has regarded as fully protected under the Constitution. Essentially, Dudesy’s AI-generated Tom Brady comedy routine is no different from SNL parodying Donald Trump or Joe Biden when the whole point is to parody their personas. That type of commentary is an entirely protected interest under the First Amendment.

But what about Michael Schumacher’s fake interview with a German tabloid magazine? I would lean toward saying that, in the United States, such behavior is defamatory. Defamation occurs when someone makes a false statement about someone else that damages their reputation; public figures carry the additional burden of proving actual malice, meaning the offender knew the statement was false or made it with reckless disregard for the truth. While the Schumacher interview would be governed by German law, it would be hard to argue that such behavior would not meet the threshold of defamation here in the United States. Imagine if a newspaper or news outlet framed an AI-generated interview with an athlete as a real one that paints that athlete in a bad light. Not only is there an ethical dilemma, but such behavior could trigger a cause of action for defamation.



At the same time, I think the innovation of generative AI opens the door wide to creativity and new ideas. As it pertains to the sports industry, generative AI presents a plethora of opportunities. However, there are more questions than answers concerning artificial intelligence and its place in the legal field, and the lack of regulation at the local and federal levels doesn’t help matters. At the end of the day, we must regard the AI landscape as the Wild West: poorly regulated terrain, but one where prior case law and existing statutory language give us a roadmap for regulating this budding industry. In the meantime, athletes should be cautious about working with any brands or entities using generative AI and should ensure that they are legally protected by way of a formal contract.

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know


An NBA Player Surveys the AI Opportunity

Spencer Dinwiddie
NBA Athlete and Entrepreneur

The first line of my bio reads "NBA player," but I always tell people I’m a tech guy with a jumper. My fascination with emerging technology was the prime motivating factor for co-founding Calaxy, a new platform empowering Creators to connect with their fans and monetize their brands more easily than ever before.

That fascination with new tech is also why I jumped at the opportunity to invest in Genies and explore the worlds of AI, blockchain, and virtual and augmented realities.

Recently I fell down a rabbit hole thinking about the potential of AI-generated avatars and conversational bots, and how they might help revolutionize fan interaction.

Before this technological era, a person couldn’t be in two places at once or appear to come back from the dead. But this technology — which seemed so novel and impossible to fathom — is now right at our fingertips.

The application of AI-generated avatars seems obvious now. It could allow me, or anyone else with so many demands on their time that they have to say no to things they’d like to be doing, to work in new ways. I could take video footage, photos of myself, words I’ve authored, and other content I’ve generated, and use these materials to help train AI models in the hopes that they can extrapolate my personality and then appear and interact on my behalf, sort of like a digital clone that works for me.


Anyone can see the benefits of this. People who want to interact with me but otherwise wouldn’t have that access are suddenly able to. And from my end, I’m able to say yes to things that otherwise wouldn’t have fit into my schedule.

It’s a game-changer.

And because AI models can evolve, digital representations of my personality could learn and grow over time.

This digital version of me would also be more knowledgeable about the world because it would be digesting information at a rate impossible for humans to keep up with. So it would pretty quickly become a smarter and wittier version of myself, if that’s something you can imagine.

But, AI-generated avatars could have some potentially terrifying results as well. With this tech, deep fakes are a real concern, especially as the line between what’s real and what isn’t gets blurrier and blurrier over time. And what happens if my AI-generated avatar is interacting with a fan and says something horrible? The backlash for those words or actions will land at my feet and impact my brand, not some AI-generated version of me. The avatar doesn’t have a reputation to worry about, but I do.



There is some suggestion that blockchains and other immutable technologies could potentially help mitigate some of these concerns because they can ensure credibility and verify the authenticity of content, but we’re not going to be able to put the genie back in the bottle, no pun intended.

Another challenge is the lack of regulatory clarity. AI is advancing so quickly that it’s tough to keep up with the rules. We need to figure out how to use this technology responsibly and ethically. It’s a learning process, and we have to be careful not to cross any boundaries that we’ll regret later.

And let’s not forget the bigger picture. AI, if not properly controlled, could become a serious threat to humanity. We’ve all seen the Terminator movies and I’m not trying to be John Connor, so we’re going to need to be cautious and establish safeguards to prevent any unintended consequences. We don’t want to unleash something that we can’t handle.

We just have to remember that we’re still in the early stages of generative AI. If this were a basketball game, there’d be 20 seconds left on the shot clock of the first possession of the first quarter. There’s a ton of game left to play and we’re just scratching the surface of what this technology will ultimately do. It’s an exciting time, but we need to approach it with an open mind. We’ll learn as we go, and it’s important to keep evaluating, adapting, and having conversations about the best ways to use this technology…like enabling me to be swimming at the beach in Cannes and on-stage delivering a keynote address simultaneously.

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know

Is AI a Goldmine or a Landmine For Athlete Brands?

Toby Daniels
Co-founder, ON_Discourse


You're a sports fan in 2028.

You log into your fitness app and a chat pops up -- it’s an AI workout assistant that looks and talks like your favorite NBA star. You input your training goals for this week and get back a personalized diet and training plan based on your needs.

You pop on your VR headset and fire up Madden 2028 -- you scroll through a roster of thousands of hyper-realistic AI-generated players from throughout time and create your team. You head onto the field.


Later, you turn on SportsCenter and watch as Stephen A. Smith debates an AI-generated young Charles Barkley. They’re arguing over who’s wearing the sharper suit, which you can buy using your Apple Cash directly from your headset.

AI – with its seemingly endless ability to create, analyze, and mimic – is transforming industries at a breakneck pace. Athletes are uniquely positioned to capitalize on this tech to reimagine sports branding. 

Athletes, bound by the constraints of time and resources, now have the potential to leverage their likeness and scale their brands in innovative ways to engage fans.

AI also poses considerable risk. It’s uncharted territory with pitfalls that range from unauthorized deepfakes to AI-generated communication that fans find inauthentic.

This new age might be a goldmine for athletes, but they’ll have to avoid the landmines first.

AI’s potential role in enhancing personal branding in sports cannot be overstated.

Already, AI-driven analytics can synthesize vast amounts of data about fans' behaviors and preferences, allowing athletes to tailor their brands to better target individuals. 

Personalized fan experiences, from AI-curated content to virtual meet-and-greets, are poised to redefine the fan-athlete dynamic, creating a stronger and more direct connection. Generative AI models could be trained to replicate an athlete’s voice and tone. These models could then be used to create unique content in the style of the athlete, which could then be targeted at fans who are most interested in the topic. 
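A minimal sketch of the targeting idea described above, with invented fan profiles and topic tags standing in for the behavioral analytics a real system would rely on:

```python
# Toy sketch of preference-based targeting: match pieces of
# athlete-branded content to the fans most interested in their topics.
# The fan profiles, interest tags, and content items below are all
# invented for illustration.
fans = {
    "fan_a": {"training", "nutrition"},
    "fan_b": {"fashion", "business"},
    "fan_c": {"training", "business"},
}

content = [
    {"title": "Off-season workout diary", "topics": {"training"}},
    {"title": "Building my clothing line", "topics": {"fashion", "business"}},
]

def target_audience(item, fans):
    """Return the fans whose interests overlap the item's topics."""
    return sorted(f for f, interests in fans.items()
                  if interests & item["topics"])

for item in content:
    print(item["title"], "->", target_audience(item, fans))
```

In practice the interest sets would be inferred from engagement data, and the content itself might be generated in the athlete’s voice, but the matching step is this simple at its core: intersect what a piece is about with what each fan cares about.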

The economies of scale enabled by automation, including aspects of content creation, also enable athletes with smaller followings to boost their brands and reach more fans. This shifts the power dynamics within the sports industry, placing control back into the hands of the athletes.

But these tools are not without risk; there is also the potential for AI to erode the value of an athlete’s brand. Automated content, while efficient, lacks the genuine human touch and authenticity that fans often seek. The uniformity resulting from AI processes might also lead to diluted brand identity, reducing differentiation and competitive edge.

Equally, the potential for misuse is significant.

Deepfakes, AI-manipulated images or videos that often appear authentic, present a risk even today. As the technology improves – and the ease of creating convincing synthetic media rapidly increases – public figures will have to reckon with the potential consequences of false narratives being planted by fake versions of themselves. The current legal framework, predominantly designed for a pre-AI era, struggles to tackle these novel challenges.

Image rights, the linchpin of an athlete’s brand, face an unprecedented threat with the advent of AI. The ability of AI to create lifelike digital personas of athletes, and use them in a myriad of contexts, raises complex issues surrounding consent and ownership. An entirely new framework for licensing athlete likenesses – and for objecting to the use of unlicensed, AI-generated likenesses – is needed. 

The Threat to Brand Ownership and the Need for Regulation

Contracts and the legal framework need to evolve to address the challenges posed by AI, protecting athletes' image rights and preventing misuse. Transparency and ethical considerations must guide the deployment of AI in sports branding, ensuring it enhances rather than detracts from the athlete’s brand value.

The emerging age of AI offers a wealth of opportunity and a chance to redefine the athlete-fan relationship. In the delicate balance between scaling and protecting an athlete’s brand, AI represents both a goldmine and a landmine. As we chart this new terrain, the challenge lies in unlocking the promise of AI while safeguarding against its perils.

Are these fears going to stifle creativity, innovation, and commercial growth? Humans have proven themselves eager to jump at hyped, if unproven, tech that promises financial gain. Just look at crypto! Will athletes fall into the same trap and pivot too quickly to using AI to redefine their personal brands? Maybe? Probably.

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know


Do You Want to Change Your MIND?

Dan Gardner
Co-Founder & Exec Chair, Code and Theory
Co-Founder, ON_Discourse

When was the last time you felt like you had your mind changed?

When did you last engage in a discussion with someone you respect, someone with the experience to speak knowledgeably about a topic, who held a completely different point of view from your own?

Imagine a space where you would find yourself thinking differently about the most important decisions you make.

As the founder and chairman of a creative and technology-driven consulting business with almost 2,000 employees, I think creating space to be challenged and to be able to challenge others' perspectives is essential to business success.

So why a members-only media company?

We believe “discourse” has become a bad word in media. In practice, it’s been used as a form of trolling or to express negativity about someone’s opinion, instead of embracing curiosity about a different opinion and the understanding that can lead to more informed decisions. A trusted space to publish and engage in conversation with others who value discourse is exactly what business leaders need.

We believe that the world of business and technology today is filled with fake experts, often confusing the narrative and, ultimately, decision-making. At ON_Discourse, we bring together practitioners to share perspectives about the work they do day in and day out. They share an understanding that only comes from being on the frontlines of technology. And these people also possess the humility to admit what they know and don’t know. 

We believe a members-only media company relieves us of the problem of prioritizing the wrong KPIs, a problem that plagues modern media companies. We don’t want to be focused on volume of content, clickbait, or a NASCAR-style approach to ads on a page. This economic model allows us to focus on the value of our mission.

ON_Discourse launch event "A Symphony of Disruption" at Fotografiska in New York City

We will provide content that makes you question and helps inform action. We avoid predictable platitudes, the 100th similar take that’s already been written about the latest hot topic, self-help, or motivational essays. 

Whenever we publish something or host a live discussion, we ask ourselves how the content will provide decision-makers, with busy schedules and mission-critical projects, with information that they can use to go directly into meetings and negotiations feeling informed and a step ahead.

These are our Values

  • Curiosity to go Deeper: We know that very few things are worthy of your time. We publish high-quality, high-impact work that elevates public discourse and provides our readers with unique insights. Volume is not our focus; value and quality are.
  • Diversity of Perspective: We champion intellectual honesty and the courage to acknowledge our own limitations in the pursuit of knowledge. We help our community challenge conventional thinking.
  • Empathy to Opposing Ideas: It's important that we're not pandering to our members and are open and honest in our approach to the subjects we cover. We should operate without fear or favor in analyzing and criticizing the issues we address. There can often be a "circle the wagons" approach in the technology space that we should avoid being part of. 
  • Disagreement is Encouraged: In order to offer true value, we need to be free of pressure from members or potential partners. We should be respectful and thoughtful in our approach. Challenge ideas, not people specifically. We expect that from our contributors as well.

We will deliver value through exclusive access to our digital content and exciting and engaging in-person events and experiences. We will bring together the best minds from the top levels of business and create opportunities for real discourse.

That’s the mission of ON_Discourse.

We’re excited to begin this journey with you.

You can explore our articles through our home page.
If you haven’t yet applied for membership you can do so here.

Sorry, Everybody Can't Be a Director

Dan Gardner
Co-Founder & Exec Chair, Code and Theory
Co-Founder, ON_Discourse

or

The Shift from Knowledge Work to Direction Work

Toby Daniels
Founder, ON_Discourse; former Chief Innovation Officer, Adweek; Founder and Chair, Social Media Week

INT. MODERN CONDO - MORNING

It’s 2027, but it looks like today. ANTHONY gets out of bed and sits in front of his COMPUTER–

This is no ordinary computer; there is no interface, no folders, no Zoom calls, and no meetings needed. He’s staring at a single text field called ‘the prompt box.’ He says to himself –

“Get to work”

Anthony starts to submit directional commands. Prompt after prompt. He gets a sense of fulfillment knowing the robot will carry out his every command.

THEN the Camera pulls out of the window of the 

EXT. CONDO BUILDING

Moving further and further away, we reveal even MORE CONDO BUILDINGS. Inside, we see OTHER PEOPLE sitting in front of their computers with the prompt box.

The Camera flies back into

INT. MODERN CONDO

The camera flies all the way inside to a close-up of Anthony’s face. He turns to look into the next room. His wife PAM is sitting at her prompt box doing the exact same thing. It looks like she’s talking to herself.

ANTHONY

Pam! Luckily we changed our majors in college back in 2023. We are now so prepared for this prompt box. Johnny and Michelle must be screwed!

Anthony commands his BARD audio assistant via a voice application.

ANTHONY

Hey Bard, play today’s interesting business stories.

We hear a familiar voice… It sounds like SNOOP DOGG.

SNOOP DOGG

New study says…

But we quickly realize that this is not Snoop, but rather a DEEP FAKE –

SNOOP DOGG

…millions of jobs were saved by the early predictions of jobs shifting from “knowledge workers” to “direction workers”

Another deep fake voice chimes in. This time it’s Al Pacino –

AL PACINO

Yes, it’s amazing how knowledge is not important anymore. But unfortunately, companies still can’t hire enough direction workers, and it’s causing salaries to increase at a rapid pace.

The rapid ascent of generative AI, automation, and technologies that boost creative output has caused speculation and fear that we’re on the precipice of a massive industry shift away from knowledge workers. The breakneck pace of change has young people wondering how to best prepare themselves for an unpredictable world – Should I study computer science? Is coding obsolete? – and employers grasping for how to hire for the skillsets of the future.

My answer is: don't be so dramatic.

The rate of advancement in generative AI is so extreme that we are all trying to understand, in real time, the implications and guess what the ripple effects might be. It’s given everyone in every single industry collective whiplash.

And it’s resulted in over-the-top projections and calls for overcorrections, like a total shift in how we approach educating the future workforce and hiring for the skills necessary for success. But both traditional knowledge workers and “direction workers,” those who direct and instruct the technologies of the future, will always be necessary. Creativity and success aren’t possible without both.

To clarify what I mean by “direction workers,” I’m referring to managers who not only lead teams but also primarily direct or instruct humans or technologies in various, often creative, outputs and tasks. If creative work is nearly wholly produced by generative AI, the story goes, those creative “doer” jobs will disappear in favor of jobs directing the tech.

Video Killed the Radio Star

The Buggles

But before we write off creativity and knowledge workers as superfluous, remember this is not our first industrial revolution — in fact, it’s one of many. So it’s essential to have some perspective on where we’ve been, where we are presently, and where we’re heading as a society. In every instance in the past where we saw dramatic changes in manufacturing, technology, and behaviors, one thing remained constant: passion, drive, creativity, and knowledge are the elements that drive innovation.

There are still thriving musical artists despite the advent of MTV. And music artists AND music videos are still thriving despite the advent of YouTube. Then there were still those artists, and record labels, despite the advent of MP3s, and so on. Did job types reshuffle because of new processes, business models, and distribution models? Of course. But the industry didn’t vanish – it just evolved. Old jobs were gone, and new jobs were created.

Education is what remains after one has forgotten what one has learned in school.

Albert Einstein

It is not a new concept to suggest our educational system is outdated. The emergence of AI may have shone a spotlight on this fact, but it sure isn’t the cause. We have had basically the same classroom teaching style for the last century, despite all that has changed around us.

The traditional education system has long emphasized memorization and rote learning, which is ineffective in a world where information is readily available at our fingertips. Students are often taught to regurgitate facts rather than develop critical thinking skills, problem-solving abilities, and adaptability — all of which are crucial no matter which discipline you choose. 

An updated education system should embrace interdisciplinary approaches, encouraging students to explore the intersections of different fields. This will enable them to connect the dots and develop a holistic understanding of complex issues, fostering innovation and adaptability regardless of where AI takes us. Hypothesizing about where AI is going in order to inform your educational choices today is ridiculous and should not be the point of education.

A small reminder that if you’re 40-something years old or older, you somehow went through all of high school without a computer as a primary focus, and even in your college days, the computer offerings were likely rudimentary at best. And yet many have gone on to have long-standing careers focused on the web, mobile computing, and social media. How was that possible if they didn’t teach it in the ‘90s?

The greater danger for most of us lies not in setting our aim too high and falling short, but in setting our aim too low and achieving our mark.

Michelangelo

Another issue with the push for AI and automation tech is that so many voices chiming in on this topic are overly fixated on the merits of staff reduction based on the output possible from artificial intelligence. Voices are asking, “Why does my business need ten people when I can have five people, or maybe better, just one person doing the job of the whole department?”

But if you and your competition are so intent on reduction, where does differentiation start? It doesn’t matter whether you majored in art or in engineering — we’re all aware that multiplication builds, while subtraction takes away. 

So if one company is multiplying and the other is subtracting, who may have the better outcome? What are companies adding to the discussion, the organization, and the culture at large, when they’re so focused on reduction? 

We are at a pivotal moment right now. That is undeniable. 

Everybody, across every level of an organization, is going to work with AI in the future, the same way the computer is essential today. Yes, some jobs will go away, while new ones are created. However, the types of careers that will emerge as a result of AI will differ from the ones that are being displaced. 


Nevertheless, occupations that rely on human skills such as problem-solving, creativity, and empathy are less susceptible to being replaced by machines in the immediate future. As AI continues to advance, these roles will also probably experience some impact. The encouraging aspect is that AI will augment, and potentially multiply the value and output of, these occupations, propelling problem-solving, creativity, and empathy to unprecedented levels and generating fresh opportunities like never before.

If you take the implication of direction work to its most extreme conclusion, you can imagine the story above playing out: tens of millions of white-collar workers all sitting in front of their computers, much like today, except each computer has just that one field. And the direction worker’s job is just to keep telling some artificial intelligence what to do: Now, computer, write this. Now, computer, create this, on and on. That’s a preposterous idea, because there’s simply not enough direction that will be needed for that to happen.

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know

ON_AI

AI

Doomerism

is a

Business

Tactic

Anthony DeRosa
Head of Content and Product,
ON_Discourse


Recent warnings from tech leaders that AI could lead to the extinction of humanity have raised eyebrows and reminded me of rhetoric that supports the US military-industrial complex. The apocalyptic scenarios put forth by AI industry titans like Sam Altman may be giving birth to an “AI-industrial complex,” driven by hyperbole rather than a rational evaluation of AI’s potential destructive power.

A Familiar Narrative

The parallels between the AI extinction talk and the fear-mongering that powers the military-industrial complex are striking. The military-industrial complex refers to the close relationship between the defense industry, the military, and the government. It suggests that these entities have a vested interest in perpetuating a state of war or the fear of external threats in order to maintain their power and profitability. By exaggerating or fabricating dangers, those in the military or defense industry can justify increased military spending, weapon development, and the expansion of military influence.

Similarly, fear-mongering about the risks of AI may amplify or embellish the potential danger of the technology. While it is important to acknowledge and address the ethical and safety concerns surrounding AI, over-the-top speculation can lead to exaggerated narratives that overshadow its potential benefits and hinder progress in the field. It can also shape public opinion and policy decisions, potentially resulting in restrictive regulations or unnecessary limitations on AI development.

The Emergence of an “AI-Industrial Complex”

The concept of an AI-industrial complex refers to the amalgamation of influential entities, including corporations, government agencies, and media outlets, that profit from fear and exaggeration surrounding AI’s potential dangers. This complex capitalizes on the public’s fascination with doomsday scenarios and, ironically, fuels the demand for AI-related products, services, and research by depicting these technologies as all-powerful.

The calls for regulation in the face of this supposedly extinction-level threat are hollow. Regulatory oversight often favors incumbents who can leverage their money and power to push for oversight that is more favorable to them, so it makes sense that many giants in the AI space are now calling for legislators to take a closer look. 

It’s imperative to question the motivations behind such rhetoric and consider whether it serves the best interests of society or simply acts as a vehicle for self-interest.

Fostering a Balanced Dialogue

The dangers of AI should not be disregarded, as responsible discussions around ethics, privacy, job displacement, and algorithmic bias are crucial. However, it is equally important to maintain a balanced dialogue that separates legitimate concerns from alarmist speculation. Painting all AI advancements with a broad brush of impending doom stifles innovation and instills unnecessary fear in the public.

To avoid falling into the trap of an AI-industrial complex, we must encourage critical thinking, evidence-based analysis, and multidisciplinary collaborations. Thought leaders, policymakers, and the media should prioritize objective assessments of AI’s risks and benefits.

Drawing parallels between the AI extinction talk and the military-industrial complex should serve as a reminder to exercise caution and skepticism in the face of hyperbolic scenarios. In both cases, there is a potential for vested interests to exploit and manipulate public fear for their own gain. The military-industrial complex thrives on the perpetuation of fear to maintain its influence, while fear-mongering about AI risks can serve the interests of individuals or organizations seeking to control or shape the development of AI technologies and their regulation.

By drawing this parallel, we can recognize the potential for fear-based narratives to shape public opinion and policy decisions in both the military-industrial complex and the AI domain. It highlights the importance of critical thinking, transparency, and ethical considerations in navigating these complex issues.

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know

MORE
ON_AI

Michael Nuñez
Editorial director of VentureBeat
Disclosing the use of AI in reporting:
It's Futile

Across newsrooms, journalists are grappling with an unstoppable force: generative AI tools that are quickly reshaping how stories are reported, written, and edited.

In an editorial memo to our staff, I recently shared my vision for using generative AI as a creative partner, not as a replacement for human judgment. But much is at stake amid this enormous swell of innovation.

Tools like ChatGPT, Bing Chat, and Google Bard represent a massive paradigm shift for our profession. These large language models, trained on humanity’s cumulative knowledge, have the potential to dramatically expand journalists' storytelling abilities and productivity.

But as we embrace these innovations, we also face a dilemma: How do we balance the benefits of AI with the core values of journalism? How do we ensure that our stories are accurate, fair, and ethical, while also taking advantage of the speed and efficiency of AI? How do we preserve the human element of journalism—the empathy and judgment that make our stories meaningful and trustworthy—while also leveraging the power and potential of AI?

These are some of the questions that we at VentureBeat are wrestling with every single day. As one of the leading authorities in AI news coverage, we are at the forefront of exploring and experimenting with generative AI technologies in journalism. We are also committed to sharing our insights and best practices with our peers and our audience.

At VentureBeat, we’ve created several guardrails for the use of AI in the newsroom. We allow our editors and reporters to use tools like ChatGPT, Claude, and Bing Chat to suggest article topics, story angles, and facts — but journalists are required to verify all information through traditional research. We do not publish articles solely written by AI models.

We’ve found that artificial intelligence can be regularly used in a few mission-critical ways:

  • Story ideation: We often use tools like ChatGPT, Claude, and Bing Chat to brainstorm potential topics and angles for our stories, based on our areas of coverage and current trends. We also ask these tools to generate summaries, outlines, or questions for our stories, which help us structure our research and writing.
  • Headlines: We use tools like ChatGPT, Claude, and Bing Chat to generate attention-grabbing and informative headlines for our stories, based on the main takeaways and keywords. We also ask language models to suggest alternative headlines or variations, which help us optimize our headlines for SEO and social media.
  • Copyediting: We use tools like ChatGPT, Claude, and Bing Chat to proofread and edit first drafts of our articles, checking for grammar, spelling, style, and tone. We ask language models to rewrite sentences, paragraphs, or sections of our articles, to improve clarity, coherence, or creativity. Our human editors always review, edit and fact-check VentureBeat stories before publishing.
  • Research: We use AI to assist us with our research, by providing relevant facts, sources, quotes, or data for our stories. We also ask these tools to summarize or analyze information from various sources, such as web pages, reports, or social media.

By now, you might be wondering how we handle disclosures.

At VentureBeat, our policy is to disclose the use of AI only when it is relevant to the story or the reader’s understanding. Otherwise, we treat AI as any other tool that we use in our daily work, such as Microsoft Word, Google Search, or the grammar and word-choice suggestions made by Grammarly. We do not believe that singling out one tool over another adds any value for our readers. What matters are the core values of accuracy, objectivity, and ethical use of information that guide our craft — values that we uphold rigorously regardless of the technologies involved.

There is no one-size-fits-all solution for how to use AI in journalism. Each news organization has its own mission, vision, and standards that will guide its editorial decisions; each journalist has his or her own style, voice, and perspective that inform their storytelling; and each story has its own context, purpose, and audience that determine its format and tone.

But there are some principles that have been foundational to great journalism for centuries, and that will continue to be relevant in the age of AI. These principles can help us strike a balance between innovation and tradition and between automation and humanization.


One principle is always reporting the truth. Our journalistic standards and values must be enhanced, not compromised by AI. 

That means that our stories must be factual, fair, and balanced. Any data used in a story — whether it’s surfaced by Google Search, GPT-4, or any other software available — must be verifiable.

The use of large language models does raise some thorny questions about the data that powers the AI tools we’re using. As a journalist, I would, of course, like to know how the models are trained, what they can and cannot do, and what risks they pose.

We believe greater transparency into training datasets will be important as we forge ahead and confront this massive shift in the industry. Transparency in the datasets used to power LLMs will help us to understand how the models work, what assumptions they make, and what limitations they have. It will also help us monitor and audit the performance of models, to identify and correct any errors or biases, and ensure accountability and trust. We believe transparency in LLM research will be one of the defining issues of the year — and have covered it at length.

Another defining principle of how we use AI in the newsroom is accuracy.

We need to get the facts right. Without rigorous fact-checking of every detail, misinformation spreads. That means being careful and ethical about how and why we use AI in our reporting. Again, we do not rely on AI blindly or uncritically, but rather use it as a tool to augment our human skills and judgment.

For example, we recently wrote a story about how Amazon inadvertently revealed its plans to create an AI-powered chatbot for its online store. When we used ChatGPT to generate headlines for our story, it suggested inaccurate options that confused ChatGPT with a “hallucinated” (i.e. fabricated) conversational model that Amazon is trying to build in-house. It was just one of many examples we see on a daily basis that illustrates the need for human oversight and fact-checking when using such tools. Our final headline was written by a human: Amazon job listings hint at ChatGPT-like conversational AI for its online store.

There are many other instances where reliance on AI should be limited or avoided, such as in the interpretation of complex, nuanced information. While AI-generated summaries can help journalists quickly assess vast amounts of data, the technology may inadvertently omit critical details or misrepresent the original intent of the source material. In such cases, an experienced editor or reporter is necessary. We insist that all of our reporters review the source material for any story they write. We view this as part of the rigorous fact-checking required in modern-day journalism.

Our goal will never be to supplant or diminish our reporters or editors using AI — they’re the ones who make the stories happen. They are the heart and soul of our work. They go deep, challenge the status quo, and expose the truth. They give our readers the facts they need in order to make smarter decisions. Our journalists’ insight, perspective, and analysis is what makes them indispensable. We need them to use their judgment and bring that humanity to each of their stories in order to make us a leading source of news.

Finally, we are committed to accountability.

While AI can produce content at a massive scale, only humans can ensure quality. Articles produced by artificial intelligence are a draft, not finished work. Consumers rightly expect the media to get the details correct. We believe we will drive traffic and business with trust, not shortcuts. 

We treat text generated by AI as we have always treated news copy produced by humans. It is subject to rigorous editing and fact-checking before publication. We make corrections promptly and transparently when errors occur. If a story warrants disclosure of our methods to provide proper context, we will do so to maintain transparency. But the default is that we do not disclose the use of AI any more than we disclose the brand of word processing software upon which a story is written. What matters is the end result — a fair, unbiased, and truthful report. While others get lost in “AI-assisted” details, we continue to focus on breaking news.

While generative AI is an unstoppable force in many industries, how it transforms journalism over the next decade is up to those of us at the forefront — not legacy media, which will surely be slow to adopt. Those who embrace artificial intelligence wisely by supplementing reporters, not replacing them, will unlock new possibilities. Those who fall prey to AI’s hype risk dehumanizing and degrading their newsrooms.

The future of journalism lies in human empathy. Generative AI promises a productivity revolution, but journalists must steer that transformation in service of our profession’s higher purpose: delivering truthful, impactful stories that serve the public interest.

Our role has never been more crucial amid a flood of misinformation. Now we have an opportunity to shape AI to enhance, not erode, what makes journalism essential through this era of disruption. Doing so will require constant vigilance, ethical innovation, and a commitment to the values that undergird this work. The balance we strike will reverberate for decades. History watches our next move.

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know

VALUABLE CONNECTIONS
real discourse
PROVOCATIVE PERSPECTIVES

What?


Welcome to ON_Discourse, a new membership media company focused on the business of technology, raising the level of discourse with expert-driven perspectives.

Why are

you here?

We are Surrounded

by Fake-experts

Lacking Depth

in their Thinking

Ideas are

Trapped within

Conventional boundaries

Unintended

Consequences

in Tech Lead to

Costly Mistakes

in Business

The People

in our Industry

Often Sound

Look

Think

the Same

What to

expect:

The number one value we provide is new perspectives.

To see different points of view. To have the ability to disagree productively. To understand the depth of emerging technology and its potential impact on business across all verticals, such as entertainment, finance, marketing, CPG, health, and climate.

To hopefully save you from bad decisions and help you recognize new opportunities.

And as a bonus, meet new people, and read some stuff along the way.

Member

               Benefits

For Executive

Industry Leaders

and Entrepreneurs

Attend ON_Partner Dinners

Access to gated live tapings of podcast

Receive ON_Discourse Newsletter

Access to Private Networking

Ability to Submit Ideas & Contribute

Attend ON_Co Events (Cannes, Money 20/20, etc.)

Attend ON_Virtual Events & Discussions

BLADE Airport + pass & discounted flights

Upcoming

Events

ON_Sports

at Cannes Sport Beach

June 19-23, 2023
La Mandala Beach, Cannes

ON_Sports at Cannes Sport Beach, in partnership with Stagwell, will comprise four days of discourse-driven programming and a private, members-only dinner for C-suite leaders and athletes.

ON_Entertainment

in the Hamptons

August 8-9, 2023
The Hamptons, NY

ON_Hamptons will feature a members-only reception, a private luncheon and an afternoon of discussions, featuring icons, entrepreneurs and investors in business, media, tech and entertainment.

ON_Finance

at Money 20/20

October 22-25
Las Vegas, NV

ON_Finance takes place during Money 20/20 in Las Vegas and will bring together leaders in finance, commerce, payments, and financial services innovation.

Join

the

Discourse

Stay

Tuned

Our team is in the process of launching a Slack channel where members can engage in live discourse around published content.

FAQ:

Q: Can we refer other members to ON_Discourse?

A: Absolutely! We encourage you to refer other members to ON_Discourse. Simply share our application link with anyone you would like us to consider, and be sure they include your name in the “Please include the name of a referring ON_ member” section.

Q: How and when can I access the ON_Discourse platform?

A: Accessing ON_Discourse is easy, and it officially opens on June 5th! Simply visit our website, ondiscourse.com, and log in with the username and password you first created, and you will be able to access our exclusive content! Please note that logins and passwords cannot be shared.

Q: Do you offer corporate pricing?

A: Yes, we do offer corporate pricing for companies interested in subscribing to ON_Discourse. Please reach out to dharika@ondiscourse.com to discuss your specific needs and we’ll be happy to provide you with more information.

Q: How often are new articles or content published on your platform?

A: We strive to provide fresh and engaging content regularly; however, we prioritize quality over quantity. New articles, newsletters, and podcasts are published multiple times per month, ensuring that you stay up to date with the latest trends and various perspectives in the tech industry.

Q: Can I contribute my own articles or content to the platform?

A: We welcome contributions from our community members. If you have a unique perspective or interesting content you’d like to share, we encourage you to reach out to our editorial team at anthony@ondiscourse.com with your ideas. We would be happy to review your submission and provide feedback. However, please note that while we appreciate your contributions, publication on our platform is not guaranteed. Our editorial team carefully selects content that aligns with our mission.

Meet

The Founders

Toby Daniels
Co-Founder, ON_Discourse
toby@ondiscourse.com
Dan Gardner
Chairman, Code and Theory, Co-Founder, ON_Discourse
Michael Treff
CEO, Code and Theory, Co-Founder, ON_Discourse
Larry Muller
COO, Code and Theory,
Co-Founder, ON_Discourse

The Team

Kelly Bourdet
Executive Editor, kelly@ondiscourse.com
Dharika Shah
Operations Manager, dharika@ondiscourse.com
Anthony DeRosa
Head of Content & Product, anthony@ondiscourse.com

Thank

You

Your registration was successful. You now have full access to ON_Discourse.