Is AI a Goldmine or a Landmine for Athletes?

Toby Daniels
Co-founder, ON_Discourse

You’re a sports fan in 2028.

You log into your fitness app and a chat pops up — it’s an AI workout assistant that looks and talks like your favorite NBA star. You input your training goals for this week and get back a personalized diet and training plan based on your needs.

You pop on your VR headset and fire up Madden 2028 — you scroll through a roster of thousands of hyper-realistic AI-generated players from throughout time and create your team. You head onto the field.

Later, you turn on SportsCenter and watch as Stephen A. Smith debates an AI-generated young Charles Barkley. They’re arguing over who’s wearing the sharper suit, which you can buy using your Apple Cash directly from your headset.

AI – with its seemingly endless ability to create, analyze, and mimic – is transforming industries at a breakneck pace. Athletes are uniquely positioned to capitalize on this tech to reimagine sports branding. 

Athletes, bound by the constraints of time and resources, now have the potential to leverage their likeness and scale their brands in innovative ways to engage fans.

AI also poses considerable risk. It’s uncharted territory with pitfalls that range from unauthorized deepfakes to AI-generated communication that fans find inauthentic.

This new age might be a goldmine for athletes, but they’ll have to avoid the landmines first.

AI’s potential role in enhancing personal branding in sports cannot be overstated.

Already, AI-driven analytics can synthesize vast amounts of data about fans’ behaviors and preferences, allowing athletes to tailor their brands to better target individuals. 

Personalized fan experiences, from AI-curated content to virtual meet-and-greets, are poised to redefine the fan-athlete dynamic, creating a stronger and more direct connection. Generative AI models could be trained to replicate an athlete’s voice and tone. These models could then be used to create unique content in the style of the athlete, which could then be targeted at fans who are most interested in the topic. 

The economies of scale enabled by automation, including aspects of content creation, also enable athletes with smaller followings to boost their brands and reach more fans. This shifts the power dynamics within the sports industry, placing control back into the hands of the athletes.

But these tools are not without risk; there is also the potential for AI to erode the value of an athlete’s brand. Automated content, while efficient, lacks the genuine human touch and authenticity that fans often seek. The uniformity resulting from AI processes might also lead to diluted brand identity, reducing differentiation and competitive edge.

Equally, the potential for misuse is significant.

Deepfakes, AI-manipulated images or videos that often appear authentic, present a risk even today. As the technology improves – and the ease of creating convincing synthetic media rapidly increases – public figures will have to reckon with the potential consequences of false narratives being planted by fake versions of themselves. The current legal framework, predominantly designed for a pre-AI era, struggles to tackle these novel challenges.

Image rights, the linchpin of an athlete’s brand, face an unprecedented threat with the advent of AI. The ability of AI to create lifelike digital personas of athletes, and use them in a myriad of contexts, raises complex issues surrounding consent and ownership. An entirely new framework for licensing athlete likenesses – and for objecting to the use of unlicensed, AI-generated likenesses – is needed. 

The Threat to Brand Ownership and Authenticity

Need for Regulation

Contracts and the legal framework need to evolve to address the challenges posed by AI, protecting athletes’ image rights and preventing misuse. Transparency and ethical considerations must guide the deployment of AI in sports branding, ensuring it enhances rather than detracts from the athlete’s brand value.

The emerging age of AI offers a wealth of opportunity and a chance to redefine the athlete-fan relationship. In the delicate balance between scaling and protecting an athlete’s brand, AI represents both a goldmine and a landmine. As we chart this new terrain, the challenge lies in unlocking the promise of AI while safeguarding against its perils.

Are these fears going to stifle creativity, innovation, and commercial growth? Humans have proven themselves eager to jump at hyped, if unproven, tech that promises financial gain. Just look at crypto! Will athletes fall into the same trap and pivot too quickly to using AI to redefine their personal brands? Maybe? Probably.

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know

AI Goldmines and Landmines

An NBA Player Surveys the AI Opportunity

Spencer Dinwiddie
NBA Athlete and Entrepreneur

Recently I fell down a rabbit hole thinking about the potential of AI-generated avatars and conversational bots, and how they might help revolutionize fan interaction.

Before this technological era, a person couldn’t be in two places at once or appear to come back from the dead. But this technology — which seemed so novel and impossible to fathom — is now right at our fingertips.


Did Tom Brady Really Say That?

Tony Iliakostas
Adjunct professor of Entertainment Law and Intellectual Property at New York Law School

Athletes, bound by the constraints of time and resources, now have the potential to leverage their likeness and scale their brands in innovative ways to engage fans.

But AI also poses considerable risk. It’s uncharted territory with pitfalls that range from unauthorized deepfakes to AI-generated communication that fans find inauthentic.

Do You Want to




Dan Gardner
Co-Founder & Exec Chair, Code and Theory
Co-Founder, ON_Discourse

When was the last time you felt like you had your mind changed?

When you engaged in a discussion with someone you respect and who has the experience to speak knowledgeably about a topic but had a completely different point of view than your own?

Imagine a space where you would find yourself thinking differently about the most important decisions you make.

As the founder and chairman of a creative and technology-driven consulting business with almost 2,000 employees, I think creating space to be challenged and to be able to challenge others’ perspectives is essential to business success.

So why a members-only media company?

We believe “discourse” has become a bad word in media. In practice, it’s been used as a form of trolling or to express negativity about someone’s opinion, instead of embracing curiosity about a different opinion, the kind of understanding that leads to more informed decisions. A trusted space to publish and engage in conversation with those who value discourse is exactly what business leaders need.

We believe that the world of business and technology today is filled with fake experts, often confusing the narrative and, ultimately, decision-making. At ON_Discourse, we bring together practitioners to share perspectives about the work they do day in and day out. They share an understanding that only comes from being on the frontlines of technology. And these people also possess the humility to admit what they know and don’t know. 

We believe a members-only media company relieves us of the problem of prioritizing the wrong KPIs that often plague modern media companies. We don’t want to be focused on the volume of content, clickbait, or a NASCAR-style approach to ads on a page. This economic model allows us to focus on the value of our mission.

ON_Discourse launch event “A Symphony of Disruption” at Fotografiska in New York City

We will provide content that makes you question and helps inform action. We avoid predictable platitudes, the 100th similar take that’s already been written about the latest hot topic, self-help, or motivational essays. 

Whenever we publish something or host a live discussion, we ask ourselves how the content will give decision-makers, with busy schedules and mission-critical projects, information they can use to go directly into meetings and negotiations feeling informed and a step ahead.

These are our Values

  • Curiosity to go Deeper: We know that very few things are worthy of your time. We publish high-quality, high-impact work that elevates public discourse and provides our readers with unique insights. Volume is not our focus; value and quality are.
  • Diversity of Perspective: We champion intellectual honesty and the courage to acknowledge our own limitations in the pursuit of knowledge. We help our community challenge conventional thinking.
  • Empathy to Opposing Ideas: It’s important that we’re not pandering to our members and are open and honest in our approach to the subjects we cover. We should operate without fear or favor in analyzing and criticizing the issues we address. There can often be a “circle the wagons” mentality in the technology space, and we should avoid being part of it.
  • Disagreement is Encouraged: In order to offer true value, we need to be free of pressure from members or potential partners. We should be respectful and thoughtful in our approach. Challenge ideas, not people. We expect that from our contributors as well.

We will deliver value through exclusive access to our digital content and exciting and engaging in-person events and experiences. We will bring together the best minds from the top levels of business and create opportunities for real discourse.

That’s the mission of ON_Discourse.

We’re excited to begin this journey with you.

If you’re a member, you can explore our articles through our Discover page.
If you haven’t yet applied for membership you can do so here.

Sorry, Everybody Can’t Be a Director

Dan Gardner
Co-Founder & Exec Chair, Code and Theory
Co-Founder, ON_Discourse

The Shift from Knowledge
Work to Direction Work

Toby Daniels
Founder, ON_Discourse; former Chief Innovation Officer, Adweek; Founder and Chair, Social Media Week


It’s 2027, but it looks like today. ANTHONY gets out of bed and sits in front of his COMPUTER –

This is no ordinary computer; there is no interface, no folders, no Zoom calls, and no meetings needed. He’s staring at a single text field called ‘the prompt box.’ He says to himself –

“Get to work”

Anthony starts to submit directional commands. Prompt after prompt. He gets a sense of fulfillment knowing the robot will carry out his every command.

THEN the Camera pulls out of the window of the CONDO BUILDING –


Moving further and further away, we reveal even MORE CONDO BUILDINGS. Inside, we see OTHER PEOPLE sitting in front of their computers with the prompt box.

The Camera flies back into the condo, all the way to a close-up of Anthony’s face. He turns to look into the next room. His wife PAM is sitting at her prompt box doing the exact same thing. It looks like she’s talking to herself.


ANTHONY
Pam! Luckily we changed our majors in college back in 2023. We are now so prepared for this prompt box. Johnny and Michelle must be screwed!

Anthony commands his BARD audio assistant via a voice application.


ANTHONY
Hey Bard, play today’s interesting business stories.

We hear a familiar voice… It sounds like SNOOP DOGG.


New study says…

But we quickly realize that this is not Snoop, but rather a DEEP FAKE –


…millions of jobs were saved by the early predictions of jobs shifting from “knowledge workers” to “direction workers.”

Another deep fake voice chimes in. This time it’s Al Pacino –


Yes, it’s amazing how knowledge is not important anymore. But unfortunately, companies still can’t hire enough direction workers, and it’s causing salaries to increase at a rapid pace.

The rapid ascent of generative AI, automation, and technologies that boost creative output has caused speculation and fear that we’re on the precipice of a massive industry shift away from knowledge workers. The breakneck pace of change has young people wondering how to best prepare themselves for an unpredictable world – Should I study computer science? Is coding obsolete? – and employers grasping for how to hire for the skillsets of the future.

My answer is, Don’t be so dramatic.

The rate of advancement in generative AI is so extreme that we are all trying to understand the implications in real time and guess what the ripple effects might be. It’s given everyone in every single industry collective whiplash.

And it’s resulted in over-the-top projections and calls for overcorrections, like a total shift in how we approach educating the future workforce and hiring for skills necessary for success. But both traditional knowledge workers and “direction workers” – those who direct and instruct the technologies of the future – will always be necessary. Creativity and success aren’t possible without both.

To clarify what I mean by “direction workers,” I’m referring to managers who not only lead teams but also primarily direct or instruct humans or technologies in various, often creative, outputs and tasks. If creative work is nearly wholly produced by generative AI, the story goes, those creative “doer” jobs will disappear in favor of jobs directing the tech.

Video Killed the Radio Star

The Buggles

But before we write off creativity and knowledge workers as superfluous, remember this is not our first industrial revolution — in fact, it’s one of many. So it’s essential to have some perspective on where we’ve been, where we are presently, and where we’re heading as a society. In every instance in the past where we saw dramatic changes in manufacturing, technology, and behaviors, one thing remained constant: passion, drive, creativity, and knowledge are the elements that drive innovation.

There are still thriving musical artists despite the advent of MTV. Music artists and music videos are still thriving despite the advent of YouTube. Artists and record labels survived the advent of MP3s, and so on. Did job types reshuffle because of new processes and new business and distribution models? Of course. But the industry didn’t vanish – it just evolved. Old jobs were gone, and new jobs were created.

Education is what remains after one has forgotten what one has learned in school.

Albert Einstein

It is not a new concept to suggest our educational system is outdated. The emergence of AI may have shined a spotlight on this fact, but it sure isn’t the cause. We have had basically the same classroom teaching style for the last century, despite all that has changed around us.

The traditional education system has long emphasized memorization and rote learning, which is ineffective in a world where information is readily available at our fingertips. Students are often taught to regurgitate facts rather than develop critical thinking skills, problem-solving abilities, and adaptability — all of which are crucial no matter which discipline you choose. 

An updated education system should embrace interdisciplinary approaches, encouraging students to explore the intersections of different fields. This will enable them to connect the dots and develop a holistic understanding of complex issues, fostering innovation and adaptability regardless of where AI takes us. Hypothesizing on where AI is going in order to inform your educational choices today is ridiculous and should not be the point of education. 

A small reminder that if you’re 40-something years old or older, you somehow went through all of high school without a computer as a primary focus, and even in your college days, the computer offerings were likely rudimentary at best. And yet many have gone on to have long-standing careers focused on the web, mobile computing, and social media. How was that possible if they didn’t teach it in the ‘90s?

The greater danger for most of us lies not in setting our aim too high and falling short, but in setting our aim too low and achieving our mark.

Michelangelo
Another issue with the push for AI and automation tech is that so many voices chiming in on this topic are overly fixated on the merits of staff reduction based on the output possible from artificial intelligence. Voices are asking, “Why does my business need ten people when I can have five people, or maybe better, just one person doing the job of the whole department?”

But if you and your competition are so intent on reduction, where does differentiation start? It doesn’t matter whether you majored in art or in engineering — we’re all aware that multiplication builds, while subtraction takes away. 

So if one company is multiplying and the other is subtracting, who may have the better outcome? What are companies adding to the discussion, the organization, and the culture at large, when they’re so focused on reduction? 

We are at a pivotal moment right now. That is undeniable. 

Everybody, across every level of an organization, is going to work with AI in the future, the same way the computer is essential today. Yes, some jobs will go away, while new ones are created. However, the types of careers that will emerge as a result of AI will differ from the ones that are being displaced. 


Nevertheless, occupations that rely on human skills such as problem-solving, creativity, and empathy are less susceptible to being replaced by machines in the immediate future. As AI continues to advance, these roles will also probably experience some impact. The encouraging aspect is that AI will augment, and potentially multiply, the value and output of these occupations, propelling problem-solving, creativity, and empathy to unprecedented levels and generating fresh opportunities like never before.

If you take the implication of direction work to its most extreme conclusion, you can imagine the story above playing out: tens of millions of white-collar workers all sitting in front of their computers, much like today, except that those computers have just the one prompt field. And the direction worker’s job is just to keep telling some artificial intelligence what to do: Now, computer, write this. Now, computer, create this – on and on. That’s a preposterous idea, because there’s simply not enough direction that will be needed for that to happen.

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know






Anthony DeRosa
Head of Content and Product,

Recent warnings from tech leaders that AI could lead to the extinction of humanity have raised eyebrows and reminded me of rhetoric that supports the US military-industrial complex. The apocalyptic scenarios put forth by AI industry titans like Sam Altman may be giving birth to an “AI-industrial complex,” driven by hyperbole rather than a rational evaluation of AI’s potential destructive power.

A Familiar Narrative

The parallels between the AI extinction talk and the fear-mongering that powers the military-industrial complex are striking. The military-industrial complex refers to the close relationship between the defense industry, the military, and the government. It suggests that these entities have a vested interest in perpetuating a state of war or the fear of external threats in order to maintain their power and profitability. By exaggerating or fabricating dangers, those in the military or defense industry can justify increased military spending, weapon development, and the expansion of military influence.

Similarly, fear-mongering about the risks of AI may amplify or embellish the potential danger of the technology. While it is important to acknowledge and address the ethical and safety concerns surrounding AI, over-the-top speculation can lead to exaggerated narratives that overshadow its potential benefits and hinder progress in the field. It can also shape public opinion and policy decisions, potentially resulting in restrictive regulations or unnecessary limitations on AI development.

The Emergence of an “AI-Industrial Complex”

The concept of an AI-industrial complex refers to the amalgamation of influential entities, including corporations, government agencies, and media outlets, that profit from fear and exaggeration surrounding AI’s potential dangers. This complex capitalizes on the public’s fascination with doomsday scenarios and, ironically, fuels the demand for AI-related products, services, and research by depicting these technologies as all-powerful.

The calls for regulation in the face of this supposedly extinction-level threat are hollow. Regulatory oversight often favors incumbents who can leverage their money and power to push for oversight that is more favorable to them, so it makes sense that many giants in the AI space are now calling for legislators to take a closer look. 

It’s imperative to question the motivations behind such rhetoric and consider whether it serves the best interests of society or simply acts as a vehicle for self-interest.

Fostering a Balanced Dialogue

The dangers of AI should not be disregarded, as responsible discussions around ethics, privacy, job displacement, and algorithmic bias are crucial. However, it is equally important to maintain a balanced dialogue that separates legitimate concerns from alarmist speculation. Painting all AI advancements with a broad brush of impending doom stifles innovation and instills unnecessary fear in the public.

To avoid falling into the trap of an AI-industrial complex, we must encourage critical thinking, evidence-based analysis, and multidisciplinary collaborations. Thought leaders, policymakers, and the media should prioritize objective assessments of AI’s risks and benefits.

Drawing parallels between the AI extinction talk and the military-industrial complex should serve as a reminder to exercise caution and skepticism in the face of hyperbolic scenarios. In both cases, there is a potential for vested interests to exploit and manipulate public fear for their own gain. The military-industrial complex thrives on the perpetuation of fear to maintain its influence, while fear-mongering about AI risks can serve the interests of individuals or organizations seeking to control or shape the development of AI technologies and their regulation.

By drawing this parallel, we can recognize the potential for fear-based narratives to shape public opinion and policy decisions in both the military-industrial complex and the AI domain. It highlights the importance of critical thinking, transparency, and ethical considerations in navigating these complex issues.

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know

Michael Nuñez
Editorial director of VentureBeat
Disclosing the use of AI in reporting:
It’s Futile

Across newsrooms, journalists are grappling with an unstoppable force: generative AI tools that are quickly reshaping how stories are reported, written, and edited.

In an editorial memo to our staff, I recently shared my vision for using generative AI as a creative partner, not as a replacement for human judgment. But much is at stake amid this enormous swell of innovation.

Tools like ChatGPT, Bing Chat, and Google Bard represent a massive paradigm shift for our profession. These large language models, trained on humanity’s cumulative knowledge, have the potential to dramatically expand journalists’ storytelling abilities and productivity.

But as we embrace these innovations, we also face a dilemma: How do we balance the benefits of AI with the core values of journalism? How do we ensure that our stories are accurate, fair, and ethical, while also taking advantage of the speed and efficiency of AI? How do we preserve the human element of journalism—the empathy and judgment that make our stories meaningful and trustworthy—while also leveraging the power and potential of AI?

These are some of the questions that we at VentureBeat are wrestling with every single day. As one of the leading authorities in AI news coverage, we are at the forefront of exploring and experimenting with generative AI technologies in journalism. We are also committed to sharing our insights and best practices with our peers and our audience.

At VentureBeat, we’ve created several guardrails for the use of AI in the newsroom. We allow our editors and reporters to use tools like ChatGPT, Claude, and Bing Chat to suggest article topics, story angles, and facts — but journalists are required to verify all information through traditional research. We do not publish articles solely written by AI models.

We’ve found that artificial intelligence can be regularly used in a few mission-critical ways:

  • Story ideation: We often use tools like ChatGPT, Claude, and Bing Chat to brainstorm potential topics and angles for our stories, based on our areas of coverage and current trends. We also ask these tools to generate summaries, outlines, or questions for our stories, which help us structure our research and writing.
  • Headlines: We use tools like ChatGPT, Claude, and Bing Chat to generate attention-grabbing and informative headlines for our stories, based on the main takeaways and keywords. We also ask language models to suggest alternative headlines or variations, which help us optimize our headlines for SEO and social media.
  • Copyediting: We use tools like ChatGPT, Claude, and Bing Chat to proofread and edit first drafts of our articles, checking for grammar, spelling, style, and tone. We ask language models to rewrite sentences, paragraphs, or sections of our articles, to improve clarity, coherence, or creativity. Our human editors always review, edit and fact-check VentureBeat stories before publishing.
  • Research: We use AI to assist us with our research, by providing relevant facts, sources, quotes, or data for our stories. We also ask these tools to summarize or analyze information from various sources, such as web pages, reports, or social media.

By now, you might be wondering how we handle disclosures.

At VentureBeat, our policy is to disclose the use of AI only when it is relevant to the story or the reader’s understanding. Otherwise, we treat AI as any other tool that we use in our daily work, such as Microsoft Word, Google Search, or the grammar and word-choice suggestions made by Grammarly. We do not believe that singling out one tool over another adds any value for our readers. What matters are the core values of accuracy, objectivity, and ethical use of information that guide our craft — values that we uphold rigorously regardless of the technologies involved.

There is no one-size-fits-all solution for how to use AI in journalism. Each news organization has its own mission, vision, and standards that guide its editorial decisions; each journalist has their own style, voice, and perspective that inform their storytelling; and each story has its own context, purpose, and audience that determine its format and tone.

But there are some principles that have been foundational to great journalism for centuries, and that will continue to be relevant in the age of AI. These principles can help us strike a balance between innovation and tradition and between automation and humanization.

Newsrooms Should Build
Their Own Generative AI
Matthieu Mingasson
Head of Design Transformation at Code and Theory

One principle is always reporting the truth. Our journalistic standards and values must be enhanced, not compromised, by AI.

That means that our stories must be factual, fair, and balanced. Any data used in a story – whether it’s surfaced by Google Search, GPT-4, or any other available software – must be verifiable.

The use of large language models does raise some thorny questions about the data that powers the AI tools we’re using. As a journalist, I would, of course, like to know how the models are trained, what they can and cannot do, and what risks they pose.

We believe greater transparency into training datasets will be important as we forge ahead and confront this massive shift in the industry. Transparency in the datasets used to power LLMs will help us to understand how the models work, what assumptions they make, and what limitations they have. It will also help us monitor and audit the performance of models, to identify and correct any errors or biases, and ensure accountability and trust. We believe transparency in LLM research will be one of the defining issues of the year — and have covered it at length.

Another defining principle of how we use AI in the newsroom is accuracy.

We need to get the facts right. Without rigorous fact-checking of every detail, misinformation spreads. That means being careful and ethical about how and why we use AI in our reporting. Again, we do not rely on AI blindly or uncritically, but rather use it as a tool to augment our human skills and judgment.

For example, we recently wrote a story about how Amazon inadvertently revealed its plans to create an AI-powered chatbot for its online store. When we used ChatGPT to generate headlines for our story, it suggested inaccurate options that confused ChatGPT with a “hallucinated” (i.e. fabricated) conversational model that Amazon is trying to build in-house. It was just one of many examples we see on a daily basis that illustrates the need for human oversight and fact-checking when using such tools. Our final headline was written by a human: Amazon job listings hint at ChatGPT-like conversational AI for its online store.

There are many other instances where reliance on AI should be limited or avoided, such as in the interpretation of complex, nuanced information. While AI-generated summaries can help journalists quickly assess vast amounts of data, the technology may inadvertently omit critical details or misrepresent the original intent of the source material. In such cases, an experienced editor or reporter is necessary. We insist that all of our reporters review the source material for any story they write. We view this as part of the rigorous fact-checking required in modern-day journalism.

Our goal will never be to supplant or diminish our reporters or editors using AI — they’re the ones who make the stories happen. They are the heart and soul of our work. They go deep, challenge the status quo, and expose the truth. They give our readers the facts they need in order to make smarter decisions. Our journalists’ insight, perspective, and analysis is what makes them indispensable. We need them to use their judgment and bring that humanity to each of their stories in order to make us a leading source of news.

Finally, we are committed to accountability.

While AI can produce content at a massive scale, only humans can ensure quality. Articles produced by artificial intelligence are a draft, not finished work. Consumers rightly expect the media to get the details correct. We believe we will drive traffic and business with trust, not shortcuts. 

We treat text generated by AI as we have always treated news copy produced by humans. It is subject to rigorous editing and fact-checking before publication. We make corrections promptly and transparently when errors occur. If a story warrants disclosure of our methods to provide proper context, we will do so to maintain transparency. But the default is that we do not disclose the use of AI any more than we disclose the brand of word processing software upon which a story is written. What matters is the end result — a fair, unbiased, and truthful report. While others get lost in “AI-assisted” details, we continue to focus on breaking news.

While generative AI is an unstoppable force in many industries, how it transforms journalism over the next decade is up to those of us at the forefront — not legacy media, which will surely be slow to adopt. Those who embrace artificial intelligence wisely by supplementing reporters, not replacing them, will unlock new possibilities. Those who fall prey to AI’s hype risk dehumanizing and degrading their newsrooms.

The future of journalism lies in human empathy. Generative AI promises a productivity revolution, but journalists must steer that transformation in service of our profession’s higher purpose: delivering truthful, impactful stories that serve the public interest.

Our role has never been more crucial amid a flood of misinformation. Now we have an opportunity to shape AI to enhance, not erode, what makes journalism essential through this era of disruption. Doing so will require constant vigilance, ethical innovation, and a commitment to the values that undergird this work. The balance we strike will reverberate for decades. History watches our next move.

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know