Across newsrooms, journalists are grappling with an unstoppable force — generative AI tools that are quickly reshaping how stories are reported, written, and edited.
In an editorial memo to our staff, I recently shared my vision for using generative AI as a creative partner, not as a replacement for human judgment. But much is at stake amid this enormous swell of innovation.
Tools like ChatGPT, Bing Chat, and Google Bard represent a massive paradigm shift for our profession. These large language models, trained on vast swaths of human-generated text, have the potential to dramatically expand journalists’ storytelling abilities and productivity.
But as we embrace these innovations, we also face a dilemma: How do we balance the benefits of AI with the core values of journalism? How do we ensure that our stories are accurate, fair, and ethical, while also taking advantage of the speed and efficiency of AI? How do we preserve the human element of journalism—the empathy and judgment that make our stories meaningful and trustworthy—while also leveraging the power and potential of AI?
These are some of the questions that we at VentureBeat are wrestling with every single day. As one of the leading authorities in AI news coverage, we are at the forefront of exploring and experimenting with generative AI technologies in journalism. We are also committed to sharing our insights and best practices with our peers and our audience.
At VentureBeat, we’ve created several guardrails for the use of AI in the newsroom. We allow our editors and reporters to use tools like ChatGPT, Claude, and Bing Chat to suggest article topics, story angles, and facts — but journalists are required to verify all information through traditional research. We do not publish articles solely written by AI models.
We’ve found that artificial intelligence can be regularly used in a few mission-critical ways:
- Story ideation: We often use tools like ChatGPT, Claude, and Bing Chat to brainstorm potential topics and angles for our stories, based on our areas of coverage and current trends. We also ask these tools to generate summaries, outlines, or questions for our stories, which help us structure our research and writing.
- Headlines: We use these same tools to generate attention-grabbing and informative headlines for our stories, based on the main takeaways and keywords. We also ask the models to suggest alternative headlines or variations, which helps us optimize our headlines for SEO and social media.
- Copyediting: We use these tools to proofread and edit first drafts of our articles, checking for grammar, spelling, style, and tone. We ask language models to rewrite sentences, paragraphs, or sections of our articles to improve clarity, coherence, or creativity. Our human editors always review, edit, and fact-check VentureBeat stories before publishing.
- Research: We use AI to assist us with our research, by providing relevant facts, sources, quotes, or data for our stories. We also ask these tools to summarize or analyze information from various sources, such as web pages, reports, or social media.
By now, you might be wondering how we handle disclosures.
At VentureBeat, our policy is to disclose the use of AI only when it is relevant to the story or the reader’s understanding. Otherwise, we treat AI as any other tool that we use in our daily work, such as Microsoft Word, Google Search, or the grammar and word-choice suggestions made by Grammarly. We do not believe that singling out one tool over another adds any value for our readers. What matters are the core values of accuracy, objectivity, and ethical use of information that guide our craft — values that we uphold rigorously regardless of the technologies involved.
There is no one-size-fits-all solution for how to use AI in journalism. Each news organization has its own mission, vision, and standards that will guide its editorial decisions; each journalist has their own style, voice, and perspective that inform their storytelling; and each story has its own context, purpose, and audience that determine its format and tone.
But there are some principles that have been foundational to great journalism for centuries, and that will continue to be relevant in the age of AI. These principles can help us strike a balance between innovation and tradition and between automation and humanization.
One principle is always reporting the truth. Our journalistic standards and values must be enhanced, not compromised by AI.
That means that our stories must be factual, fair, and balanced. Any data used in a story — whether it’s surfaced by Google Search, GPT-4, or any other available software — must be verifiable.
The use of large language models does raise some thorny questions about the data that powers the AI tools we’re using. As a journalist, I would, of course, like to know how the models are trained, what they can and cannot do, and what risks they pose.
We believe greater transparency into training datasets will be important as we forge ahead and confront this massive shift in the industry. Transparency in the datasets used to power LLMs will help us understand how the models work, what assumptions they make, and what limitations they have. It will also help us monitor and audit model performance, identify and correct errors or biases, and ensure accountability and trust. We believe transparency in LLM research will be one of the defining issues of the year — and have covered it at length.
Another defining principle of how we use AI in the newsroom is accuracy.
We need to get the facts right. Unless we rigorously fact-check every detail, misinformation spreads. That means being careful and ethical about how and why we use AI in our reporting. Again, we do not rely on AI blindly or uncritically, but rather use it as a tool to augment our human skills and judgment.
For example, we recently wrote a story about how Amazon inadvertently revealed its plans to create an AI-powered chatbot for its online store. When we used ChatGPT to generate headlines for our story, it suggested inaccurate options that confused ChatGPT with a “hallucinated” (i.e., fabricated) conversational model that Amazon is trying to build in-house. It was just one of many examples we see on a daily basis that illustrate the need for human oversight and fact-checking when using such tools. Our final headline was written by a human: Amazon job listings hint at ChatGPT-like conversational AI for its online store.
There are many other instances where reliance on AI should be limited or avoided, such as in the interpretation of complex, nuanced information. While AI-generated summaries can help journalists quickly assess vast amounts of data, the technology may inadvertently omit critical details or misrepresent the original intent of the source material. In such cases, an experienced editor or reporter is necessary. We insist that all of our reporters review the source material for any story they write. We view this as part of the rigorous fact-checking required in modern-day journalism.
Our goal will never be to use AI to supplant or diminish our reporters or editors — they’re the ones who make the stories happen. They are the heart and soul of our work. They go deep, challenge the status quo, and expose the truth. They give our readers the facts they need in order to make smarter decisions. Our journalists’ insight, perspective, and analysis are what make them indispensable. We need them to use their judgment and bring that humanity to each of their stories in order to make us a leading source of news.
Finally, we are committed to accountability.
While AI can produce content at a massive scale, only humans can ensure quality. An article produced by artificial intelligence is a draft, not a finished work. Consumers rightly expect the media to get the details correct. We believe we will drive traffic and business with trust, not shortcuts.
We treat text generated by AI as we have always treated news copy produced by humans. It is subject to rigorous editing and fact-checking before publication. We make corrections promptly and transparently when errors occur. If a story warrants disclosure of our methods to provide proper context, we will do so to maintain transparency. But the default is that we do not disclose the use of AI any more than we disclose the brand of word processing software in which a story is written. What matters is the end result — a fair, unbiased, and truthful report. While others get lost in “AI-assisted” details, we continue to focus on breaking news.
While generative AI is an unstoppable force in many industries, how it transforms journalism over the next decade is up to those of us at the forefront — not legacy media, which will surely be slow to adopt. Those who embrace artificial intelligence wisely by supplementing reporters, not replacing them, will unlock new possibilities. Those who fall prey to AI’s hype risk dehumanizing and degrading their newsrooms.
The future of journalism lies in human empathy. Generative AI promises a productivity revolution, but journalists must steer that transformation in service of our profession’s higher purpose: delivering truthful, impactful stories that serve the public interest.
Our role has never been more crucial amid a flood of misinformation. Now we have an opportunity to shape AI to enhance, not erode, what makes journalism essential through this era of disruption. Doing so will require constant vigilance, ethical innovation, and a commitment to the values that undergird this work. The balance we strike will reverberate for decades. History watches our next move.
Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know.