Google is working on AI tools to help journalists write news stories
Google LLC is reportedly testing an artificial intelligence tool designed to write news stories and headlines and to assist journalists in their day-to-day work.
The service, dubbed Genesis, purportedly uses generative AI technology to gather news – such as details about current events – and then generate news content, according to a report from the New York Times.
Google is currently pitching the product to news outlets, including executives at The New York Times, The Washington Post and The Wall Street Journal’s owner, News Corp., said three people familiar with the matter.
Some of the executives described in the New York Times report, who were not identified, said the pitch felt “unsettling,” and two said it seemed to “take for granted” the journalistic effort needed to craft stories that convey accurate news to audiences with a personal touch.
In a statement published on Twitter, Google said that the intention of the new AI tool was not to replace the work of journalists but to act as an assistant similar to tools that the company has been building into its other products. “For instance, AI-enabled tools could assist journalists with options for headlines or different writing styles,” the company said. “Our goal is to give journalists the choice of using these emerging technologies in a way that enhances their work and productivity.”
Generative AI, the technology behind popular chatbots such as OpenAI LP’s ChatGPT and Google’s Bard, can understand natural language and respond conversationally by drawing on large amounts of textual data. It has become extremely popular of late for its ability to write letters, long-form essays and even fiction. The chatbots are also capable of pulling research from the web, including up-to-date news about current events, and using it to produce an article that assists a journalist in writing a story.
However, generative AI chatbots are not without their flaws. They will confidently produce factual errors in a form of confabulation known as “hallucination.” Even Google’s Bard fell prey to this during its debut, when it was asked a question about the James Webb Space Telescope and got the answer wrong.
Gizmodo also learned the hard way recently that using an AI bot to write articles can backfire when what should have been an easy story called “A Chronological List of Star Wars Movies & TV Shows” turned out to be riddled with factual errors and writing flaws.
It’s clear that any AI news writing would require deep participation from a human partner, who would have to fact-check the AI’s end product. These issues could only become more problematic with current or emerging events, where the AI is working from web sources. The likelihood that an AI will produce errors or hallucinations increases when it can’t find the information it’s looking for.
“Quite simply these tools are not intended to, and cannot, replace the essential role journalists have in reporting, creating, and fact-checking their articles,” Google said.
This news comes shortly after the Associated Press announced a partnership with OpenAI LP that will also explore possible applications of generative AI in the news industry. The objective of AP’s partnership is to give OpenAI access to its trove of historical news stories to train its AI technology, refining its systems to produce better answers and more natural conversation.