Google’s Genesis AI Tool Could Write the News. It Should Be Stopped

As Google CEO Sundar Pichai opened the tech giant’s annual I/O developer conference in May, the phrase “Making AI helpful for everyone” was emblazoned on the crisp white screen behind him. Pichai noted this idea was the most profound way to advance the company’s mission of organizing the world’s information.

The world’s information, for the most part, flows through the company’s practically unstoppable search engine and its myriad apps and tools. As Pichai noted at I/O, six of Google’s core products each serve more than 2 billion users. A good portion of all that information, the news, is written and delivered by tens of thousands of journalists, writers and content creators, from globe-spanning publications to small startups, from freelancers pinch-hitting across magazines to news cadets honing their craft at local papers.

Google believes it can help them by introducing AI to the mix. But AI isn’t what journalists need. And the search giant’s foray into newsrooms should concern readers, too.

Google is developing a tool, code-named Genesis, that “can take in information — details of current events, for example — and generate news content,” according to anonymous sources cited by The New York Times. The company has approached organizations including the Times, The Washington Post and News Corp, which owns The Wall Street Journal. It’s unclear whether Google is pitching Genesis as a news-gathering aid or looking to collaborate on its development.

A Google spokesperson told CNET on July 20 that “these tools are not intended to, and cannot, replace the essential role journalists have in reporting, creating and fact-checking their articles,” but the Times’ very description of the tool suggests otherwise. Publishing executives who’ve seen Google’s pitch for Genesis described it as “unsettling,” the Times reported. The tool could reportedly automate some tasks for journalists, providing “options for headlines or different writing styles,” according to the spokesperson.

Just over seven months ago, I wrote that ChatGPT wouldn’t be coming for journalism jobs (at least in the near future) because it simply can’t do what a journalist does. OpenAI’s flagship AI app is a word organizer, not a truth collector or an imaginative storyteller. It can’t go out and report from a crime scene or interview a doctor, a schoolteacher or anyone else. It also isn’t trained on up-to-the-minute data. Though I was specifically discussing ChatGPT’s abilities as a journalist, the argument could broadly be applied to large language models and generative AI at the end of 2022. Their deficiencies were too numerous and their hallucinations too common to present a real threat, I thought.

That was then. Now I’m not so sure.

Not because I think ChatGPT, large language models or generative AI have gained those capabilities and can adequately do a journalist’s job; they can’t. But this doesn’t seem to matter. The tech giants have gone ahead and manufactured the tools anyway. They may not be designed to replace journalists, but what little we know of their capabilities suggests they potentially could do just that.

Watching Trinity

Maybe it’s because I’m exhausted by the jamming of AI-sized pegs into human-sized holes, or just because I’m fresh from watching Oppenheimer (it’s definitely the latter), but the lasting consequences of developing a tool like Genesis feel bigger than anything we’ve seen from generative AI so far, with resounding repercussions for the people who will be most affected by the misuse or abuse of that tool: you, the reader.

The Oppenheimer analogy is hauntingly apt here. When scientists learned how to split the atom, it was immediately apparent to them that the reaction could help build a devastating atomic bomb. The first test of such a weapon, Trinity, was conducted in the New Mexico desert on July 16, 1945. The bomb detonated; it worked. Not even a month later, two atomic bombs had been dropped on the Japanese cities of Hiroshima and Nagasaki. I don’t wish to diminish the horrors of the A-bomb or equate the power of generative AI with those tragedies. I only want to highlight how quickly we can move from theory to practice, with little understanding of the long-term consequences. It’s alarming.

Genesis, as it’s currently understood, can’t generate news. It takes one element of the journalistic endeavor, writing, and makes it appear as if it’s the whole damn show. It isn’t. Even suggesting as much does a disservice to journalists of all stripes, and it should concern readers who understand that important stories are more than words placed in sequence. Journalism is sourcing, verifying, fact-checking, spending hours on phones, years in documents.
