June 21, 2023

Artificial intelligence makes it possible to create digital replicas of famous people’s voices and style, allowing audiences to hear beloved voices from the past and interact with them in new ways. However, it also raises ethical questions about consent and the use of someone’s voice without their permission.

It’s not unusual for technology trends to outpace the legal frameworks meant to regulate them. So it should be no surprise that, along with new functionality and increased productivity, the rise of AI will raise legal issues around copyright, fair use and more.

These issues are already surfacing with synthetic media in the art and image-creation space, with tools like Midjourney and DALL-E. Creating AI art inspired directly by Monet, Rembrandt and other artists whose work is in the public domain isn’t up for legal debate, but the calculus changes entirely when a work of art takes cues from living, working artists.

While most of the current controversy around large language model chatbots like ChatGPT focuses on factual accuracy, the issues playing out in image creation will find their way into generative text, too. It’s amazing that we can chat with a character based on all the writings of Voltaire or Mark Twain, but this becomes ethically dubious when we use generative AI to approximate more contemporary figures.

Walter Cronkite, for instance, was once considered America’s most trusted voice. In an era in which trust is low, would CBS have a legal right to recreate Cronkite’s voice and speech pattern based on content produced under their brand? There’s a clear benefit to preserving cultural heritage and learning about history through the voices of its long-gone figures, but replicating the voice of a deceased person raises questions about commercial exploitation and appropriation. Like it or not, this is an issue the industry will have to grapple with.

In October 2022, just before ChatGPT launched, I wrote for Poynter that edits could be made to first drafts of articles by suggesting style changes, à la “Edit this piece to have a politically neutral tone, in the style of Hunter S. Thompson, and take the anecdote from the sixth paragraph and use it as a kicker.” Replace “Hunter S. Thompson” with a brand name like “The New York Times” and suddenly a “living” corporation would defend its “trademark,” to the extent that the company could argue in court that somebody used “a New York Times tone.” When it comes to individuals like Barbara Walters, who will stand up to ensure their style isn’t taken without fair recompense?

According to a report by the International News Media Association called “News Media and the Dawn of Generative AI,” synthetic media is not currently subject to copyrights. An AI prompt isn’t subject to copyright any more than a clever Google search query. However, the doctrine of fair use is going to be put to the test. “Still, there is no doubt Fair Use as a doctrine is being given, at the very least, a good run for its money. Good IP stewardship for your organization points to keeping up with legal news and best practices to both protect your work and yourself in the use of AI-generated work,” the report says.

Some work from writers like Ida B. Wells and Mark Twain may be in the public domain, but what if CNN wants to leverage Anderson Cooper long after he’s retired? Could the network option his intellectual property like Disney can with its cartoon characters?

And those are just the legal concerns. Let’s not forget there’s an audience to consider. What would their understanding of such content be? Would their ability to digest partisan news increase if such news were delivered by a “trusted voice”? Or would the artifice of it all shine through and destroy the messenger?

None of this is fantasy. It’s happening already. Actor Edward Herrmann, who died in 2014, is still the voice of several recent audiobooks. While this presents an opportunity for fans to hear a familiar voice, and for companies to leverage talent for longer, it also raises ethical questions about using a deceased person’s voice without their permission.

News brands are already exploring safer legal territory in which they are essentially personified via chat. Two examples are Bloomberg’s “BloombergGPT” and Skift’s “AskSkift.” These bots can answer questions about finance or travel — trained on data from their respective organizations — in the voices of those brands. It’s conceivable that a news brand could even license out these “brand voices” to other publishers.

Ultimately, artificial intelligence and unique styles, whether personal or organizational, are going to intersect in unforeseen ways. Just as movies will be remade with everyone cast as, say, Arnold Schwarzenegger, news and information will be shared in the voice and tone that the publisher thinks will be the most effective at increasing reach and conveying meaning.

While we may not yet know how fair use and copyright laws will apply to specific instances in which AI-generated content imitates particular writers or styles, there is no doubt that these emerging technologies will keep sparking debates over intellectual property rights across the media landscape.

David Cohn
