July 19, 2023

Generative AI has exploded onto the scene, raising questions about its role in the creation and spread of misinformation.

Generative AI is a broad term that describes when computers create new content — such as text, photos, videos, music, code, audio and art — by identifying patterns in existing data. The most popular platforms include ChatGPT, Dall-E and Midjourney.

We’ve heard and read about the resulting fake college essays, misleading photos of prominent people and even supposed proof of events that never happened. In May, users circulated a photo they said showed an explosion near the Pentagon, claiming the United States was under attack. Though the story was a hoax that relied on an AI-generated photo, the fallout was real. The photo was shared by social media accounts with large followings, and the stock market briefly dipped.

There are other ways generative AI can cause harm when misused, some that have made headlines — such as creating political ads — and others that have received less attention, such as self-diagnosing medical issues.

In any form, its misuse can erode trust in institutions and civic processes like voting, experts said.

“I think that uncalibrated trust is an issue,” said Chenhao Tan, University of Chicago assistant professor of computer science. “People can over-rely on AI without proper understanding of what they can or can’t do with AI, or without knowing how to check their results, or giving up their human agency.”

Here are three scenarios to know about when AI can mislead or cause harm.

Using AI to self-diagnose medical issues

Using online tools to assess symptoms isn’t a new phenomenon, and the practice became more widespread during the COVID-19 pandemic, when people were encouraged to self-assess and self-triage.

But with tools like ChatGPT, which is a chatbot, people can ask more targeted questions, said Jason Fries, a Stanford University research scientist. One downside is they might not know how to interpret the results they receive.

“A lot of people are not super AI-savvy yet, and they haven’t really been primed to think skeptically about what a model is spitting out,” said Fries. “So if you sit down in front of ChatGPT and ask questions, there are people that are really surprised that it can just invent information.” Fries called it a “real danger mode” to not understand that AI can generate untruthful information.

Medical care also involves more than just communication of information or diagnostic labels, said Maha Farhat, Harvard Medical School professor of biomedical informatics. The information also needs to be contextualized, she said.

In some cases, though, the interactive nature of AI may help patients develop a better understanding of a diagnosis, Farhat said.

A research assistant and two professors from Harvard tested ChatGPT using 45 clinical vignettes that ranged in severity and that they had previously tested with online symptom checkers. ChatGPT “listed the correct diagnosis within the top three options in 39 of the 45 vignettes,” or 87% of the time, compared with 51% of the time for symptom checkers.

But the researchers noted several caveats to the results: The vignettes were the kind typically used to test medical students, “which may not reflect how the average person might describe their symptoms in the real world,” and “ChatGPT’s results are sensitive to how information is presented and what questions are being asked.” They concluded that “more rigorous testing is needed.”

Farhat agreed, saying ChatGPT’s reliability has not been well measured against the “gold standard” of speaking with a medical professional. Patients can look for information and learn about their symptoms through tools such as ChatGPT, she said, but they should not self-medicate or rely on a diagnosis without medical consultation.

“No medical decisions should be made based on (generative AI) in its current state without additional evidence,” she said.

Farhat said using generative AI for self-diagnosis could lead to misdiagnosis and delays in seeking care, which could worsen health problems.

2024 elections: AI fakes can make candidates look ‘nefarious’

AI is already changing the campaign landscape in the leadup to the 2024 elections. Politicians such as GOP presidential candidate and Florida Gov. Ron DeSantis, as well as the Republican National Committee, have released AI-generated images and videos.

In June, DeSantis’ campaign released a video that included three images of former President Donald Trump embracing Dr. Anthony Fauci; the images appeared to be genuine but had been generated by artificial intelligence.

Deepfakes — machine-generated images or videos that change faces, bodies or voices, making people appear to do and say things that they never did or said — already circulate regularly on social media, but the recent rollout of more advanced generative AI tools means it’s easier for people to create them.

Darrell West, a senior fellow at the Center for Technology Innovation at the Brookings Institution, a Washington, D.C., think tank, said he expects to see more AI-generated videos and audio that make candidates look “nefarious.”

The danger, he said, is that “voters may take such claims at face value and make their voting decision based on that information.”

West added, “In a close race, anything that moves a few thousand votes could be decisive.”

How should voters prepare? Rely upon multiple media sources, West advises. And “if something sounds beyond the pale, voters should examine the source and see if it is a credible source of information.”

Eroding ‘trust in the information environment’

Mekela Panditharatne, counsel for the democracy program at the Brennan Center for Justice at New York University School of Law, said generative AI could increase the scale of content aimed at preventing or deterring people from voting. AI makes it possible to automate the creation of convincing, false information about how, when and where to vote, she said.

In 2016, for example, Russia conducted an influence campaign to interfere with the U.S. presidential election. Today, using AI, a similar effort could be executed with fewer resources.

Panditharatne said generative AI platforms draw from existing online data, which contain false claims about the integrity of the 2020 election, mail-in voting, drop boxes and widespread voter fraud. She said there are concerns that generative AI tools could be exploited to amplify mis- and disinformation online that “seeks to undercut faith in election processes.”

“We’re still seeing exactly how AI may impact the elections space, but the presence and the potential for the proliferation of AI-generated content could possibly decrease trust in the information environment overall and make it harder for voters to distinguish between what’s true and false,” she said.

Russia’s propaganda model relies on a “fog of confusion” that makes it hard to tell truth from falsehoods, an article co-written by Panditharatne stated in June. Such a campaign could make voters lose trust in accurate and authoritative sources of election information, the article said.

Experts say correcting misinformation isn’t easy once people have seen it.

“People (don’t) work like a computer. It’s not like, flip a switch, and showing you that the situation is false, then that information is actually false,” said Tan, of the University of Chicago. “The exposure to the initial misinformation is hard to overcome once it happens.”

