As artificial intelligence rapidly evolves, fact-checkers are deploying increasingly sophisticated tools to combat the spread of AI-generated misinformation, a growing concern that threatens to deepen social divisions and undermine public trust in information.
At the Poynter Institute in St. Petersburg, Florida, journalists at PolitiFact are on the frontlines of this battle. Editor-in-chief Katie Sanders and staff writer Loreben Tuquero specialize in identifying and debunking false information circulating across social media platforms.
“I cover artificial intelligence and misinformation and how those two interact and intersect,” explained Tuquero during an interview with Tampa Bay 28 reporter Michael Paluska as part of News Literacy Week. “So I start with just looking at what’s going viral on social media.”
Recent incidents highlight the scale of the problem. When events in Venezuela and ICE raids in Minnesota captured public attention, AI-generated content quickly flooded social media feeds, confusing users and spreading false narratives.
In one notable case, social media users asked AI chatbots to “unmask” a federal agent involved in the Minnesota incident. “But an AI bot is not capable of that, and any output it’ll produce is fictional,” Tuquero noted. The AI generated a completely fabricated face, which users then wrongly associated with a real person who had no connection to the events.
The reach of such misinformation is staggering. One post about the federal agent garnered 73,000 shares and 164,000 likes, demonstrating the viral nature of emotionally charged AI-generated content.
The verification process employed by fact-checkers involves multiple layers of digital forensics. “One of the first steps we do is run a frame of the video, or multiple frames of the video, and run them through Google reverse image search,” Tuquero said. Fact-checkers also examine metadata using Chrome extensions and other specialized tools to trace content origins.
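The reverse image search step Tuquero describes rests on near-duplicate matching: a re-uploaded or recompressed frame should still match its source even if pixel values have shifted slightly. A common simplified technique for this is a perceptual “average hash.” The sketch below is illustrative only, not a tool PolitiFact uses; a “frame” here is a toy grid of grayscale values rather than a real video frame.

```python
# Illustrative sketch of near-duplicate frame matching via an average hash.
# A "frame" is a small grid of grayscale pixel values (0-255); real reverse
# image search systems work similarly but at far larger scale.

def average_hash(frame):
    """One bit per pixel: 1 if the pixel is brighter than the frame's mean."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests a near-duplicate."""
    return sum(a != b for a, b in zip(h1, h2))

original = [
    [200, 210, 60, 50],
    [190, 205, 55, 45],
    [70, 65, 220, 215],
    [60, 55, 230, 225],
]
# A uniformly brightened copy, as re-uploading or re-encoding might produce.
recompressed = [[min(255, p + 10) for p in row] for row in original]
# A structurally different frame.
unrelated = [
    [30, 240, 35, 245],
    [250, 25, 255, 20],
    [40, 235, 45, 230],
    [245, 50, 240, 55],
]

h_orig = average_hash(original)
print(hamming_distance(h_orig, average_hash(recompressed)))  # 0: a match
print(hamming_distance(h_orig, average_hash(unrelated)))     # large: no match
```

Because the hash compares each pixel to the frame's own mean, uniform brightness changes leave it untouched, which is why the recompressed copy still matches while the unrelated frame does not.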
Industry experts acknowledge that the technology is advancing faster than many anticipated. John Licato, an associate professor at the USF Bellini College of AI, Cybersecurity, and Computing, and CEO of Actualization AI, previously warned about this rapid evolution in an interview with Paluska.
“The capability of generating realistic images and videos is dramatically better than it was even six months ago,” Licato said, highlighting how even experts in the field have been caught off guard by the pace of development.
Sanders described much of the AI-generated content as “rage bait” specifically designed to inflame existing tensions in an already divided society. “What concerns me, what keeps me up at night, is that AI is getting better,” she said. “We’ve been an intensely polarized, partisan, divided society for many years now. AI is not making that better. AI is making it really easy to inflame tension that already exists.”
The problem extends beyond obviously falsified political content. Even seemingly innocuous AI-generated videos, such as animals performing unrealistic stunts, contribute to the normalization of artificial media, making it harder for users to distinguish between authentic and fabricated content.
“People share those animal videos a lot, and they don’t realize it’s AI,” Tuquero observed. “It’s important to maintain a healthy skepticism of things that evoke an emotional response in you, and be wary of things that might grab your attention really quickly.”
Media literacy experts recommend that consumers approach emotionally provocative content with caution and verify information before sharing. Sanders emphasized the importance of sharing verified information from reputable sources rather than content designed primarily to provoke reactions.
“Let’s share vetted information. Let’s share vetted anecdotes and vetted stories from reputable sources, and not AI creators whose entire page is just specializing in rage bait,” Sanders urged.
Despite the sophisticated technology driving the creation of misinformation, Tuquero noted that basic fact-checking principles often remain effective. “In a lot of these cases, it doesn’t really take much to debunk it,” she said, emphasizing that vigilance and critical thinking remain powerful tools in the fight against misinformation.
As AI technology continues to advance, the partnership between journalists, technology experts, and informed citizens will become increasingly vital in preserving the integrity of public information and discourse in democratic societies.