Artificial intelligence tools are increasingly being misused to spread misinformation during breaking news events by “enhancing” images with fabricated details, according to open-source intelligence experts.
Giancarlo Fiorella, director of research and training for investigative group Bellingcat, told CTV News Channel on Tuesday that AI image upscaling tools played a significant role in spreading false information following recent shootings in Minneapolis, including the fatal shooting of Renee Good by U.S. Immigration and Customs Enforcement (ICE) agents.
“What we saw was a large number of images online that had been so-called ‘upscaled’ with AI tools by ordinary people, who were wanting to find out exactly what happened in these cases,” Fiorella explained. However, rather than clarifying events, these manipulations often introduce entirely fabricated elements.
The problem, Fiorella noted, is that AI upscaling tools can "hallucinate" information, creating visual details that never existed in the original images. These fabricated details can then be mistaken for factual evidence, particularly during emotionally charged breaking news situations.
One prominent example involved attempts to reveal the face of an ICE officer allegedly involved in Good’s shooting by removing his face mask using AI tools. “The platform has no way of knowing what that individual actually looks like,” Fiorella said. “It fills in the missing data with what it thinks his face could look like based on other images of people that these platforms have been trained on.”
The result was a convincing but entirely fictional face that led to real-world consequences. According to NPR reports, this altered image contributed to the false identification of the alleged shooter as Steve Grove, a publisher at the Minnesota Star Tribune, who subsequently became the target of online harassment.
Court documents later identified the actual ICE officer involved as Jonathan Ross, demonstrating how AI hallucinations can lead to mistaken identity and potential harm to innocent individuals.
The Star Tribune publicly denounced what it described as a coordinated online disinformation campaign, emphasizing that the ICE agent had no connection to the newspaper. The publication urged the public to rely on reporting from trained journalists rather than AI-generated content.
A similar pattern emerged after another Minneapolis incident, the fatal shooting of Alex Pretti. After the U.S. Department of Homeland Security released a photo of a confiscated weapon, online users attempted to match it to blurry video footage using AI enhancement tools. Instead of revealing new details, these tools generated sharp, detailed images of a weapon that bore little resemblance to the original.
“This is something that we’re seeing more and more of because of the availability of these AI upscaling tools,” Fiorella warned.
The proliferation of such misleading content raises questions about responsibility and content moderation. Fiorella noted that much of the burden falls on social media platforms, which vary widely in their approaches to identifying and labeling AI-generated content.
“It’s mostly up to the platforms themselves to decide whether or not they want to tag this kind of content, how they want to tag it, and how strict they want to be with the rules for tagging,” he said.
Some platforms have implemented labeling systems for AI-generated content, while others rely on community-driven moderation programs like Twitter/X’s Community Notes. However, enforcement remains inconsistent across different platforms, creating an environment where misleading AI-enhanced images can spread rapidly during breaking news events.
The incidents in Minneapolis highlight a growing challenge in the media landscape, where widely available AI tools enable almost anyone to alter images in ways that appear authentic but may contain completely fabricated elements. This technological capability, combined with the emotional intensity surrounding violent incidents, creates fertile ground for misinformation that can have serious consequences for individuals wrongly identified through AI hallucinations.
As AI image enhancement tools become more accessible, the line between genuine photographic evidence and AI-generated speculation continues to blur, complicating efforts to establish facts during breaking news events and potentially endangering innocent individuals caught in the crossfire of online speculation.