In the aftermath of the tragic UPS cargo plane crash in Louisville that claimed 14 lives, a disturbing trend emerged alongside the grief and shock: a flood of AI-generated disinformation exploiting the disaster for clicks and engagement.
As authorities worked to gather facts about the crash, bad actors quickly filled the information vacuum with fabricated content. Artificial intelligence tools generated fake images showing burning aircraft with UPS livery, false claims about celebrity deaths, and entirely fictional narratives about the crash’s cause—all before investigators had released any official findings.
Trevor Smith, a pilot who runs the YouTube channel “Pilot Debrief” under the nickname “Hoover,” noticed this troubling pattern while monitoring updates on the crash. His attention was drawn to an elaborate post circulating online that presented a detailed but completely fabricated analysis of the accident.
“There was a very long, detailed theory about, you know, essentially, fuel lines being cut, and that’s what leads to this and that and it provided an extreme amount of technical detail,” Smith recounted. “But as soon as I started reading it, I’m like, you know this smells like AI.”
This discovery led Smith to identify numerous other AI-generated falsehoods spreading across social media platforms. One particularly egregious example was a fake video, shared more than a thousand times, that showed fictional firefighters battling nonexistent flames beside fabricated aircraft wreckage. Although the account claimed the content was for “awareness and educational purposes,” many users appeared to accept it as authentic.
“It’s like, this is so ridiculous, but then you start looking at the comments on it, and people are actually believing this,” Smith said. “That was just extremely frustrating to me.”
The false content ranged from fabricated news articles claiming relatives of celebrities like Kid Rock, Keith Urban, and Bob Dylan had died in the crash to completely manufactured scenarios about the accident’s cause and aftermath.
Imran Ahmed, who heads the Center for Countering Digital Hate, explains that AI tools have dramatically accelerated the spread of misinformation. This becomes particularly dangerous following disasters, when the public hungers for information and is most vulnerable to deception.
“Disasters are tragic enough on their own, but they’re actually made worse by allowing AI-generated and algorithmically amplified lies to spread unchecked and potentially create real-world harm for people on the ground, victims’ families, but also local communities,” Ahmed stated.
His organization analyzed hundreds of misleading posts following recent disasters like the Los Angeles wildfires and Hurricane Helene. Their findings revealed that approximately 98% of fake posts on X, Meta’s platforms, and YouTube carried no moderation flags warning users about potentially false information.
Ahmed believes lawmakers have failed to properly regulate how social media companies handle AI disinformation. Without significant legal penalties, he argues, tech companies have little incentive to address the problem effectively.
“And so, as a result, what we have are people that pay lip service to the idea of safety,” Ahmed explained. “They understand that it matters, but actually do very little in practice.”
The problem extends beyond clearly fabricated content. After the UPS crash, X’s AI assistant Grok incorrectly claimed that a genuine photo showing Kentucky Governor Andy Beshear amid plane debris was actually from a previous disaster, casting doubt on factual reporting.
Julia Feerrar, head of digital literacy initiatives at Virginia Tech, offers practical advice for navigating this complex information landscape. She recommends pausing when encountering content that triggers strong emotional reactions and checking whether trustworthy sources are reporting the same information.
“The number one thing I tell people is to slow down and pause when we are seeing information that sparks a big emotional reaction,” Feerrar advised.
She acknowledges the time constraints during emergencies and suggests prioritizing fact-checking for information that might prompt action, such as making donations or seeking shelter.
Feerrar also emphasizes the importance of compassion toward those who fall for misleading content, noting that AI-generated disinformation is specifically designed to exploit human psychology.
“What are those moments of big emotions, or when you’re making a decision based on the information that you’re seeing?” she said. “That’s a time to take some extra time.”
11 Comments
It’s disheartening to see tragedy exploited in this way. My heart goes out to the victims’ families and the entire Louisville community. I hope they can find some solace and closure as the investigation progresses, without being further disrupted by false narratives.
The level of technical detail in some of these AI-generated claims is concerning. It suggests the misinformation is becoming more sophisticated and harder to detect. I wonder if there are any technological solutions, like digital watermarks or source verification, that could help combat this issue.
This is a sobering example of the potential downsides of AI technology. While the tools can be used for many beneficial purposes, they can also be exploited by bad actors to spread disinformation and sow confusion. Responsible development and deployment of these systems is essential.
The use of AI to generate false narratives and imagery is a concerning trend that we’ll likely see more of. While the technology has many beneficial applications, bad actors can leverage it to spread disinformation quickly and at scale. Vigilance from the public, media, and authorities is needed.
The rapid spread of this kind of AI-generated misinformation is a troubling development. I wonder if there are any technological solutions, like digital signatures or source verification, that could help combat the issue. It’s crucial that the public has access to reliable, fact-based information during emergencies.
I’m curious to know if the authorities have identified any specific individuals or groups behind the creation and distribution of this fabricated content. Holding the perpetrators accountable could be an important deterrent for future misinformation campaigns.
This is a stark reminder of the importance of media literacy and critical thinking when consuming information, especially during breaking news events. As individuals, we all have a role to play in verifying claims and stopping the spread of disinformation, whether it’s AI-generated or not.
I’m curious to learn more about the specific AI tools and techniques being used to create this fabricated content. Understanding the technical capabilities and limitations of these systems could help develop more effective countermeasures. Do we know if the content is being generated autonomously or with human intervention?
Tragic events like this plane crash are prime targets for exploitation by misinformation campaigns. The desire for rapid updates and explanations creates an information vacuum that bad actors try to fill. I hope investigators can work quickly to establish the facts and shut down any false narratives.
This is very troubling to hear about the spread of AI-generated misinformation following the tragic UPS plane crash. It’s important that the public has access to accurate, fact-based information during emergencies. I hope authorities are able to quickly identify and remove any fabricated content.
Pilots like Trevor Smith play a crucial role in combating this type of misinformation. Their technical expertise and careful analysis can help separate truth from fiction in the aftermath of incidents like this. I’m glad to see responsible voices stepping up to provide reliable information.