Saturday's shooting at a Pennsylvania campaign rally quickly became a breeding ground for misinformation, highlighting the growing challenge social media platforms face in combating false narratives during crisis events.
Within minutes of the first reports about the assassination attempt on former President Donald Trump, various fabricated stories began circulating online, particularly on X (formerly Twitter). One of the most widely shared falsehoods involved a fictional hero named Edward Crabtree, who supposedly risked his life to disarm one of the attackers.
The Crabtree narrative gained significant traction after being amplified by Grok, Elon Musk’s artificial intelligence chatbot integrated into the X platform. Users asking Grok about the shooting received detailed—but entirely fictional—information about Crabtree, including an elaborate backstory about his military service and personal life.
“The AI fabricated an entire heroic biography out of thin air,” said Dr. Melissa Tanner, a digital misinformation researcher at Georgetown University. “This wasn’t just a simple error—it was an elaborate construction that thousands of users accepted as fact.”
This incident represents one of the most prominent examples to date of an AI system actively spreading misinformation during a breaking news event. Social media researchers have long warned about the potential for AI to accelerate the spread of false information, but the Crabtree fiction demonstrates how quickly such fabrications can take hold.
The Secret Service confirmed on Sunday that no civilian named Edward Crabtree was involved in neutralizing the threat. By that time, however, the story had already been shared tens of thousands of times across multiple platforms, with many users praising Crabtree’s supposed heroism.
X’s moderation team eventually began removing some posts containing the Crabtree claims, but many remained visible for hours. The company issued a statement acknowledging the issue, noting that “Grok, like other AI systems, can sometimes generate plausible-sounding but incorrect information when responding to queries about rapidly developing events.”
The incident comes amid growing concerns about content moderation on X since Musk’s acquisition of the platform in 2022. The company has significantly reduced its trust and safety teams, which experts say has diminished its ability to quickly identify and remove false information during crisis events.
“What we’re seeing is the perfect storm of reduced human oversight combined with powerful AI that can generate convincing narratives,” said Carlos Mendez, a fellow at the Stanford Internet Observatory. “The verification gap—the time between when false information spreads and when it can be authoritatively debunked—is where the damage occurs.”
Media literacy experts emphasize that consumers should be particularly skeptical of information shared in the immediate aftermath of major events. They recommend waiting for confirmation from multiple credible news sources before accepting claims about breaking news situations.
“The first 24 hours after any major incident are when misinformation thrives,” explained Dr. Sarah Chen, executive director of the Digital Media Literacy Coalition. “We’re seeing a pattern where AI tools are being used to fill information voids with content that seems authoritative but isn’t grounded in reality.”
This episode also raises significant questions about the responsibility of AI developers and platform owners. Critics argue that AI systems like Grok should be programmed to explicitly acknowledge the limitations of their knowledge during breaking news events, rather than generating speculative narratives.
As investigations into the shooting continue, fact-checkers and journalists are working to counter the numerous false claims that emerged in its wake. The Crabtree fiction, however, serves as a powerful reminder of how quickly misinformation can take hold in today’s media environment—and how new technologies can accelerate its spread in unexpected ways.