AI’s Role in the Spread of Misinformation During Ireland’s Fuel Protests
For decades, a persistent rumor claimed that Brazilian football legend Sócrates had played for University College Dublin’s football club. Some versions suggested he played for the graduate club Pegasus, while others claimed he merely studied medicine at UCD but didn’t play football. Still others insisted he attended the Royal College of Surgeons in Ireland but declined to join its football team after seeing the poor standard of play.
None of these stories were true. Yet they persisted through decades of retelling, as harmless as they were entertaining.
This phenomenon, the fog of misinformation that clouds facts regardless of the original teller’s intent, took on a more sinister dimension during last week’s fuel protests across Ireland, where artificial intelligence significantly amplified the problem.
As tensions escalated on Irish roads, distinguishing fact from fiction became increasingly difficult. An Garda Síochána was forced to issue warnings about fake memos circulating online. Misleading and recycled videos and images spread rapidly, further inflaming an already volatile situation.
The Defence Forces found themselves issuing clarifications that their activities in Limerick were merely routine training exercises for United Nations Interim Force in Lebanon (UNIFIL) operations—not preparations for some kind of domestic takeover, as some social media posts had suggested.
Even Ciarán Mullooly, a Member of the European Parliament for Independent Ireland and a former journalist, referenced misleading imagery during an EU meeting before retracting his statements. Meanwhile, actor Kevin Sorbo shared a video of an anti-immigration protest from the previous year, incorrectly presenting it as footage from the recent fuel protests.
What makes today’s misinformation landscape particularly dangerous is how AI has transformed both the speed and sophistication of false narratives. AI chatbots like Grok can inadvertently incorporate false narratives circulating online into what they present as straightforward summaries of events, giving misinformation an air of algorithmic authority.
In the pre-AI era, misinformation spread more slowly, allowing time for corrections to catch up. The natural mutation of stories took time, and falsehoods often burned themselves out before gaining widespread traction. During the COVID-19 pandemic, for instance, rumors about imminent government lockdowns were common, but their inconsistencies made them easier to identify as false.
AI has not only accelerated the spread of rumors but has weaponized the fog of uncertainty that accompanies tense situations. The same rumor can now be quickly repurposed with convincing images and videos tailored to target multiple audiences in ways each would find credible, making confirmation bias far more likely to influence public opinion.
The goal behind spreading misinformation isn’t necessarily to cause dramatic shifts in media consumption patterns. Rather, it’s the gradual erosion of trust in professional media outlets and democratic institutions—a pattern that has already damaged other democracies worldwide.
Combating this trend presents significant challenges, as those spreading misinformation deploy simple, emotionally resonant messages backed by increasingly sophisticated technology.
While completely eliminating misinformation is impossible, reducing its impact begins with applying common sense. Low-stakes stories like the Sócrates-UCD connection or the exaggerated tale of André the Giant and Samuel Beckett persist because they’re entertaining and inconsequential.
For consequential matters like fuel protests and government responses, however, maintaining a healthy skepticism becomes crucial. With AI-generated content becoming increasingly prevalent, citizens must exercise particular caution when encountering information that confirms their existing beliefs.
The true danger lies not just in misinformation itself, but in unwittingly becoming part of its dissemination network, allowing false narratives to continuously reframe public discourse and create deeper societal divisions.
8 Comments
While the Socrates urban legend is a relatively harmless example, the real-world consequences of misinformation during the Irish protests are deeply concerning. This underscores the urgent need for solutions to address this growing challenge to public trust and social stability.
The impact of misinformation during the Irish fuel protests highlights how quickly falsehoods can spiral out of control, especially in sensitive political and social situations. Rigorous verification of information sources is clearly needed to prevent such scenarios.
Absolutely. Policymakers and tech companies will need to work together to develop more robust systems for identifying and containing the spread of misinformation online.
The role of AI in amplifying misinformation is particularly worrying. As these technologies become more advanced, the potential for them to be weaponized to spread false narratives will only increase. Rigorous regulation and oversight of AI systems will be critical going forward.
Agreed. Policymakers and tech companies will need to work closely to ensure AI is developed and deployed responsibly, with strong safeguards against misuse.
This is a concerning trend. Misinformation can quickly erode public trust and exacerbate existing tensions. It’s crucial that we find ways to combat the spread of false narratives, especially when they’re amplified by new technologies like AI.
Agreed. Fact-checking and media literacy initiatives will be key to helping the public navigate the information landscape more effectively.
This is a complex issue with no easy solutions. But the stakes are high, as misinformation can have serious real-world consequences for public safety and social cohesion. Innovative approaches from multiple stakeholders will be needed to address this growing threat.