In late summer 2024, the United Kingdom experienced a wave of violent riots triggered by the tragic killing of three young girls in Southport. New research from the London School of Economics has revealed how artificial intelligence-generated misinformation played a critical role in fueling this civil unrest by spreading false narratives and racist conspiracy theories.
The research, funded by the LSE Urgency Fund, examined how social media platform X (formerly Twitter) amplified visual representations of racist conspiracy theories that directly contributed to the violence. While Telegram served primarily as an organizational tool for rioters, X’s recommendation algorithms dramatically increased the visibility of inflammatory content.
Researchers manually collected and analyzed posts from two verified X accounts between July 4 and August 4, 2024—one belonging to a UK-based far-right political party with white nationalist connections, and another categorized as “Media & News” that was among the first to spread false information about the Southport attacker’s identity. Both accounts displayed the platform’s blue verification checkmark.
The study identified four key ways X’s systems undermined democratic stability during this period.
First, posts featuring racist conspiracy theories received approximately 30% more algorithmic amplification than other content. Theories like “White Genocide” and “The Great Replacement”—which claim white populations are being deliberately replaced through immigration and demographic change—were not only visually expressed but significantly boosted by X’s recommendation systems.
“Both theories evoke an apocalyptic fate and therefore appeal to a crusading mentality that has occasionally culminated in violence,” the researchers noted.
Second, AI-generated images proved particularly effective at spreading racist narratives. On one of the examined accounts, 39 posts containing AI-created visuals representing racist conspiracy theories attracted nearly three times more views and engagement than other content. One such post received 11 million views in October 2024.
These AI-generated images frequently depicted Muslim men as sexual predators targeting white British girls. Following the Southport attack, the visual rhetoric shifted toward glorifying white British “heroes” confronting racialized “enemies,” effectively creating memes that fantasized about violence against Muslim immigrants.
“X hasn’t just enabled its users to freely share Generative AI-created images featuring stereotypes that symbolically establish a hierarchy between White and Black individuals, Christians and Muslims, natives and so-called invaders,” the researchers concluded. “Its algorithms have actively amplified this content, further reinforcing racist and Islamophobic fantasies that have historically fuelled violence.”
Third, the platform facilitated the spread of outright misinformation. Videos were frequently presented with manipulated captions suggesting evidence of an “invasion,” failed integration, or “Islamisation” of British institutions like the police and parliament. None of these posts were flagged as false information by the platform.
Fourth, X’s subscription-based verification system granted legitimacy to accounts spreading conspiracy theories. The blue checkmark, once earned through a rigorous verification process but now available through paid subscription, is still widely perceived as a marker of credibility. Verification also comes with “largest reply prioritisation,” further amplifying questionable content.
Of the 388 posts analyzed from one “Media & News” account, 35 came from verified accounts, with 14 featuring images promoting racist conspiracy theories. These verified accounts were based across Europe, including the UK, Hungary, Spain, and France.
The researchers argue that X’s algorithmic recommendation systems, relaxed content moderation, integration with AI image generation tools, and current verification model have transformed the platform into what some observers have called a “polarisation engine” that undermines democracy.
“The combination of algorithms, racism, Islamophobia, fake news and conspiracy theories poses a direct threat, not only to marginalised groups but to democracy itself,” the study warns.
The researchers call for more effective regulation of social media platforms and greater citizen responsibility in addressing these issues, suggesting that collective action could help mitigate the dangerous influence of algorithmically amplified extremist content.
As technology continues to evolve faster than regulatory frameworks, the Southport riots serve as a sobering case study of how AI-generated misinformation can leap from digital screens into real-world violence, posing significant challenges to social cohesion and democratic institutions.
8 Comments
The findings are sobering. While social media platforms enable important dialogue, their amplification of misinformation can have devastating real-world consequences. Policymakers and platforms must work together to find solutions that balance free speech with public safety.
Interesting findings on the role of social media misinformation in fueling the UK riots. It’s concerning how easily false narratives can spread and contribute to real-world violence. Platforms need to do more to address this issue proactively.
Agreed. Responsible content moderation and algorithmic accountability are critical to prevent the amplification of harmful misinformation, especially on verified accounts. The implications for public safety are serious.
The role of verified accounts in spreading misinformation is particularly concerning. Platforms must re-evaluate their verification processes and take stronger action against abuse. Fact-checking and source transparency should be core platform features, not afterthoughts.
This research highlights the importance of media literacy and critical thinking when consuming online information, especially around sensitive topics. Verifying sources and fact-checking claims is crucial to prevent the spread of dangerous misinformation.
You make a good point. Empowering users to assess the credibility of content is just as important as platform-level interventions. Developing those skills can help build resilience against manipulative information campaigns.
This research underscores the complex challenges of content moderation in the digital age. Balancing free expression, transparency, and accountability is no easy feat. I’m curious to learn more about the specific policy recommendations that emerge from this study.
Good point. Effective solutions will likely require a multifaceted approach, including platform reforms, digital literacy initiatives, and potential regulatory frameworks. Striking the right balance is crucial to mitigate harm while preserving democratic principles.