A Guardian investigation has uncovered a sprawling network of far-right Facebook groups exposing hundreds of thousands of Britons to racist extremism and disinformation, functioning as what experts describe as an “engine of radicalisation.”
Unlike traditional extremist platforms, these groups are primarily moderated by ordinary citizens, many of retirement age, scattered across England and Wales. Despite their seemingly mainstream appearance, the groups have become breeding grounds for unchecked hate speech, conspiracy theories, and anti-immigrant rhetoric.
The investigation comes just weeks after 150,000 people participated in a far-right London protest whose scale caught authorities off guard. Researchers identified these Facebook groups by analyzing profiles of individuals involved in last summer’s riots following the Southport killings of three girls.
After examining more than 51,000 text posts across three of the largest public groups in the network, researchers found hundreds of concerning posts containing misinformation, far-right narratives, racist slurs, and white nativist rhetoric.
“What is new is that the online spaces amplify a lot of these dynamics,” explained Dr. Julia Ebner, a radicalisation researcher at the Institute for Strategic Dialogue. “The algorithmic amplification, the speed at which people can end up in a radicalisation engine… The digital age means that people trust content produced or spread by individual accounts, by influencers regardless of their ideological leanings, more than they tend to trust established institutions.”
The network’s administrators play a crucial role in its expansion. These moderators, primarily middle-aged Facebook users from diverse socioeconomic backgrounds, are responsible for group invitations, content moderation, and sharing information across multiple connected groups.
When approached by The Guardian, most administrators declined to comment. One moderator of six groups with nearly 400,000 combined members claimed far-right users were “deleted and blocked.” However, the investigation revealed numerous examples of extreme content persisting across the network.
The language directed at immigrants is particularly vitriolic, with dehumanizing terms like “criminal,” “parasites,” “primitive,” and “lice” appearing regularly. Muslims face similar characterizations, described as “barbaric,” “an army,” “archaic,” “medieval,” and “not compatible with the UK way of life.”
One post stated: “We need a humongous nit comb. To scrape the length and breast [sic] of the UK, to get rid of all the blood sucking lice out of our country once and for all!!”
Another claimed: “Our own government has put us all at risk by allowing these primitive minded people onto our land.”
The investigation sheds new light on how mainstream platforms like Facebook have become vectors for far-right radicalization. This stands in contrast to previous patterns where such content primarily flourished on fringe platforms like 4chan, Parler, and Telegram, which typically attracted younger audiences.
More concerning is the demographic shift: the investigation identified more than 40 administrators across the three analyzed groups, many of them men and women over the age of 60. This suggests extremist content is reaching demographics previously considered less susceptible to online radicalization.
The combined membership across the network totaled 611,289 as of July 29, 2025, though this figure likely includes some double-counting as individuals can join multiple groups.
When presented with The Guardian’s findings, Meta, Facebook’s parent company, reviewed the three analyzed groups and determined the content did not violate its hateful conduct policy. This response raises fresh questions about the effectiveness of platform moderation policies, particularly following Meta’s announcement of sweeping content moderation changes earlier this year.
Experts warn these online spaces create an environment conducive to real-world extremism, potentially contributing to incidents like last year’s summer riots. As online and offline extremism continue to intertwine, the role of mainstream social platforms in addressing these challenges faces increasing scrutiny.