A Sri Lankan social media entrepreneur has been linked to a sophisticated network of Facebook pages spreading anti-migrant content targeting UK audiences, according to an investigation by The Times.

The operation, described as an “AI factory,” has been generating artificially created content designed to inflame anti-immigrant sentiment across Britain through a coordinated network of social media channels. The investigation revealed that the content is being produced from Sri Lanka, far from the British communities it aims to influence.

At the center of the operation is a Sri Lankan digital entrepreneur who has built a business model around creating inflammatory content aimed at exploiting social and political tensions in the UK. The network operates multiple Facebook pages that appear to be local community forums or news outlets but actually distribute AI-generated propaganda.

The investigation found that these pages regularly publish misleading or entirely fabricated stories about migrants and asylum seekers, often featuring AI-generated images and false narratives designed to stoke fear and resentment. The content frequently portrays immigrants as criminals, depicts refugee accommodation as luxury hotels, and promotes various conspiracy theories about government immigration policies.

Social media analysts note that such operations have become increasingly sophisticated, making it difficult for average users to distinguish between genuine community concerns and artificially manufactured outrage. The content is specifically crafted to trigger emotional responses that drive engagement through shares, comments, and reactions.

“This represents a disturbing evolution in disinformation tactics,” said Dr. Emma Clarke, a researcher at the Oxford Internet Institute who reviewed the findings. “We’re seeing foreign actors using AI to mass-produce culturally tailored content that exploits existing tensions within target countries. The geographic distance makes regulation extremely challenging.”

The business model appears to be driven primarily by advertising revenue, with controversial content generating high engagement rates that translate to increased ad impressions. The Times investigation revealed that some of these pages have amassed hundreds of thousands of followers, creating a significant reach for the misinformation they spread.

Meta, Facebook’s parent company, has faced criticism for failing to detect and remove these networks despite their violation of platform policies regarding coordinated inauthentic behavior and hate speech. When approached by The Times, Meta indicated they were investigating the network and would take appropriate action.

The investigation raises serious questions about the vulnerability of social media platforms to foreign manipulation and the effectiveness of existing content moderation systems. It also highlights how easily AI technology can be deployed to create convincing but entirely fabricated narratives targeted at specific demographics.

UK lawmakers have expressed concern about the findings. “This shows exactly why we need stronger digital regulation,” said MP Catherine West, who sits on the Digital, Culture, Media and Sport Committee. “Foreign actors are exploiting our social divisions for profit, and our current regulatory framework is simply not equipped to handle these sophisticated operations.”

Immigration experts warn that such coordinated campaigns can have real-world consequences, potentially influencing public policy through manufactured outrage. “When artificial narratives dominate public discourse, it becomes increasingly difficult to have rational debates about immigration policy based on facts,” said Jonathan Richards, director of the Migration Policy Institute.

The Sri Lankan connection also demonstrates how digital disinformation has become a global industry, with operators based in low-cost countries targeting wealthy nations where advertising revenues are higher.

As authorities consider how to respond to this threat, experts recommend increased digital literacy campaigns to help the public better identify AI-generated content and coordinated influence operations. Meanwhile, The Times investigation has provided platform operators and regulators with detailed evidence about how these networks operate, potentially enabling more effective countermeasures in the future.


8 Comments

  1. I’m curious to learn more about the business model behind this ‘AI factory’ and how they are able to scale the production of misleading content. What techniques are they using, and how can we counter the spread of this kind of propaganda?

  2. I appreciate the Times investigation for shining a light on this shadowy operation. It’s a good reminder that we should be vigilant about the sources and motives behind online content, even if it appears to be from local community forums or news outlets.

  3. Interesting investigation into AI-driven anti-migrant propaganda targeting the UK. It’s concerning to see such sophisticated disinformation campaigns originating overseas. This highlights the need for better online content moderation and transparency around sponsored political messaging.

  4. Isabella N. Martin:

    While the immigration debate can be polarizing, this type of manipulative, AI-generated content is deeply troubling. Spreading misinformation to inflame social tensions is unethical and corrosive to public discourse. We need more responsible and fact-based discussion around these complex issues.

    • William Hernandez:

      Agreed. Artificial amplification of divisive narratives is a serious threat to social cohesion. Regulators and platforms need to crack down on coordinated disinformation campaigns like this one.

  5. Emma R. Johnson:

    This is a troubling case study in the ways AI can be weaponized for political manipulation. I hope regulators and tech platforms can work together to get ahead of these kinds of coordinated disinformation campaigns before they do even more damage.

  6. As someone who follows the mining and commodities sector, I’m concerned about the potential intersection between this type of online manipulation and issues like resource extraction, land rights, and environmental impacts. Disinformation could be used to obscure important debates in these areas as well.

  7. Oliver Johnson:

    Scary to see how advanced these AI-powered disinformation campaigns have become. We really need robust fact-checking and media literacy efforts to inoculate the public against these kinds of sophisticated propaganda tactics. Holding the perpetrators accountable is also critical.

