UK Strengthens Online Laws to Combat Foreign State Disinformation

Social media platforms will soon face a legal obligation to proactively identify and remove disinformation from foreign state actors that threatens UK national security, under amendments to the country’s internet safety legislation.

The UK government announced plans to link the National Security Bill with the Online Safety Bill, creating new measures that will require tech companies to tackle state-sponsored disinformation campaigns or face severe penalties, including fines of up to ten percent of their annual global turnover.

The amendment adds a new Foreign Interference Offence to the list of priority offences in the Online Safety Bill, targeting malicious content aimed at undermining democratic institutions or interfering with legal processes in the UK.

Digital Secretary Nadine Dorries highlighted Russia’s invasion of Ukraine as a prime example of the threat, stating: “The invasion of Ukraine has yet again shown how readily Russia can and will weaponise social media to spread disinformation and lies about its barbaric actions, often targeting the very victims of its aggression.”

“We cannot allow foreign states or their puppets to use the internet to conduct hostile online warfare unimpeded,” Dorries added.

Under the new provisions, tech platforms including social media networks, search engines, and websites allowing user-generated content will need to conduct risk assessments specifically for content that falls under the Foreign Interference Offence. They must implement proportionate systems to minimize users’ exposure to such material.

Security Minister Damian Hinds emphasized that online information operations have become a fundamental component of state threats. “Disinformation is often seeded by multiple fake personas, with the aim of getting real users, unwittingly, then to ‘share’ it,” Hinds explained. “We need the big online platforms to do more to identify and disrupt this sort of coordinated inauthentic behaviour.”

The legislation will target content from fake accounts set up by entities acting on behalf of foreign states to influence democratic processes such as elections, to manipulate court proceedings, or to distribute hacked information intended to undermine UK institutions.

Tech companies may need to implement measures to prevent the creation of large-scale fake accounts or combat bot networks used in malicious disinformation campaigns. When moderating content, platforms will be required to assess whether there are reasonable grounds to believe specific material constitutes state-sponsored disinformation involving deliberate misrepresentation.

The UK communications regulator Ofcom will develop codes of practice to guide platforms in fulfilling these duties and will have enforcement powers including significant financial penalties and the ability to block non-compliant websites.

The broader Online Safety Bill already requires companies to address illegal content and misinformation that harms individuals. Platforms likely to be accessed by children must protect young users from harmful misinformation, while major tech platforms must set clear terms of service regarding harmful content accessible to adults.

The government emphasized that the legislation includes strong protections for freedom of expression, with specific safeguards against over-removal of content. Platforms will be required to maintain effective reporting and appeals mechanisms so users can challenge content removal decisions they believe are unjustified.

The amendment comes as democratic nations worldwide grapple with increasingly sophisticated information warfare tactics deployed by hostile states. Russia’s invasion of Ukraine has highlighted the urgent need for more robust defenses against coordinated disinformation campaigns that can polarize societies, undermine trust in institutions, and interfere with democratic processes.

The UK’s approach represents one of the most comprehensive regulatory frameworks globally for addressing state-backed disinformation while attempting to balance security concerns with free speech protections.


