In a stark warning to the government, a parliamentary committee has declared that without proper measures to combat online misinformation, the UK faces the imminent risk of repeating the violent riots that swept across the country during the summer of 2024.

Chi Onwurah, who chairs the Commons Science, Innovation and Technology Committee, criticized what she described as governmental complacency toward digital threats, stating that this negligence is actively endangering public safety.

The committee expressed disappointment with the government’s response to their recent report that linked social media companies’ business models to the widespread disturbances following the Southport murders, in which three children lost their lives.

In its official reply to the committee’s findings, the government rejected calls for new legislation specifically targeting generative artificial intelligence platforms. Ministers also declined to intervene directly in the online advertising market—a system the committee claims incentivized the creation and spread of harmful content in the aftermath of the Southport attack.

“The government urgently needs to plug gaps in the Online Safety Act, but instead seems complacent about harms from the viral spread of legal but harmful misinformation,” Onwurah stated. “Public safety is at risk, and it is only a matter of time until the misinformation-fuelled 2024 summer riots are repeated.”

The committee’s report, titled “Social Media, Misinformation and Harmful Algorithms,” highlighted how inflammatory AI-generated images circulated on social media platforms following the Southport stabbings. It warned that modern AI tools have significantly lowered the barriers to creating deceptive, hateful, and harmful content.

In its response published Friday, the government maintained that new legislation is unnecessary, arguing that AI-generated content already falls under the purview of the Online Safety Act (OSA), which regulates material on social media platforms. Officials claimed that introducing further laws at this stage would complicate the act’s implementation.

However, the committee pointed to testimony in which an official from Ofcom, the communications regulator, acknowledged that AI chatbots are not fully covered by the current legislation and that further consultation with technology industry stakeholders would be necessary.

The government also sidestepped the committee’s recommendation to establish a new body specifically designed to address social media advertising systems that profit from harmful and misleading content. This includes websites that spread false information about the identity of the Southport murderer.

While acknowledging concerns about transparency in online advertising, the government stated it would continue reviewing industry regulations. It referenced an online advertising taskforce that aims to increase transparency and accountability, particularly regarding illegal advertisements and protecting children from harmful products and services.

On the matter of researching how social media algorithms amplify harmful content, the government deferred responsibility to Ofcom, stating the regulator was “best placed” to determine whether further research should be undertaken. Ofcom, in its response, acknowledged it had conducted work on recommendation algorithms but recognized the need for broader research across academic and industry sectors.

Ministers also rejected the committee’s recommendation for an annual parliamentary report on online misinformation, arguing such transparency could potentially expose and hinder government operations aimed at limiting the spread of harmful information online.

The UK government distinguishes between “misinformation,” which it defines as the inadvertent spread of false information, and “disinformation,” which involves the deliberate creation and dissemination of false information intended to cause harm or disruption.

Onwurah specifically criticized the government’s positions on AI regulation and digital advertising. “The committee is not convinced by the government’s argument that the OSA already covers generative AI, and the technology is developing at such a fast rate that more will clearly need to be done to tackle its effects on online misinformation,” she said.

She concluded with a challenging question: “Without addressing the advertising-based business models that incentivize social media companies to algorithmically amplify misinformation, how can we stop it?”



© 2026 Disinformation Commission LLC. All rights reserved.