UK Parliamentary Committee Calls Online Safety Act Insufficient, Urges Stronger Measures
A report released today by the Science, Innovation and Technology Committee (SITC) criticizes the UK’s Online Safety Act (OSA) as inadequate to address the full spectrum of online harms. While acknowledging the OSA as a positive first step, the committee warns that substantially more robust measures are needed to protect the public.
The parliamentary committee has outlined five core principles they believe should underpin a strengthened online safety regime: public safety, free and safe expression, responsibility for content, user control over content and data, and technological transparency.
In a direct reference to last summer’s civil unrest, the report points to social media’s role in amplifying misinformation that fueled riots across the UK in 2024. The committee states that social media platforms “often enabled or even encouraged” the viral spread of misleading and hateful content through their recommendation algorithms – potentially profiting from the turmoil through their advertising and engagement-driven business models.
A critical failing of the current OSA, according to the committee, is its inability to address “legal but harmful content” that recommendation algorithms can amplify to millions of users within hours. This regulatory gap leaves the public vulnerable to manipulation and misinformation campaigns that may not cross legal thresholds but can nevertheless cause significant social harm.
“Social media companies are not just neutral platforms but actively curate what you see online, and they must be held accountable,” said Dame Chi Onwurah MP, Chair of the SITC. The report recommends imposing new duties on platforms to deprioritize content identified as misleading by independent fact-checkers.
The committee expressed frustration at the lack of transparency from tech companies regarding their algorithmic systems, noting that effective regulation is impossible without understanding how these recommendation engines function. As a remedy, the report calls for government-commissioned independent research to inform the development of new standards and regulatory requirements.
Another significant shortcoming highlighted in the report is the OSA’s inadequate coverage of generative AI technologies, which have evolved rapidly since the act was drafted. The committee urges immediate legislative action to address the unique challenges posed by AI content generation platforms, which can create convincing but false information at unprecedented scale.
The report also identifies confusion between regulators and government departments over responsibility for AI oversight and misinformation control – a situation that requires urgent clarification to prevent harmful content from slipping through regulatory cracks.
The committee’s investigation uncovered concerning dynamics in the digital advertising ecosystem, where platform business models inherently reward engaging content regardless of its veracity or potential harm. This creates perverse incentives across the online landscape, with both platforms and advertisers described as “unable or unwilling” to address the monetization of harmful content.
“Social media can undoubtedly be a force for good, but it has a dark side,” Dame Chi Onwurah noted. “The viral amplification of false and harmful content can cause very real harm – helping to drive the riots we saw last summer. These technologies must be regulated in a way that empowers and protects users, whilst also respecting free speech.”
The committee emphasized that their recommendations are not intended to censor legal expression, but rather to impose proportionate restrictions on the algorithmic amplification of content that independent fact-checkers have identified as misleading.
The report comes amid growing international concern about social media’s societal impact and represents one of the most detailed parliamentary examinations of online safety regulation in the UK to date. The committee has indicated it will continue investigating online harms in the coming months, with a particular focus on impacts on young people.
6 Comments
The UK’s Online Safety Act seems to have fallen short in addressing the serious issue of online misinformation. Stronger measures are clearly needed to protect the public and ensure responsible content moderation by platforms.
Interesting to see the parliamentary committee’s recommendations for a strengthened online safety regime. Transparency, user control, and technological accountability should be core principles guiding any effective legislation.
This report underscores the urgent need for more robust online safety regulations. Misinformation has become a serious threat to public discourse, and social media platforms must be compelled to take greater responsibility.
This parliamentary report rightly highlights the role social media played in amplifying false narratives and fueling unrest during the 2024 riots. Platforms must be held accountable for the harms caused by their engagement-driven algorithms.
I agree, the lack of transparency around platform algorithms is a major concern. More oversight and user control over content is essential.
While the Online Safety Act was a positive first step, it’s concerning that it fails to address the full spectrum of online harms. Strengthening the legislation with clear principles around public safety and free expression is crucial.