The government is facing renewed pressure to strengthen online safety laws after rejecting key recommendations designed to curb the viral spread of misinformation, despite agreeing with most of MPs’ findings on the scale of the problem.

The Science, Innovation and Technology Committee today published the government's and Ofcom's responses to its July report, which concluded that the Online Safety Act (OSA) falls short in addressing algorithmic amplification of false content, leaving users vulnerable to rapidly spreading misinformation – much of which is being accelerated by generative AI technologies.

While both the government and Ofcom acknowledged the committee’s assessment that misinformation poses significant risks to public safety and democracy, ministers declined to adopt several major recommendations that would have strengthened the regulatory framework.

Most notably, the government rejected calls to extend online safety legislation to explicitly cover generative AI platforms, despite the committee’s argument that these technologies can produce and distribute large volumes of false content at unprecedented speed and scale. The government maintained that AI-generated content already falls under the OSA’s jurisdiction – a position that directly contradicts Ofcom’s earlier testimony to the committee.

During previous hearings, the communications regulator had stated that the legal status of generative AI under the current legislation was “not entirely clear” and suggested additional regulatory frameworks might be necessary to address these emerging technologies effectively.

MPs also highlighted a fundamental issue at the heart of online misinformation: the digital advertising business models that financially reward platforms when harmful content goes viral. The committee emphasized that meaningful regulation cannot occur without addressing the economic incentives that drive social media companies to algorithmically amplify sensationalist and often misleading content.

While acknowledging the connection between advertising revenue and content amplification, the government stopped short of committing to reform in this area, stating only that the issue would be kept “under review” – a response critics view as inadequate given the urgent nature of the problem.

Committee chair Dame Chi Onwurah MP expressed frustration at the government's reluctance to take decisive action. "If the government and Ofcom agree with our conclusions, why stop short of adopting our recommendations?" she asked. "The committee is not convinced by the argument that the OSA already covers generative AI. The technology is evolving far faster than the legislation, and more will clearly need to be done."

She further emphasized that failure to address the monetization of harmful content leaves a significant regulatory gap. "Without addressing the advertising-based models that incentivize platforms to algorithmically amplify misinformation, how can we stop it?" she said.

The committee’s concerns come amid growing evidence that online misinformation has real-world consequences. Onwurah issued a stark warning about the potential for future civil unrest, stating, “It is only a matter of time until the misinformation-fuelled 2024 summer riots are repeated. The government urgently needs to plug the gaps in the Online Safety Act before further harm occurs.”

The debate over online safety regulation reflects broader tensions between technological innovation, freedom of expression, and public safety. Technology companies have typically resisted additional regulatory oversight, arguing that excessive restrictions could hamper innovation and economic growth in the digital sector, which represents an increasingly important part of the UK economy.

However, public concern about the spread of false information online has grown significantly following several high-profile incidents where misinformation contributed to civil unrest, election interference, and public health risks during the COVID-19 pandemic.

The committee’s report and the government’s response come at a critical juncture for digital regulation in the UK. As generative AI technologies become more sophisticated and widely accessible, questions about accountability, transparency, and the adequacy of existing regulatory frameworks are likely to intensify.

For now, the gap between the committee’s recommendations and the government’s willingness to implement them suggests that the struggle to effectively regulate online harms remains unresolved, leaving users potentially exposed to algorithmic amplification of harmful content.


