In a significant rebuke to government inaction, Parliament’s Science, Innovation and Technology Committee has criticized the responses from the government and Ofcom regarding online misinformation regulation. The committee expressed disappointment that while both entities agreed with most of their findings, they rejected key recommendations aimed at protecting social media users from harmful content.
The parliamentary committee published these responses today following their July report, which concluded that the UK’s Online Safety Act (OSA) fails to effectively combat the viral spread of misinformation online. The report had urged the government to take stronger action against algorithmic amplification of false content.
At the heart of the disagreement is the regulation of artificial intelligence platforms. The committee had recommended that generative AI platforms should be subject to online safety legislation, bringing them in line with other online services that carry high risks of spreading illegal or harmful content. However, the government rejected this recommendation, claiming that AI-generated content is already adequately regulated under the existing OSA framework.
This position stands in stark contrast to Ofcom’s earlier testimony to the committee, where the regulator admitted the legal position of AI technology was “not entirely clear” and acknowledged the need for further industry consultation about where “more can be done.”
Dame Chi Onwurah MP, Chair of the Science, Innovation and Technology Committee, voiced her frustration with the responses: “In their responses to the committee, both the government and Ofcom agreed with most of our conclusions – so why have they stopped short of accepting our recommendations?”
The committee’s report had also highlighted a critical issue underlying misinformation: the monetization of harmful content driven by social media companies’ digital advertising models. While acknowledging this problem, the government declined to commit to immediate action, stating it would merely keep the matter “under review” rather than implementing the committee’s recommended changes to digital advertising regulations.
This approach has drawn sharp criticism from Dame Chi Onwurah, who expressed skepticism about the government’s claim that the OSA already covers generative AI adequately. “The technology is developing at such a fast rate that more will clearly need to be done to tackle its effects on online misinformation,” she noted. “Additionally, without addressing the advertising-based business models that incentivise social media companies to algorithmically amplify misinformation, how can we stop it?”
The committee’s concerns come amid increasing real-world consequences of online misinformation. Dame Chi Onwurah issued a stark warning, stating that “public safety is at risk, and it is only a matter of time until the misinformation-fuelled 2024 summer riots are repeated.”
The dispute highlights growing tensions between parliamentary oversight bodies and government regulators over how to address the rapidly evolving challenges posed by social media platforms and AI technologies. As digital misinformation continues to impact public discourse and safety, the committee’s push for stronger regulatory frameworks reflects mounting pressure for more decisive government action.
Industry observers note that the UK’s approach to regulating online platforms and AI will likely influence international standards, making these disagreements particularly significant in the global context of digital governance. The committee’s critique suggests that despite the landmark Online Safety Act, significant gaps remain in the UK’s ability to protect citizens from the harmful effects of algorithmically amplified misinformation.
As technology continues to advance, the adequacy of current regulatory frameworks remains a critical point of contention between lawmakers seeking stronger protections and a government reluctant to impose additional restrictions on the rapidly evolving tech sector.
10 Comments
The viral spread of misinformation is a serious threat that requires a proactive and well-crafted regulatory response. While I respect the government’s position, I’m inclined to side with the committee’s view that generative AI platforms warrant specific oversight under the Online Safety Act.
I agree, the stakes are too high to leave emerging AI technologies unregulated when it comes to misinformation risks. Policymakers must act decisively but also thoughtfully to find the right regulatory balance.
As an AI enthusiast, I’m curious to learn more about the government’s reasoning for rejecting the committee’s recommendation. Transparency and a collaborative approach between policymakers, experts and industry will be key to developing effective regulations in this rapidly evolving space.
This is a nuanced issue without easy answers. I appreciate the government’s stance, but the committee’s concerns about AI-generated content seem well-founded. Ongoing dialogue and a flexible, evidence-based approach will be crucial as these technologies continue to evolve.
This is a concerning issue. Misinformation can spread rapidly online and pose real risks to public health and safety. Stronger regulation of AI platforms to combat this seems prudent, though the government’s rationale for rejecting the committee’s recommendation merits further examination.
I agree, the viral spread of misinformation is a complex challenge that requires a nuanced and proactive approach from policymakers. Oversight of AI content is crucial as these technologies become more advanced and influential.
The government’s stance seems shortsighted. Generative AI is a powerful and fast-evolving technology – bringing it under the Online Safety Act framework makes sense to ensure proper safeguards are in place. Protecting social media users from harmful content should be a top priority.
Exactly. With the rapid development of AI, the potential for misuse and spread of misinformation is substantial. Proactive regulation is needed to mitigate these risks and build public trust in these emerging technologies.
This is a complex issue with valid arguments on both sides. Regulating AI platforms to combat misinformation is crucial, but the government may have reasonable concerns about the scope and implementation. An open and evidence-based dialogue is needed to find the right balance.
While I appreciate the government’s perspective, I’m not fully convinced their existing OSA framework is sufficient to handle the unique challenges posed by AI-generated content. This warrants further scrutiny and debate to find the right regulatory approach.