The British government is facing harsh criticism for its decision to reject recommendations aimed at combating online misinformation, with critics arguing the move leaves the public vulnerable to harmful content that can fuel real-world violence.
Full Fact, a leading independent fact-checking organization, has condemned the government’s dismissal of proposals put forward by the Science, Innovation and Technology Committee to address the growing crisis of digital misinformation. The committee’s recommendations came in response to the violent far-right anti-immigration riots that swept across the United Kingdom during the summer of 2024.
Those disturbances, which caused significant damage in multiple cities and resulted in numerous arrests, were widely believed to have been exacerbated by social media algorithms that amplified false information and inflammatory content. Digital platforms’ role in disseminating misinformation became a focal point for public concern in the riots’ aftermath, prompting calls for stronger regulatory oversight.
Among the committee’s key recommendations were provisions to expand Ofcom’s regulatory authority over digital misinformation, impose stricter obligations on social media companies to remove verified false information, and create new legislative frameworks specifically targeting generative AI platforms—which experts fear could dramatically accelerate the spread of synthetic misinformation.
However, the government has largely rejected these proposals, maintaining that the existing Online Safety Act already provides adequate protection against the most harmful instances of misinformation, particularly those affecting children. Government officials have expressed concern that implementing the committee’s recommendations could infringe upon freedom of expression, a position that has drawn sharp rebuke from misinformation experts.
Azzurra Moores, policy lead at Full Fact, expressed profound disappointment with the government’s stance. “By rejecting the SIT Committee’s recommendations on AI-generated content and the Online Safety Act, the government is leaving the public exposed to fast-moving false, harmful, and misleading information online,” Moores stated. “This is a missed opportunity to strengthen our defences which currently fail to address all but a fraction of the problem.”
The rejection comes at a time when misinformation challenges are becoming increasingly complex and dangerous. The summer riots demonstrated how online falsehoods can rapidly translate into physical violence, causing significant public safety concerns. Security analysts have warned that without robust countermeasures, similar incidents could recur with greater frequency and intensity.
Moores further criticized the government’s approach as “business-as-usual” and cautioned that such complacency risks “putting them on the wrong side of history.” She emphasized the urgent need for stronger legal frameworks and government-led initiatives to ensure digital platforms adequately address misinformation threats and that Ofcom is properly equipped to fulfill its regulatory responsibilities.
“Without better legal standing for misinformation and tangible, government-led action to ensure platforms step up to the threat and that Ofcom is fit for its duties, the public remains staggeringly vulnerable to future information crises that begin online but spill out into the streets,” Moores warned.
The government’s decision highlights the ongoing tension between regulating harmful content and preserving free expression in digital spaces. While tech companies have implemented various content moderation measures, critics argue these efforts remain insufficient without stronger regulatory backing.
The debate occurs against the backdrop of increasing global concern about misinformation’s impact on democratic processes, public health, and social cohesion. Several other nations, particularly in the European Union, have moved forward with more comprehensive regulatory frameworks to address digital misinformation, potentially leaving the UK behind in developing effective countermeasures against this evolving threat.
As generative AI technologies continue to advance and make the creation of convincing false content increasingly accessible, experts warn that the government’s reluctance to adopt more stringent measures could leave British society particularly vulnerable to future waves of dangerous misinformation.
8 Comments
This is a complex issue with no easy solutions. While the government should take a stronger stance against digital misinformation, heavy-handed regulation could backfire and infringe on free speech. A balanced, collaborative approach involving tech companies, fact-checkers, and the public is needed.
I agree, a nuanced approach is required. Empowering independent fact-checkers and increasing digital literacy are good starting points, but legislation must be carefully crafted to uphold democratic principles.
While I understand the government’s hesitation to impose sweeping regulations, the consequences of inaction could be severe. Misinformation has real-world impacts, as seen in the 2024 riots. A more proactive, evidence-based approach is needed to protect the public.
The government’s dismissal of the committee’s recommendations is concerning. Misinformation and disinformation are eroding public trust and enabling the spread of extremism. A robust, multi-stakeholder response is essential to safeguard our democratic institutions.
I agree. The government must take this threat seriously and work collaboratively with tech companies, civil society, and the public to develop effective solutions. Inaction is not an option.
The rise of misinformation has become a major threat to public discourse and social cohesion. I hope the government will heed the committee’s recommendations and work closely with all stakeholders to address this pressing challenge effectively.
Tackling online misinformation requires a multi-pronged strategy. Enhancing transparency, accountability, and media literacy are all crucial components that deserve serious consideration.
This is a complex issue with no easy answers. While I’m sympathetic to the government’s concerns about overregulation, the consequences of inaction could be severe. I hope they’ll reconsider the committee’s recommendations and engage in a transparent, inclusive process to address this challenge.