Parliamentary Committee Launches Inquiry into Social Media Algorithms and Harmful Content Following UK Riots
The UK Parliament’s Science, Innovation and Technology Committee has launched an investigation into how social media algorithms and artificial intelligence contribute to the spread of harmful content. The inquiry comes in direct response to the violent anti-immigration demonstrations that swept across several UK cities in July and August 2024.
The riots, which targeted mosques and accommodation housing asylum seekers, were fueled in part by false information circulating on social media about the killing of three children in Southport. The inquiry marks the first major investigation by the newly appointed Commons committee.
Committee Chair Chi Onwurah MP emphasized the urgency of the inquiry, stating, “The violence we saw on UK streets this summer has shown the dangerous real-world impact of spreading misinformation and disinformation across social media. We shouldn’t accept the spread of false and harmful content as part and parcel of using social media.”
The probe will examine multiple aspects of how digital platforms operate, including the business models that potentially incentivize the spread of misleading content. Investigators will specifically analyze how ranking algorithms used by social media companies and search engines may have contributed to the summer riots by amplifying false narratives.
“This is an important opportunity to investigate to what extent social media companies and search engines encourage the spread of harmful and false content online,” Onwurah added. “We’ll examine how these companies use algorithms to rank content, and whether their business models encourage the spread of content that can mislead and harm us.”
The committee has established a comprehensive scope for its investigation. Key areas of focus include the algorithmic systems that determine what content receives prominence on platforms, how generative AI and large language models contribute to creating and spreading misinformation, and the direct role these technologies played in the summer riots.
The inquiry will also evaluate the effectiveness of current and proposed UK regulations, including the Online Safety Act, which aims to impose new responsibilities on technology platforms. Lawmakers will consider whether additional measures are needed to protect the public while maintaining appropriate freedoms of expression.
The role of regulatory bodies will be scrutinized as well. The committee plans to assess the responsibilities and effectiveness of Ofcom (the UK’s communications regulator) and the National Security Online Information Team in preventing the spread of harmful content.
This investigation represents a significant step in the UK government’s approach to digital regulation, marking a shift toward more direct accountability for technology platforms. It follows growing international concern about the societal impacts of algorithmic systems that optimize for user engagement rather than information accuracy or social wellbeing.
The committee has issued a call for written submissions addressing their terms of reference, with a deadline of December 18. These submissions are expected to come from a wide range of stakeholders, including technology companies, academic experts, civil society organizations, and potentially individuals affected by the summer riots.
The inquiry comes amid a global reckoning with the power of social media platforms and their influence on public discourse and social stability. Similar investigations have been conducted in the European Union and United States following incidents where online misinformation contributed to real-world harm.
The committee’s findings could potentially inform future legislation and regulatory approaches not just in the UK but internationally, as governments worldwide grapple with similar challenges at the intersection of technology, information integrity, and public safety.
8 Comments
The UK Parliament’s decision to investigate the role of social media algorithms in the proliferation of harmful content is a welcome and necessary step. Unchecked, these systems can have devastating real-world impacts, as seen in the recent riots.
This is an important and timely inquiry. The detrimental impact of social media-fueled misinformation campaigns cannot be overstated. I hope the investigation leads to meaningful reforms and better safeguards against the exploitation of these powerful algorithmic tools.
This inquiry is sorely needed. Social media algorithms have enabled the rapid spread of misinformation, which can have devastating real-world consequences as we saw with the UK riots. Examining how these systems contribute to the proliferation of harmful content is crucial.
Examining the link between social media algorithms and the spread of harmful content is a critical endeavor. The UK riots are a stark reminder of the real-world dangers posed by the unchecked proliferation of misinformation online.
Absolutely. Platforms must be compelled to take greater responsibility for the algorithms they deploy and the consequences they enable. This inquiry could set an important precedent for stronger regulation in this space.
I’m glad to see the UK Parliament taking this issue seriously. The ability of false information to incite violence is truly alarming. Comprehensive investigation into the role of social media algorithms is a necessary step to address this growing problem.
Agreed. Platforms must be held accountable for the harms caused by the systems they’ve built. Rigorous oversight and reform are essential to mitigate the risks of algorithmic amplification of misinformation.
This inquiry is long overdue. The ability of social media algorithms to amplify and spread misinformation is a serious threat to public safety and social cohesion. I hope the committee’s findings lead to concrete policy changes to address this issue.