Social Media Giants Face Sharp Criticism from UK MPs Over Online Harms

Representatives from Meta, X, TikTok, and Google faced intense scrutiny from British lawmakers on Tuesday, as the Science, Innovation and Technology Committee accused the tech giants of failing to address mounting online threats.

During the confrontational session on social media, misinformation and harmful algorithms, committee chair Chi Onwurah delivered a blunt assessment: “The basic fact is, all the work that you tell us you are doing on online harms and to make your platforms safe in this country is not working.”

The tech executives were questioned about their approaches to combating misinformation, AI-generated fake content, child safety concerns, transparency issues, and political bias. In each area, committee members found the platforms’ responses inadequate.

MP Martin Wrigley pointed to the worsening problem of misinformation, stating, “You are continuing to fan the embers of misinformation and disinformation, and sometimes you’ll send the fire brigade when that fire gets too big.” When asked about potential solutions, the tech representatives – despite some companies recently cutting fact-checking and safety teams – suggested artificial intelligence would play a key role.

Child safety emerged as a particularly contentious topic. Emily Darlington MP cited alarming statistics showing a 14% increase in AI-generated child sexual abuse images and a 260-fold rise in AI-generated abuse videos in 2023. She directly accused the platforms of failing to protect children.

Wifredo Fernandez, X’s director of global government affairs, claimed the platform has “zero tolerance” for such material and asserted that Grok AI had stopped creating nudified images. Rebecca Stimson from Meta stated their AI is trained not to produce nude images, while TikTok’s Alistair Law highlighted improving technologies to identify and block harmful content before publication.

However, MP Freddie van Mierlo immediately challenged these assurances, noting he had found instructional videos on TikTok that very day explaining how to create non-consensual intimate images of women.

The committee also questioned why platforms seem eager to attract young users while claiming they generate minimal revenue from them. After uncomfortable pauses, Meta’s Stimson eventually acknowledged, “Clearly, it’s in our interest to attract users to our service.”

Political influence and bias were equally contentious. Questioned about platform owners’ political influence, X’s representative studiously avoided naming Elon Musk – who has publicly endorsed far-right parties in the UK and Germany – and claimed X is “politically agnostic”, to audible skepticism from the committee.

Conservative MP George Freeman highlighted a persistent problem with harmful content removal, noting that a deepfake video falsely showing him joining Reform UK remained widely available despite removal requests. Google’s representative Zoe Darmé explained their content removal processes, to which Freeman bluntly responded, “It’s not working because it’s still on YouTube.”

The hearing also exposed issues with account management. MP Samantha Niblett described a constituent whose small business Facebook page with 32,000 followers was suddenly removed without explanation. Despite Niblett’s personal intervention through Meta contacts, the process remained frustratingly slow and opaque.

MP Wrigley suggested that social media platforms should be regulated similarly to traditional media, arguing that “algorithms and recommended content and your publication of content is curation, which, in my view, turns you into publishers.”

The parliamentary session came amid significant regulatory developments. The UK government is currently considering a potential ban on social media for under-16s, X faces investigation over its AI’s generation of non-consensual sexualized images, and Meta was recently fined $375 million by a US court for misleading users about the safety of its platforms for children.

As the session concluded, Onwurah delivered a final rebuke: “The committee feels quite strongly that there is a level of complacency in your responses to our questions and in the evidence that you are giving.”


11 Comments

  1. The UK parliamentary hearing lays bare the growing disconnect between tech giants and the lawmakers tasked with protecting citizens. Striking the right balance between innovation and responsibility will be critical going forward.

  2. William White

    It’s disappointing but not surprising to hear that the tech giants’ efforts fall short according to the UK lawmakers. Tackling misinformation, abuse, and algorithmic bias requires a fundamental shift in platform design and priorities.

  3. Noah Williams

    This showdown underscores the challenges of regulating fast-moving tech industries. Bridging the gap between lawmakers’ concerns and the industry’s responses will require nuanced policymaking and ongoing dialogue.

  4. While the tech companies may dispute the details, the core message from the MPs seems clear – more must be done to make social media platforms safer and more accountable. The public deserves better.

  5. The UK MPs seem justified in their criticisms. Social media companies have had ample time to get a handle on online harms, yet the problems persist. Meaningful reform is clearly overdue.

  6. The tech execs’ responses seem inadequate based on the MPs’ criticism. Stronger accountability and transparency around content moderation and platform design choices are clearly needed to protect users.

    • William White

      I agree. These companies can’t keep passing the buck – they have a duty of care to their users and the public at large.

  7. It’s troubling to hear that the tech giants are still struggling to effectively combat serious online harms. Improving platform safety should be a top priority, not an afterthought.

    • Patricia Jackson

      Absolutely. The scale of these issues demands a much more proactive and comprehensive response from the industry.

  8. This clash between lawmakers and tech leaders highlights the growing need for tougher regulation and oversight in the digital realm. More must be done to rein in the outsized influence of these platforms.

  9. Isabella Lopez

    This is a concerning development. Big tech companies need to take more responsibility for the harms enabled on their platforms. Proactive measures to address misinformation, online abuse, and algorithmic biases are essential.

