
Tech giants face parliamentary scrutiny over online misinformation and harmful algorithms

Google, TikTok, Meta, and X will appear before the Science, Innovation and Technology Committee as part of an ongoing inquiry into misinformation and harmful algorithms on digital platforms. The hearings, scheduled for both morning and afternoon sessions, will examine how these companies handle misleading content and the potential role their algorithms play in spreading harmful material online.

In the morning session beginning at 9:45am, Amanda Storey, Managing Director for Trust & Safety at Google EMEA, will field questions from the cross-party committee. MPs are expected to focus on Google’s methods for preventing misleading and harmful content from appearing in search results, particularly as the company continues to integrate generative AI features into its search products.

The scrutiny comes at a pivotal time for Google, which has been rapidly expanding its AI-infused search capabilities amid growing competition from emerging AI platforms. Committee members will likely probe the effectiveness of Google’s content moderation systems and the potential for AI to both combat and inadvertently amplify misinformation.

Questions may also address the often-opaque digital advertising market, where Google remains a dominant player, and how the company is adapting to the requirements of the UK’s recently enacted Online Safety Act – legislation designed to make tech companies more accountable for harmful content on their platforms.

The afternoon session, commencing at 2:30pm, will bring together representatives from three major social media platforms: Chris Yiu, Director of Public Policy for Northern Europe at Meta; Alistair Law, Director of Public Policy and Government Affairs for UK and Ireland at TikTok; and Wifredo Fernández, Senior Director for Government Affairs at X (formerly Twitter).

This panel is expected to face intensive questioning about their algorithmic systems and whether these technologies contribute to the amplification of misleading content. Social media algorithms, which determine what content users see in their feeds, have faced increasing criticism for potentially creating echo chambers and promoting sensationalist material that drives engagement but may contain misinformation.

The timing of these hearings is significant, coming amid growing global concerns about the spread of misinformation on social platforms during election cycles. With the UK general election on the horizon and major elections taking place in the United States and numerous other countries this year, the role of tech platforms in information dissemination has never been more relevant.

MPs are also likely to examine how these platforms are handling the new challenges posed by generative AI tools, which can create convincing but potentially misleading content at scale. The integration of such technologies into social media environments raises novel questions about content authenticity and the responsibilities of platform operators.

The committee’s inquiry occurs against the backdrop of evolving regulatory frameworks worldwide. The UK’s Online Safety Act represents one of the most comprehensive attempts to regulate online platforms, imposing new duties on tech companies to protect users from harmful content, with particular emphasis on protecting children and vulnerable groups.

Prior to the main evidence sessions, the committee will briefly hear from Olusola Idowu, CEO of HexisLab, as part of its showcase of UK innovators. This segment begins at 9:30am and represents the committee’s ongoing effort to highlight British innovation in the technology sector.

The hearings are expected to provide valuable insights into how major technology companies are addressing the complex challenges of content moderation at scale, the role of algorithms in shaping online information ecosystems, and the industry’s response to increasing regulatory pressure.


9 Comments

  1. Elizabeth Rodriguez

    It’s good to see these tech giants facing tough questions from the committee. Transparency and accountability are crucial when it comes to controlling misinformation and protecting users online.

  2. Michael Z. Johnson

    Glad to see X (formerly Twitter) facing scrutiny as well. The spread of misinformation on that platform has been a major problem for years.

  3. TikTok’s content moderation practices have come under a lot of scrutiny lately. I hope the committee can get some clear answers on how the platform plans to tackle misinformation.

    • William O. Brown

      Absolutely. With TikTok’s huge user base, especially among younger audiences, the potential for harm from misinformation is very concerning.

  4. Jennifer Jackson

    It’s important that these tech giants are held accountable for the real-world impacts of their platforms and algorithms. I hope the committee can get to the bottom of these issues.

  5. As someone who uses these platforms regularly, I’m glad to see the government taking a closer look at how they operate. Transparency is key to building public trust.

  6. I’m curious to hear how Google plans to ensure its expanding AI search features don’t exacerbate the spread of harmful content. Robust content moderation will be critical.

    • Isabella M. Taylor

      Agreed. The integration of generative AI into search raises significant challenges that need to be carefully addressed.

  7. Patricia P. Williams

    Curious to see if Meta will have any new strategies to share for combating the spread of misinformation on Facebook and Instagram. Their track record has been concerning so far.



© 2026 Disinformation Commission LLC. All rights reserved.