Social media platforms are facing increased scrutiny over the spread of misinformation as the Israel-Hamas conflict unfolds, with experts warning that unchecked false information threatens the platforms’ role as spaces for authentic grassroots reporting during crises.

Just over a decade ago, social media was widely celebrated for its role in democratic uprisings across the Middle East, North Africa, and beyond. These platforms offered unprecedented opportunities for marginalized voices to be heard globally, often for the first time. However, in the aftermath of Hamas’ attack on southern Israel and Israel’s subsequent military campaign in Gaza, a flood of misinformation has overwhelmed social media channels.

X (formerly Twitter) has become particularly problematic since owner Elon Musk dismantled much of its content moderation infrastructure. According to recent reports, the platform’s Community Notes feature—designed to fact-check misleading content—now operates so slowly that corrections to known disinformation are delayed for days, allowing false narratives to spread unchecked.

The problem extends beyond X. Both TikTok and Meta’s platforms have implemented what critics describe as inadequate monitoring strategies as the conflict intensifies.

“The entwining of authentic details with manipulated information can undermine legitimately newsworthy events,” noted researchers at Bellingcat, an investigative organization founded during the Syrian civil war. In one case, they found that while a widely shared video was inauthentic, the text accompanying it contained accurate and significant information.

The consequences of unchecked misinformation have already spilled into mainstream discourse. Allegations that Hamas “decapitated babies and toddlers” spread rapidly across social media before verification, appearing on the front pages of five major UK newspapers. President Biden initially claimed to have seen “confirmed pictures of terrorists beheading children,” a statement the White House later walked back. Israeli officials have since stated they cannot confirm such reports.

Similarly, allegations of systematic rape during the Hamas attack circulated widely across platforms and were repeated by political figures, celebrities, and media outlets, despite the Israel Defense Forces stating it “does not yet have any evidence of rape having occurred during Saturday’s attack or its aftermath.”

Hamas, meanwhile, appears poised to exploit gaps in platform moderation. A spokesperson for the group told The New York Times that it intends to keep using social media channels and has “vowed to continue broadcasting executions,” though he did not specify which platforms the group would use.

Digital rights experts have outlined steps social media companies should take to address these concerns. These include implementing robust trust and safety mechanisms proportionate to their user base, ensuring consistent and transparent content moderation practices across all markets and languages, employing independent fact-checking, encouraging users to verify information before sharing, and subjecting moderation systems to independent audits.

For platforms operating in Europe, there’s additional pressure from the EU’s Digital Services Act. European Commissioner Thierry Breton has already issued warnings to TikTok, Meta, and X, urging them to prevent the spread of disinformation and illegal content related to the conflict.

However, digital rights advocates caution against politicizing content moderation or mandating the swift removal of content that isn’t necessarily illegal, warning that such approaches could have chilling effects on legitimate speech.

“We are all vulnerable to believing and passing on misinformation,” noted one digital rights expert. “Ascertaining accuracy during conflicts is challenging when communication channels are compromised and all sides have interests in circulating propaganda. But these challenges don’t excuse platforms from implementing effective moderation systems.”

Without adequate safeguards and robust trust and safety mechanisms, experts warn this will not be the last time unverified allegations have serious real-world implications—both online and offline.




© 2026 Disinformation Commission LLC. All rights reserved.