Misinformation Surges Across Social Media Platforms During Israel-Hamas Conflict
As millions of internet users turned to social media on October 7 to follow Hamas’s attack on Israel, they encountered a digital landscape rife with misinformation and manipulated content, particularly on X (formerly Twitter).
Amid the chaos, fabricated content spread rapidly. A viral video supposedly showing a Hamas fighter shooting down an Israeli helicopter was actually footage from the video game Arma 3. Another widely shared clip claiming to depict Israeli airstrikes in Gaza turned out to be fireworks after a soccer match in Algeria. Numerous accounts, including some posing as news agencies, misrepresented old imagery as current scenes from the conflict zone.
Despite journalists’ efforts to share verified reporting, users had to wade through conspiracy theories and spam to find reliable information. The situation was further complicated by Hamas reportedly exploiting the confusion to distribute graphic images on platforms like X and Telegram, following a pattern seen with other extremist organizations seeking to amplify their message during international crises.
While misinformation during global events isn’t new, the situation at X has deteriorated significantly since Elon Musk’s acquisition in late 2022. Under his leadership, the platform terminated many content moderation employees, dissolved its Trust and Safety Council, and reinstated accounts previously banned for hate speech. Musk also revamped the verification system in ways that make it easier for malicious actors to appear legitimate.
Further changes at X included removing labels from state-affiliated media accounts from countries like Iran, Russia, and China—many of which commented extensively on the Hamas-Israel situation. The platform also withdrew from the European Union’s voluntary Code of Practice on Disinformation, abandoning previous commitments to transparency and media literacy.
Other major platforms face similar challenges. Facebook, Instagram, TikTok, and YouTube officially ban Hamas-related content but struggle with real-time moderation during crises. These platforms typically rely on algorithms to flag harmful content, but such automated systems often fail to understand cultural contexts and nuances, particularly with non-English content.
The problem has worsened as companies like Meta and YouTube reduced their trust and safety teams through layoffs earlier this year. While Meta claimed to be employing Hebrew and Arabic content reviewers during the current conflict, previous reports revealed that 60 percent of Arabic-language content remained improperly moderated as recently as 2020. Whistleblower Frances Haugen previously disclosed that Meta allocated 87 percent of its misinformation resources to English content, despite English speakers comprising only 9 percent of its user base.
The business models of social media platforms compound these issues. Designed to maximize engagement, their algorithms often promote shocking or polarizing content that can go viral within minutes, regardless of accuracy. TikTok’s recommendation system can trap users in information bubbles, while Facebook’s 2018 algorithm changes prioritized content from friends and family over verified news sources, inadvertently reinforcing existing beliefs and polarization.
This surge in misinformation coincides with social media becoming increasingly inhospitable to news organizations. Meta has begun blocking news content in Canada in response to legislation requiring platforms to pay for hosting news articles. Even as Threads, Meta’s text-based alternative to X, gained users following the weekend’s online chaos, executives reiterated they would not actively promote news articles on the platform.
The Hamas attack represents the first major test for new European regulations like the Digital Services Act (DSA), which took effect for large platforms in August. EU Commissioner Thierry Breton has demanded answers from X, Meta, and TikTok regarding their compliance with transparency and safety mandates. Platforms found violating these rules could face fines of up to 6 percent of their global annual revenue.
Meanwhile, the UK recently approved its Online Safety Bill, requiring platforms to proactively remove illegal or harmful content, including terrorist propaganda. However, both European and UK regulations struggle with addressing misinformation that doesn’t clearly violate laws or terms of service.
Even with advanced technology and unlimited resources, social media companies face difficult trade-offs between controlling misinformation and preserving free expression, particularly during fast-developing crises when the line between political speech and harmful content can blur.
As platforms continue to struggle with these challenges, traditional journalism—with its established fact-checking processes and editorial standards—remains a more reliable alternative for those seeking accurate information during global conflicts.