Artificial Intelligence Emerges as Powerful Weapon in Gaza Conflict

As the Gaza Strip faces unprecedented devastation from ongoing Israeli military operations, a parallel battle rages in the digital realm. Since October 7, when Hamas launched its “Al-Aqsa Flood” operation against Israeli settlements near Gaza, misinformation has flooded social media platforms and global news cycles, creating what experts now describe as a sophisticated information war.

The conflict has become a testing ground for weaponized artificial intelligence, with the Israeli government and its supporters deploying advanced technologies to shape international narratives about the conflict. This digital battlefront represents a significant evolution in modern warfare, where perception management has become as crucial as traditional military strategies.

“This marks the first time we’ve seen such coordinated deployment of AI for propaganda purposes during an active conflict,” said a spokesperson from a prominent digital rights organization who requested anonymity due to security concerns. “The scale is unprecedented.”

In late May, Meta announced the removal of hundreds of fake accounts linked to STOIC, a Tel Aviv-based company. According to Meta’s investigation, these AI-driven accounts systematically amplified Israeli propaganda and disseminated false claims, particularly targeting Arabic-speaking audiences. Just a day later, OpenAI banned another group of accounts operated by the same company that had been impersonating Jewish students and African American citizens to create an illusion of diverse support for Israeli positions.

The influence of misinformation became apparent early in the conflict, when U.S. President Joe Biden repeated unverified claims that Hamas fighters had beheaded 40 Israeli infants, allegations initially made by Israeli Prime Minister Benjamin Netanyahu. Despite the absence of verification, the narrative significantly shaped international public opinion.

Visual manipulation has proven particularly effective in this information war. The Palestinian Observatory for Fact-Checking and Media Literacy (Tahaqaq) has documented numerous instances of AI-generated imagery designed to elicit emotional responses. In one notable case, Israeli social media accounts circulated an image supposedly showing an Israeli soldier rescuing twin infants from Gaza. Technical analysis later revealed the image was AI-generated, with telltale signs including the soldier appearing to have three hands.

Another striking example involved a fabricated video that appeared to show American model Bella Hadid condemning the October 7 attack and expressing support for Israel. Fact-checkers discovered the footage had been taken from a 2016 awareness event, with AI technology used to clone Hadid’s voice and manipulate the content.

“Visual media plays an exceptionally powerful communicative role that often surpasses the impact of written words,” explained a digital forensics expert at Tahaqaq. “The increasing sophistication of these manipulations poses serious challenges to fact-checkers who lack adequate tools to consistently distinguish between genuine and manipulated content.”

The Palestinian side has not been immune to this information ecosystem. Tahaqaq has identified what it calls “fake sympathy”: cases in which Palestinian supporters unknowingly share AI-generated content that appears to support their narrative. One example showed a purported camp of displaced Israeli settlers, which technical analysis confirmed was AI-generated.

Israeli media has exploited these instances to cast doubt on Palestinian claims about suffering in Gaza through campaigns labeled “Pallywood” and “Gazawood.” However, experts note a significant difference in scale and organization: Palestinian misinformation tends to be sporadic and uncoordinated, while Israeli efforts appear more systematic and institutionally backed.

The information warfare surrounding Gaza has reinforced global concerns about AI governance. While these technologies have documented genuine atrocities and fueled international solidarity movements, they have simultaneously enabled sophisticated manipulation campaigns that distort public understanding of the conflict.

“We’re witnessing a profound ethical challenge in real-time,” said a prominent AI ethics researcher. “The same technologies that can illuminate truth can be weaponized to obscure it. Without robust safeguards and verification methods, the information landscape becomes increasingly treacherous.”

As the conflict continues, the consequences of this digital battlefield remain uncertain. What is clear, however, is that technological asymmetry has given the Israeli side significant advantages in shaping international perceptions, highlighting the urgent need for more effective fact-checking methodologies and media literacy in an age where seeing can no longer be equated with believing.


13 Comments

  1. Interesting to see how AI is being weaponized for propaganda purposes during conflicts. It’s concerning to think about the scale and coordination behind these efforts to shape narratives. We’ll need increasingly sophisticated fact-checking and verification methods to combat the spread of misinformation.

    • Noah P. Hernandez

      You’re right, the use of AI for propaganda is a worrying trend. Fact-checkers and digital rights groups will need to stay vigilant and develop new tools to identify and counter these manipulative tactics.

  2. The use of AI for propaganda purposes during the Gaza conflict is a disturbing development. It underscores the importance of media literacy and critical thinking in an era of increasingly sophisticated misinformation campaigns.

  3. Oliver Jackson

    This is a concerning example of how AI can be weaponized for propaganda and information warfare. The scale and coordination behind these efforts to shape narratives is truly alarming. Fact-checkers and digital rights groups will face an uphill battle in countering these tactics.

    • You’re absolutely right. Maintaining truth and accuracy in reporting will be an ongoing challenge as these AI-powered propaganda tactics continue to evolve. Fact-checkers and media organizations will need to stay vigilant and develop new tools to identify and debunk manipulated information.

  4. Isabella O. Jackson

    This is a troubling example of how AI can be used as a powerful tool for propaganda and misinformation, especially in the context of an ongoing conflict. Maintaining journalistic integrity and public trust will be critical in the face of these emerging challenges.

    • I agree, the article highlights the need for robust fact-checking and verification processes to combat the spread of manipulated information. Fact-checkers will need to stay one step ahead of the AI-powered propaganda efforts.

  5. Liam E. Garcia

    The article highlights the growing role of AI in modern warfare, where perception management is as important as traditional military strategies. It’s a complex issue with significant implications for media and global information flows.

    • Absolutely, the blending of digital and physical warfare tactics is a concerning development. Maintaining truth and accuracy in reporting will be an ongoing challenge as these technologies continue to evolve.

  6. Olivia J. Taylor

    Weaponized AI for propaganda purposes during an active conflict is a concerning revelation. The scale and coordination behind these efforts to shape narratives is alarming. Fact-checkers and digital rights groups will have their work cut out for them.

  7. Robert Thompson

    The article highlights the growing threat of AI-powered propaganda and misinformation, especially in the context of active conflicts. It’s a complex issue that will require a multi-pronged approach to combat, involving fact-checkers, digital rights groups, and media organizations.

  8. Weaponized AI for propaganda purposes is a concerning development, as the article outlines. The scale and coordination behind these efforts to shape narratives during the Gaza conflict is alarming. Maintaining journalistic integrity and public trust will be critical in the face of these emerging challenges.

    • I agree, the use of AI for propaganda is a worrying trend that will require a strong response from fact-checkers and digital rights advocates. Developing new tools and strategies to identify and counter these manipulative tactics will be crucial going forward.

A professional organisation dedicated to combating disinformation through cutting-edge research, advanced monitoring tools, and coordinated response strategies.

Company

Disinformation Commission LLC
30 N Gould ST STE R
Sheridan, WY 82801
USA

© 2026 Disinformation Commission LLC. All rights reserved.