AI-Generated War Misinformation Floods Social Media for Profit

An unprecedented wave of AI-generated misinformation related to the US-Israel conflict with Iran is proliferating across social media platforms, with content creators exploiting advanced generative AI tools to earn revenue, according to experts who spoke with BBC Verify.

BBC Verify’s investigation uncovered numerous instances of AI-created videos and manipulated satellite images being widely shared online to support false or misleading claims about the ongoing conflict. These fabricated materials have collectively garnered hundreds of millions of views across various social media platforms.

“The scale is deeply concerning, and the current war has brought the issue into sharp focus,” said Timothy Graham, a digital media specialist at Queensland University of Technology. “What previously required professional video production teams can now be produced within minutes using AI tools. The barrier to creating convincing synthetic footage of conflict has effectively disappeared.”

The US and Israel began military operations against Iran on February 28, prompting retaliatory drone and missile attacks from Iran targeting Israel, Gulf countries, and US military installations across the region. As tensions escalated rapidly over the past week, millions turned to social media for updates on the unfolding situation.

In response to the surge in misleading content, social media platform X (formerly Twitter) announced a temporary suspension of monetization privileges for creators sharing unlabeled AI-generated videos of armed conflicts. Under X’s monetization program, eligible users receive payments when their posts generate significant engagement.

“It’s a significant indication that they understand this is a major issue,” noted Mahsa Alimardani, a researcher on Iran at the Oxford Internet Institute. Meanwhile, TikTok and Meta (parent company of Facebook and Instagram) did not respond to BBC Verify’s inquiries about implementing similar measures.

One particularly widespread example of misinformation identified in the investigation appears to show missiles striking Tel Aviv with audible explosions in the background. This AI-generated clip has appeared in over 300 separate posts and been shared tens of thousands of times across multiple platforms. Even more concerning, X’s AI chatbot Grok incorrectly authenticated the fabricated footage in several instances when users asked for verification.

Another fabricated video, which accumulated tens of millions of views, depicts Dubai’s Burj Khalifa skyscraper engulfed in flames while crowds rush toward the building. This false content spread during a period of heightened anxiety following reports of actual drone and missile strikes targeting the city.

“Videos like these undermine trust in verified information available online and make it far more difficult to document genuine evidence,” Alimardani explained.

The investigation also revealed a new dimension in the conflict: the circulation of AI-generated satellite imagery. While BBC Verify confirmed authentic videos showing Iranian strikes on the US Navy’s Fifth Fleet headquarters in Bahrain, a manipulated satellite image later shared by state-linked newspaper The Tehran Times falsely depicted severe destruction at the facility.

Analysis showed this fabricated image was likely derived from a genuine satellite photo taken in February 2023, with Google’s SynthID watermark detection system indicating it was modified using a Google AI tool. Tellingly, vehicles visible in both images appear in identical positions despite the supposed year-long gap between captures.

The tools enabling this misinformation explosion include Google’s video-generation platform Veo, OpenAI’s Sora model, the Chinese application Seedance, and X’s integrated Grok system.

“The number of tools now available to create highly realistic AI manipulations across different formats is unprecedented,” said Henry Ajder, a specialist in generative AI. “We have never seen these technologies so accessible, so simple to use and so inexpensive.”

Victoire Rio, executive director of technology policy non-profit What To Fix, noted that the production and distribution of AI-generated content can now be largely automated, contributing to its rapid proliferation online.

Most tellingly, X’s head of product recently claimed that approximately 99% of accounts sharing AI-generated war footage were attempting to “game monetization” by posting engagement-driven content to earn payments through the platform’s Creator Revenue Sharing program.

While X does not disclose participant numbers or payment structures, Graham estimates creators may earn between $8 and $12 per million verified user impressions after qualifying for the program, which requires generating at least five million organic impressions within three months while maintaining a premium subscription.

“Once creators qualify, viral AI-generated content effectively becomes a money-making machine,” Graham explained. “It has created the ultimate misinformation enterprise.”

Experts conclude that despite social media companies’ efforts to improve moderation systems, the fundamental tension between engagement-driven monetization and information accuracy presents an intractable problem.

“The deeper problem is that monetisation driven by engagement and the distribution of accurate information are fundamentally at odds,” Graham said. “No platform has fully solved that conflict, and perhaps none ever will.”


