In a digital landscape increasingly flooded with manipulated content, Technology Secretary Liz Kendall has expressed deep concern about the spread of misinformation related to the Iran conflict, acknowledging that the government must explore additional measures to combat false information during crisis situations.

“I’m deeply concerned about misinformation and disinformation being spread online. It’s something that I know MPs on all sides of the House are concerned about too,” Kendall told The Mirror. “We need to look very closely at what more we can do, particularly during crisis moments, to make sure that people are getting the correct information and not seeing that misinformation spread online.”

Sources familiar with the matter indicate that cross-governmental efforts are already underway to address the issue as social media platforms struggle to contain a wave of synthetic and manipulated content related to the conflict.

Digital media expert Timothy Graham from Queensland University of Technology described the scale of Iran war-related misinformation as “truly alarming,” pointing to a wide spectrum of deceptive content circulating online.

“What we’re seeing is really a full spectrum of synthetic and manipulated content, and that breadth is itself part of the story,” Graham explained. “At one end you have AI-generated video – fabricated missile strikes, fake aerial footage of explosions, simulated drone attack sequences – that look increasingly indistinguishable from real conflict footage.”

He added that even low-tech manipulation methods are proving effective, including “repurposed footage from other conflicts presented as current, fabricated screenshots of official statements, and synthetic satellite imagery used to make false territorial claims.”

Graham specifically criticized Elon Musk’s X (formerly Twitter) platform, arguing that its architecture inherently rewards emotionally provocative, shareable content regardless of accuracy. “Posts depicting fake missile strikes that reach eight million views in hours are not an anomaly or a glitch in the system. They are the system working precisely as designed,” he noted.

Technical limitations exacerbate the problem. X’s Community Notes system, designed to flag misleading content, typically takes 15 to 24 hours to attach a note, long after viral misinformation has peaked; Graham says that peak usually comes within four hours. The platform’s revenue-sharing program also creates a financial incentive for high-engagement content regardless of its veracity.

X has recently announced it will ban users from monetizing content if they repeatedly post AI-generated war videos without proper labeling, but experts question whether this measure goes far enough.

Mark Frankel, head of public affairs at fact-checking organization Full Fact, attributed the unprecedented scale of distortion partly to advances in artificial intelligence technology since the Ukraine conflict began.

“As we have entered into a space where we’re effectively living in the age of AI, we should expect to see a massive uptick in synthetic content online,” Frankel said. “The likelihood is you’re going to see a lot more false and manipulated stuff before you get to the reliable stuff. So I think that’s genuinely a problem.”

Frankel noted that while some users spread false information for profit, coordinated disinformation campaigns may also originate from foreign actors seeking political advantage. “It might be you’ve got a bot farm with a particular bent to provoke a sense that one side is on top or the other side is on top,” he explained. “So it could, for example, be inspired by Russians or Iranians who are looking for an advantage in the fight.”

Labour MP Chi Onwurah, who chairs the Commons Science, Innovation and Technology Committee, called for stronger regulation, arguing that social media companies cannot be trusted to “mark their own homework.”

“Their internal data is secret and they self-select the data they provide to the government,” Onwurah said. “Without independent access to proper data, we can’t know how much misleading content is circulating, how much stems from foreign interference, and whether moderation systems are working as claimed.”

Onwurah urged the government to introduce legislation regulating AI platforms and create requirements for risk assessments and reporting on content that, while legal, may still cause harm. “These steps are crucial to create a safer online world for everyone,” she concluded.


