The AI-Powered Erosion of Truth in Modern Conflict Reporting

In a world increasingly dominated by artificial intelligence and ongoing armed conflicts, the line between reality and fabrication has become dangerously blurred. What theorist Jean Baudrillard once provocatively claimed about the Gulf War—that it “did not take place” in the sense that public perception was shaped by mediated television coverage rather than direct experience—has evolved into a far more complex crisis of information integrity.

Three decades later, Baudrillard’s observations appear prophetic. Where television once passively mediated events through biased interpretation, today’s landscape features generative AI and deepfakes actively manufacturing false events and fabricated evidence on an unprecedented scale. Most concerning is what researchers Shirin Anlen and Mahsa Alimardani have termed “forensic cosplay”—the use of technical visualizations and heatmaps to lend scientific authority to misinformation.

A case in point was the viral “ERFI” thread claiming a New York Times photograph from Tehran was AI-generated. Despite being based on analysis of an Instagram screenshot rather than the original image, the post garnered over 600,000 views. By the time fact-checkers could respond, the damage was done—a pattern that repeats with alarming frequency across global conflicts.

The crisis extends beyond the mere existence of propaganda, which has always been present during wartime. What’s fundamentally different now is the infrastructure through which this content circulates. Social media platforms, designed to maximize engagement rather than accuracy, serve as unwitting accomplices. Their algorithms, autoplay functions, and engagement-optimized feeds are structurally incapable of distinguishing verified reporting from deepfakes engineered to trigger emotional responses.

In fact, these algorithms often favor the most provocative content—the very deepfakes designed to elicit outrage and sharing. Platforms profit regardless of content authenticity, creating a perverse incentive structure that undermines truth in reporting.

An examination of over 30 articles published by Tech Policy Press between 2021 and 2026 reveals a consistent pattern across conflict zones from Gaza to Ukraine to Iran: information environments themselves have become battlegrounds, with social media platforms serving as either willing participants or negligent enablers.

The democratization of disinformation technology poses additional challenges. As Nusrat Farooq noted in July 2024, generative AI has eliminated barriers that once limited influence operations—technical sophistication and language skills are no longer prerequisites for creating convincing fake content. Meanwhile, research from Stanford Internet Observatory and Georgetown CSET confirms there is no technological panacea against AI-generated misinformation.

Despite these challenges, platforms have systematically dismantled their trust and safety infrastructure. Teams responsible for content moderation have been gutted, state media labels removed, and the very concept of content moderation has been politically reframed as censorship by those who benefit from information chaos.

The consequences manifest in what Prithvi Iyer, drawing from WITNESS’s September 2024 report, describes as dual dynamics: “plausible deniability,” where genuine evidence can be dismissed as AI-generated, and “plausible believability,” where synthetic content confirming existing biases is accepted without scrutiny. Together, these dynamics undermine the epistemic foundations necessary for democratic discourse.

India faces particular risks in this environment. The Bulletin of the Atomic Scientists has explicitly warned that deepfakes circulating during India-Pakistan tensions could trigger “catastrophic misperception and miscalculation” between nuclear-armed states. When fabricated videos of military officials reach hundreds of thousands of viewers before being debunked, the window for dangerous escalation narrows dramatically.

Recent events have shown how armed conflict provides cover for widespread censorship. During Operation Sindoor, over 1,400 URLs were blocked under Section 69A of the Information Technology Act, while preventive internet shutdowns were imposed across Jammu and Kashmir. In the past three weeks, even satirical posts about the prime minister have been removed from social media platforms.

Paradoxically, the state’s instinct to combat misinformation through information blackouts often drives citizens toward the very rumor mills and unreliable sources those measures are meant to suppress.

A meaningful response would require multiple approaches: establishing legal liability for platforms that algorithmically amplify synthetic content during conflicts; public investment in verification infrastructure and media literacy programs; and international legal frameworks that recognize information warfare as a matter of humanitarian concern.

What Baudrillard described as simulation replacing reality has evolved into something more insidious—a world where reality and simulation have become technically indistinguishable, mediated by platforms with no commercial incentive to help us tell them apart.

© 2026 Disinformation Commission LLC. All rights reserved.