In the rapidly evolving landscape of modern warfare, the battle for hearts and minds now unfolds in real time across digital platforms, rivaling physical confrontations in importance. Recent geopolitical tensions involving the United States, Israel, and Iran, along with the ongoing Russia-Ukraine conflict, have highlighted how information has become a critical weapon in contemporary conflicts.
Social media platforms have emerged as the primary battlegrounds where competing narratives clash for dominance. Within minutes of a military engagement, news feeds, messaging apps, and online forums overflow with dramatic footage, confident analysis, and sensational claims—much of it unreliable or entirely fabricated through artificial intelligence.
While propaganda has always accompanied warfare, what distinguishes today’s information battles is the unprecedented speed and reach of digital communication. Narratives can spread globally in minutes, making corrections nearly impossible once a story gains momentum online.
Human psychology plays a central role in this phenomenon. Confirmation bias—our natural tendency to accept information that aligns with existing beliefs—makes people susceptible to accepting unverified claims that support their worldview. During conflicts, this tendency intensifies as individuals readily circulate stories portraying their preferred side positively while demonizing opponents, regardless of factual accuracy.
Emotional content serves as a powerful accelerant in this ecosystem. Posts triggering anger, fear, outrage, or moral indignation spread significantly faster than carefully verified reporting. Social media algorithms, designed to prioritize engagement, amplify emotionally charged content, ensuring that the most provocative messages—not necessarily the most accurate—receive maximum visibility.
The “illusory truth effect” compounds these challenges. When individuals encounter the same claim repeatedly across various sources, it begins to feel credible simply through familiarity. Coordinated networks of AI-powered or human-operated accounts can exploit this psychological vulnerability by simultaneously repeating identical narratives, creating an artificial consensus.
Group identity further complicates information assessment during conflicts. People often evaluate news not solely on evidence but through the lens of social belonging. Rejecting a widely accepted narrative within one’s political or ideological community can feel like betraying the group itself, leading many to defend unverified claims out of loyalty rather than conviction.
The digital environment has also democratized expertise, sometimes problematically. During conflicts, social media quickly fills with self-appointed analysts who confidently discuss military strategy and operations. A confident tone, some specialized terminology, and a few maps or satellite images can create an illusion of authority, though few of these commentators possess genuine insight into unfolding events.
Artificial intelligence has dramatically accelerated these trends. Modern AI tools enable the rapid generation of convincing text, images, audio, and video content with minimal technical expertise. This technological shift has profound implications for propaganda, allowing disinformation to be produced at unprecedented scale and making deception increasingly convincing.
A recent example involved fake photos purportedly showing captured U.S. Delta Force soldiers in Iran. Though trained observers could identify the AI-generated nature of these images, they nevertheless spread widely across social media platforms. Similarly, footage from realistic video games like Arma 3 has been misrepresented as authentic battlefield recordings, including a viral video supposedly showing a U.S. aircraft carrier in flames.
For average viewers, distinguishing between authentic material and manipulated or synthetic content becomes increasingly challenging. As AI capabilities advance, even genuine evidence can be dismissed as potential fabrication, blurring the distinction between factual reporting and strategic narratives.
It’s worth noting that wartime propaganda often targets not just adversaries but also domestic audiences and neutral observers. Creating confusion, doubt, and polarization can serve strategic objectives—when contradictory narratives proliferate, public uncertainty often leads to inaction rather than engagement.
South Africa has not been immune to these dynamics. False claims and misleading content regularly circulate through popular messaging platforms and social networks. Even after debunking, such misinformation continues spreading as people share it without verification, driven as much by the desire to feel informed as by the pull of social belonging.
The fundamental vulnerability in today’s information environment ultimately lies not in technology but in human behavior. Social media didn’t invent propaganda—it simply created its most efficient delivery system yet. Until individuals adopt more skeptical information consumption habits and verify content before sharing, the information battlefield will remain clouded with rumors, exaggerations, and deliberate deception.
In modern conflicts, the struggle for truth has become as strategically vital as the struggle for territory.
7 Comments
The role of social media platforms in the spread of propaganda is troubling. They need to take more responsibility for the content on their sites and develop better tools to detect and limit the reach of false or misleading information.
Absolutely. Platforms have a duty of care to their users and the broader public. Stronger content moderation and algorithmic transparency are critical first steps.
As someone who follows the mining and metals sector, I worry about how this information warfare could impact commodity markets and investment decisions. Accurate, unbiased reporting will be essential to navigating these complex geopolitical dynamics.
Fascinating how modern warfare has evolved to include an information battlefield online. The speed and reach of digital propaganda is truly staggering. I’m curious to see how governments and platforms adapt to mitigate the spread of misinformation and fabricated narratives.
This is a concerning trend. The ability of AI to generate convincing yet false content in real-time is a real challenge for maintaining truth and accountability. We need stronger safeguards and transparency around information sources to combat these manipulative tactics.
Agreed. Fact-checking and media literacy will be crucial going forward. Individuals also need to be more discerning consumers of online content.
This is a challenging issue without easy solutions. The collision of advanced AI, social media, and the psychology of confirmation bias has created a perfect storm for the proliferation of online propaganda. Policymakers and tech companies have their work cut out for them.