The Unsettling Rise of “Slopaganda” in Modern Warfare and Politics
In June, a week after the first US-Israeli strikes on Iran, the White House released a video that blended actual American military operations with clips from popular movies, television series, video games, and anime. The unusual messaging strategy marked a new chapter in digital propaganda.
Not to be outdone, Iran and its supporters flooded social media with outdated war footage falsely presented as current conflict imagery, alongside AI-generated content depicting fictional attacks on Tel Aviv and US military installations in the Persian Gulf.
More recently, viral videos reportedly created by Iranian teams have depicted Donald Trump, Jeffrey Epstein, Satan, Benjamin Netanyahu, Pete Hegseth, and Ayatollah Khamenei as Lego figurines in bizarre narrative scenarios. These examples represent the emergence of what experts now call “slopaganda.”
The term “slopaganda,” coined in late 2023 in an academic paper published in Filosofiska Notiser, refers to AI-generated content that serves propagandistic purposes. Traditional propaganda aims to manipulate beliefs, emotions, attention, and memory to achieve political goals. When powered by generative artificial intelligence, it transforms into something more pervasive and potentially more dangerous.
The situation has deteriorated more rapidly than researchers anticipated. In October 2025, US President Donald Trump posted an AI-generated video depicting himself piloting a fighter jet while wearing a crown and dumping feces on American protesters. He later shared another AI-generated video portraying his future presidential library as an enormous, gaudy skyscraper complete with a golden elevator.
Such content represents just the tip of the iceberg. Slopaganda isn’t limited to videos – it spans images, text, and any other media AI systems can generate.
Experts have identified several concerning aspects of this phenomenon. First, through repeated exposure across legacy and social media, slopaganda penetrates mental defenses, particularly when it captures attention through emotional manipulation and targets distracted audiences scrolling through social feeds.
Second, it effectively pollutes the “epistemic environment” – our collective knowledge sphere – with falsehoods and half-truths. While philosophers have argued that tools like ChatGPT can function as “bullshit machines” producing content indifferent to truth, slopaganda represents a specialized form of AI misinformation with distinct characteristics.
Unlike conventional misinformation, slopaganda often doesn’t aim to be believed literally. No reasonable person thinks Trump can pilot an F-16 fighter jet or that plastic Lego figurines are conspiring with Satan. Instead, these materials create emotional associations – connecting Trump with Satan, or the United States with evil – that bypass rational thought processes.
However, some slopaganda does mislead, either by design or through what scholars call “context collapse,” when jokes or trolling escape their intended context and are misinterpreted as serious content. During conflicts and emergencies, when authoritative information is scarce but demand for updates is high, misleading slopaganda can spread rapidly.
Psychological research shows that once misleading information enters someone’s mind, it becomes difficult to dislodge. Given slopaganda’s massive reach, even a small misleading effect multiplied across large populations can significantly impact group beliefs and decisions, potentially influencing election outcomes, protest movements, or public sentiment regarding unpopular conflicts.
Perhaps most troublingly, as slopaganda proliferates, it undermines trust in legitimate information. While people may become better at identifying AI-generated content, they will also increasingly misidentify authentic content as artificial. This erosion of trust in legitimate sources creates a nihilistic information environment where people increasingly believe whatever they find comforting or infuriating.
For societies already struggling with polarization amid economic, political, military, and environmental crises, the breakdown of shared truth sources will only exacerbate tensions.
Researchers recommend a three-pronged approach to address what they call the “slopaganda shitstorm.” First, individuals must develop digital literacy by recognizing AI-generated content markers, verifying sources, and blocking known slopaganda distributors rather than evaluating each piece of content in isolation.
Second, industry and regulators should implement technological solutions like AI content watermarking and consider removing certain content from platforms where people consume important information.
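To make the watermarking idea concrete, here is a minimal sketch of the detection side of one well-studied scheme, the statistical “green list” watermark (Kirchenbauer et al., 2023). The generator secretly biases each token choice toward a keyed pseudo-random half of the vocabulary; a detector holding the key counts how often the text lands in that half and computes a z-score. Everything below – the demo key, the function names, the rough threshold – is illustrative, not any platform’s actual API.

```python
import hashlib

def is_green(key: str, prev_token: str, token: str) -> bool:
    """Keyed pseudo-random partition of (context, token) pairs.
    Roughly half of all pairs hash to 'green'; a watermarking
    generator secretly biases its sampling toward green tokens."""
    digest = hashlib.sha256(f"{key}|{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def detect_watermark(tokens: list[str], key: str = "demo-key") -> float:
    """Z-score against the null hypothesis 'unwatermarked text'.
    Under the null, green hits are Binomial(n, 0.5); scores above
    roughly 4 are strong evidence the text was generated with this key."""
    n = len(tokens) - 1
    if n < 1:
        return 0.0
    greens = sum(is_green(key, prev, curr)
                 for prev, curr in zip(tokens, tokens[1:]))
    return (greens - 0.5 * n) / (0.25 * n) ** 0.5

# Ordinary human text should score near zero.
words = "the quick brown fox jumps over the lazy dog".split()
print(round(detect_watermark(words), 2))
```

A detector like this only works if the generator actually embedded the watermark and the checker holds the key, which is why the coordinated, industry-wide deployment the researchers call for matters more than any single tool.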
Finally, large technology companies including OpenAI, Google, and X (formerly Twitter) must be held accountable through taxation and other measures to fund regulatory efforts and digital literacy education.
While slopaganda appears to be an enduring feature of our digital landscape, experts believe that with foresight and coordinated action, society can adapt to and potentially control its most harmful manifestations.
12 Comments
The blending of real and fictional elements in propaganda is quite unsettling. This shows how quickly the information landscape can become muddied and manipulated.
Agreed, it’s a troubling development that will require concerted efforts from governments, tech companies, and the public to address effectively.
The use of AI-generated content for propaganda purposes is deeply concerning. This underscores the importance of verifying sources and fact-checking information, especially online.
Agreed. Combating the spread of ‘slopaganda’ will require a multi-faceted approach involving technological, educational, and policy-based solutions.
The use of AI to spread disinformation is quite alarming. We need to be vigilant in verifying the authenticity of online content, especially regarding sensitive political and military topics.
Agreed. Strong media literacy and critical thinking skills are essential to navigate this landscape of manipulated information.
Interesting to see how propaganda efforts have evolved to incorporate AI-generated content. This ‘slopaganda’ certainly blurs the lines between truth and fiction in concerning ways.
I’m curious to see how this tactic evolves and whether audiences are able to discern real information from fabricated content.
This ‘slopaganda’ phenomenon highlights the challenges of maintaining truth and transparency in the digital age. Fact-checking and source verification will be crucial moving forward.
I wonder what policy and technological solutions could help mitigate the spread of this kind of AI-generated propaganda content.
This ‘slopaganda’ tactic is a stark reminder of the need for media literacy and critical thinking skills. Audiences must be able to discern truth from fiction, especially on sensitive topics.
Absolutely. Strengthening these capabilities will be crucial in the face of increasingly sophisticated disinformation campaigns.