In early March, a disquieting new phenomenon emerged in the digital landscape of international conflicts. Following the first US-Israeli strikes on Iran, the White House released a video that blended actual military footage with clips from movies, TV shows, video games, and anime. This prompted Iran and its allies to counter with their own media assault, flooding social platforms with outdated war footage falsely presented as current conflict imagery, alongside AI-generated content depicting fictional attacks on Tel Aviv and US military installations in the Persian Gulf.

More recently, the digital battlefield has seen viral videos reportedly created by Iranian teams featuring Donald Trump, Jeffrey Epstein, Satan, Benjamin Netanyahu, and other prominent figures depicted as Lego figurines in bizarre scenarios. These developments mark the rise of what experts are now calling “slopaganda.”

The term “slopaganda,” coined last year by researchers Mark Alfano and Michał Klincewicz in a paper published in Filosofiska Notiser, describes AI-generated content created specifically for propaganda purposes. Traditional propaganda aims to manipulate beliefs, emotions, and cognitive processes for political ends. When powered by generative AI, it transforms into something potentially more insidious and harder to combat.

The researchers note that the proliferation of this content has exceeded even their initial concerns. One notable example is an AI-generated video posted by US president Donald Trump that depicted him piloting a fighter jet while wearing a crown and dropping excrement on American protesters. Another showed an exaggerated vision of his presidential library as an ostentatious golden skyscraper.

What makes slopaganda particularly effective is its ability to bypass normal cognitive defenses. It captures attention through emotionally charged content delivered to distracted audiences scrolling through social media feeds. Unlike traditional propaganda, slopaganda often doesn’t try to appear realistic—no one actually believes Trump can pilot an F-16 fighter jet or that plastic Lego figures are conspiring together.

Instead, this content works by creating symbolic associations: connecting Trump with Satan, the United States with evil, and so on. The power lies not in convincing people of literal truths but in establishing emotional connections and reinforcing existing biases.

More concerning is the dilution of what researchers call our “epistemic environment”—the shared information space that helps societies function. By flooding this space with falsehoods and distortions, slopaganda makes it increasingly difficult to distinguish genuine information from fabrication.

During crises, when authoritative information is scarce, misleading slopaganda can spread rapidly. Once false information takes hold in someone's mind, it is notoriously difficult to dislodge. Even if only a small percentage of viewers are misled, the massive reach of social platforms means the cumulative effect on public discourse can be significant.

The researchers identify another troubling consequence: as people become more aware of AI-generated content, they may overcorrect and mistakenly dismiss authentic content as fake. This erosion of trust in legitimate information sources can lead to a nihilistic information environment where people simply choose to believe whatever confirms their existing views or triggers desired emotional responses.

In societies already struggling with political polarization and multiple crises, the breakdown of shared factual foundations only exacerbates existing tensions.

The researchers propose a three-pronged approach to address this growing threat. First, individuals must develop stronger digital literacy skills, learning to identify AI-generated content and verify information through multiple reliable sources. Second, the technology industry and regulators should implement technical solutions such as digital watermarking for AI-generated content. Finally, large tech companies that have enabled this phenomenon should be held accountable through taxation and regulatory frameworks that fund both oversight and digital literacy education.

While slopaganda may be impossible to eliminate entirely in our increasingly AI-powered information ecosystem, the researchers believe that with appropriate foresight, education, and regulatory action, societies can adapt to this challenge before it fundamentally undermines our shared reality.

As conflicts continue to play out in both physical and digital realms, the ability to navigate this new information landscape may prove crucial for maintaining functional democracies and informed citizenry.



A professional organisation dedicated to combating disinformation through cutting-edge research, advanced monitoring tools, and coordinated response strategies.

Company

Disinformation Commission LLC
30 N Gould ST STE R
Sheridan, WY 82801
USA

© 2026 Disinformation Commission LLC. All rights reserved.