
In a significant shift within online propaganda tactics, major state-sponsored influence campaigns have widely adopted artificial intelligence tools, though with notably poor execution and limited impact, according to a new analysis by social media analytics firm Graphika.

The report examined nine ongoing influence operations, including those allegedly tied to the Chinese and Russian governments, finding that these campaigns increasingly use generative AI for creating images, videos, text, and translations – mirroring broader social media trends.

Researchers discovered that propaganda campaigns now rely on AI for core functions such as content creation and developing synthetic social media personas. While this has streamlined operations, the resulting content is often conspicuously low quality and generates minimal engagement with target audiences.

These findings challenge earlier predictions from experts who warned that authoritarian regimes would leverage increasingly sophisticated generative AI to produce highly convincing synthetic content capable of deceiving even discerning audiences in democratic societies.

“Influence operations have been systematically integrating AI tools, and a lot of it is low-quality, cheap AI slop,” said Dina Sadek, a senior analyst at Graphika and co-author of the report. She noted that despite new technology, these campaigns continue to struggle with the same problem they faced before AI adoption – their posts on Western social media platforms receive little to no attention.

The researchers found that AI-generated content from these established campaigns exhibits obvious flaws, ranging from unconvincing synthetic news anchors in YouTube videos to awkward translations and fake news websites that inadvertently include AI prompts in their headlines.

Online influence campaigns targeting American politics date back at least a decade, most notably with the Russia-based Internet Research Agency’s attempts to influence the 2016 presidential election through fabricated social media accounts.

Sadek explained that AI hasn’t revolutionized online propaganda but has made certain tasks easier to automate. “It might be low-quality content, but it’s very scalable on a mass scale. They’re able to just sit there, maybe one individual pressing buttons, to create all this content,” she said.

The report cites specific examples, including “Doppelganger,” an operation allegedly linked to the Kremlin by the U.S. Justice Department, which researchers say used AI to create unconvincing fake news websites. Another example, “Spamouflage,” which the Justice Department has connected to China, creates synthetic news personalities to spread divisive content on platforms like X (formerly Twitter) and YouTube.

Multiple operations were found using poor-quality deepfake audio. One campaign published deepfakes of celebrities like Oprah Winfrey and former President Barack Obama apparently commenting on India’s rising global influence, but the videos were unconvincing and gained little traction.

Another pro-Russia video, titled “Olympics Has Fallen,” appeared designed to criticize the 2024 Paris Summer Olympics. The video featured an AI-generated version of Tom Cruise, despite the actor having no connection to the similarly named 2013 Hollywood film “Olympus Has Fallen.” Researchers noted this content primarily circulated among accounts that typically share similar propaganda materials.

When contacted, representatives for China’s embassy in Washington, Russia’s Foreign Affairs Ministry, X and YouTube did not respond to requests for comment.

Despite their limited direct impact on human audiences, these propaganda efforts may serve another purpose in the age of AI chatbots, Sadek suggested. As companies continuously train their AI models by scraping internet content, flooding online spaces with propaganda could potentially influence these systems.

This concern appears validated by recent findings from the Institute for Strategic Dialogue, a pro-democracy nonprofit, which discovered that major AI chatbots frequently cite state-sponsored Russian news outlets – including some sanctioned by the European Union – in their responses to user queries.


