A pro-Russia disinformation campaign is rapidly expanding its reach by exploiting consumer artificial intelligence tools, according to new research published last week. The operation has dramatically increased its content production to amplify divisive narratives around global elections, Ukraine, immigration, and other contentious issues.
The campaign—variously identified as Operation Overload, Matryoshka, or Storm-1679 by different research groups—has been active since 2023 and has been linked to the Russian government by multiple organizations, including Microsoft and the Institute for Strategic Dialogue. Its primary strategy involves creating fake media outlets to disseminate false narratives designed to exacerbate societal divisions in democratic countries.
While the operation targets audiences worldwide, including the United States, researchers have identified Ukraine as the main focus. Hundreds of AI-manipulated videos have been distributed to promote pro-Russian narratives throughout the region.
The joint report from Reset Tech, a London-based nonprofit tracking disinformation campaigns, and Check First, a Finnish software company, reveals a striking increase in content production. Between July 2023 and June 2024, researchers identified 230 unique pieces of content promoted by the campaign. However, in just the eight months between September 2024 and May 2025, that number exploded to 587 unique content pieces, with the majority created using AI tools.
“This marks a shift toward more scalable, multilingual, and increasingly sophisticated propaganda tactics,” the researchers wrote. The report highlights how freely available online AI tools have enabled what the researchers call “content amalgamation”—a technique in which operators use AI to churn out many pieces of content pushing the same narrative.
The campaign’s content has received millions of views globally, raising concerns about its potential impact on public discourse and democratic processes. The researchers noted the increasingly sophisticated nature of these operations, which adapt quickly to emerging technologies and platforms.
Aleksandra Atanasova, lead open-source intelligence researcher at Reset Tech, expressed surprise at the diversity of content being produced. “What came as a surprise to me was the diversity of the content, the different types of content that they started using,” she told WIRED. “It’s like they have diversified their palette to catch as many different angles of those stories. They’re layering up different types of content, one after another.”
The campaign’s content portfolio includes manipulated images, videos, QR codes, and fraudulent websites—all designed to appear legitimate while spreading disinformation. This multi-format approach allows the operation to reach different audience segments and circumvent content moderation systems on various platforms.
The findings come amid growing concerns about AI’s role in amplifying disinformation, particularly during election cycles. As AI tools become more accessible and sophisticated, the barrier to entry for creating convincing fake content continues to lower, presenting significant challenges for social media platforms, fact-checkers, and government agencies attempting to combat foreign influence operations.
Security experts warn that such campaigns represent a new frontier in information warfare, where the volume and convincing nature of AI-generated content can overwhelm traditional verification methods. The Russian-linked campaign demonstrates how state actors can leverage commercially available AI to achieve geopolitical objectives without developing proprietary technology.
With the war in Ukraine ongoing and elections approaching in several democratic nations, cybersecurity and disinformation researchers expect such operations to intensify—exploiting social divisions and eroding trust in institutions through increasingly sophisticated technological means.