Russian AI-Driven Disinformation Campaign Targets Western Media, Experts Warn
A sophisticated Russian influence operation has been systematically creating fake videos impersonating major Western media outlets in an effort to spread pro-Kremlin narratives, according to cybersecurity experts and media watchdogs.
McKenzie Sadeghi, AI and foreign influence editor at NewsGuard, revealed that since early 2024, a group known as Storm-1679 has been “publishing pro-Kremlin content en masse in the form of videos” that mimic trusted news organizations.
“If even just one or a few of their fake videos go viral per year, that makes all of the other videos worth it,” Sadeghi explained in a recent interview. The operation appears to be playing a numbers game, creating numerous pieces of content in hopes that some will break through and reach mainstream audiences.
While Russian online influence operations are not new, security experts note that artificial intelligence tools have significantly raised the stakes by making fake content increasingly difficult to identify. Microsoft’s Threat Analysis Center has documented how Storm-1679 developed a distinct technique in 2024 that combines video footage with AI-generated audio impersonations of celebrity and expert voices.
One notable example emerged ahead of the 2024 Paris Olympics, featuring a fabricated documentary series complete with Netflix’s logo and an AI-generated deepfake of Tom Cruise’s voice as narrator. In December 2024, the group deployed similar tactics to create videos impersonating journalists, professors, and law enforcement officials with the apparent goal of undermining trust in NATO countries and Ukraine.
“They are just throwing spaghetti, trying to see what’s going to stick on a wall,” said Ivana Stradner, a Russia researcher at the Foundation for Defense of Democracies, a Washington-based think tank.
The timing of these campaigns is strategic, according to Sadeghi. “Timing and the news cycle play a big role in Storm-1679’s operations,” she noted. “It typically tends to surge and launch a wave of fakes around a particular news event,” targeting elections, sporting events, or developments in ongoing conflicts.
Though most videos fail to gain significant traction and are quickly debunked, occasional successes demonstrate the operation’s potential impact. In February, the group created a fabricated E! News video claiming the U.S. Agency for International Development had paid celebrities to visit Ukraine after Russia’s 2022 invasion. Despite being fake, the video was shared by high-profile figures including Donald Trump Jr. and Elon Musk to their millions of followers on X before being discredited.
“It’s problematic because if they fall for this, why would you expect someone else not to fall for this?” Stradner questioned, highlighting how even influential figures can inadvertently amplify false information.
E! News later confirmed to Reuters that the video was not authentic. A BBC spokesperson acknowledged awareness that Storm-1679 “impersonates BBC News and our journalists,” advising audiences to “check that any content posing as BBC journalism is on a BBC News platform.” ABC News, E! News, and Netflix did not immediately respond to requests for comment on the incidents.
The escalation of these disinformation efforts coincides with significant policy shifts in the United States. The Trump administration has scaled back federal agencies tasked with combating disinformation. Secretary of State Marco Rubio recently shuttered the State Department’s Counter Foreign Information Manipulation and Interference Office, formerly known as the Global Engagement Center, accusing it of spending “millions of dollars to actively silence and censor the voices of Americans.”
Similarly, the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency has halted its efforts to address misinformation related to U.S. elections.
These policy changes raise concerns among security experts. “Washington’s decision to scale back its information operations efforts is a dream come true for Putin,” Stradner warned, suggesting the timing could create a favorable environment for foreign influence campaigns to operate with less scrutiny.
Spokespeople for the State Department and CISA did not respond to requests for comment on these policy changes or their potential impact on countering Russian disinformation efforts.
As AI technology continues to advance, the challenge of identifying and countering such sophisticated operations grows increasingly complex, leaving media organizations, tech platforms, and individuals to navigate an evolving landscape of digital deception.
Comments
This is a prime example of how AI can be weaponized for malicious purposes. I hope the security experts can stay one step ahead and find ways to quickly identify and take down these fake videos.
Agreed. Combating AI-driven disinformation is going to require a multi-pronged approach involving tech companies, media, and governments working together.
I’m concerned about the growing threat of AI-powered disinformation campaigns. We need to invest in better detection and mitigation strategies to protect the integrity of our information landscape.
Absolutely. Promoting media literacy and providing the public with the tools to spot and debunk fake content should be a top priority.
Wow, the level of sophistication in these AI-generated fakes is really alarming. It’s going to be critical for media outlets and the public to develop better tools for detecting and debunking this kind of content.
It’s disturbing to think about the potential impact of these AI-generated fakes spreading across social media. We really need to strengthen media literacy and critical thinking skills to help the public navigate this challenge.
This is really concerning. It’s getting harder and harder to trust what we see online these days. I hope cybersecurity experts can stay ahead of these AI-generated disinformation campaigns.
Agreed, the rise of AI-powered fake content is a major challenge for media credibility. We all need to be more vigilant about verifying sources and fact-checking claims.
I’m not surprised Russia is behind this. They’ve been using online propaganda for years to sow discord and undermine Western institutions. We need robust safeguards to protect the integrity of our information ecosystem.
Absolutely. Tackling foreign disinformation has to be a top priority for policymakers and tech platforms. The stakes are too high to ignore this threat.
This is a really worrying development. The sophistication of these AI-generated fakes is truly alarming. I hope the relevant authorities can take swift action to address this threat.
I’m not surprised that Russia is behind this. They’ve been using online propaganda for years to sow discord and undermine Western institutions. We need to take this threat seriously and strengthen our defenses.
Agreed. Combating foreign disinformation campaigns should be a top national security priority. We need a coordinated, multi-stakeholder effort to protect our information ecosystem.