Western sanctions hit Russia’s most prominent state-controlled media outlets, RT and Sputnik, swiftly after the invasion of Ukraine, but Moscow has since found subtler, harder-to-trace channels for spreading its narratives across global information spaces.
These alternative propaganda mechanisms operate with significant advantages over their more conspicuous counterparts, according to cybersecurity experts tracking Russian information operations.
“Unlike traditional Russian propaganda outlets like media companies RT and Sputnik, which the West swiftly sanctioned at the start of the invasion of Ukraine, such campaigns allow for a level of plausible deniability that complicates counter-influence efforts,” said Sophie Williams-Dunning, a cyber and technology researcher at the Royal United Services Institute (RUSI), a prominent UK-based defense and security think tank.
Williams-Dunning’s assessment highlights a growing challenge for Western governments and tech platforms: Russian influence operations have evolved beyond easily identifiable state media channels into more sophisticated, less attributable networks of influence.
These covert campaigns typically employ a mix of tactics: proxy websites that appear independent, amplification through social media accounts with no overt Russian ties, and the strategic seeding of content through third-party channels that keep the Kremlin at arm’s length.
The sanctions imposed on RT and Sputnik—which included broadcast bans in the European Union and removal from major social media platforms in many Western markets—were initially seen as significant blows to Russia’s information warfare capabilities. However, security analysts now observe that these measures may have simply accelerated Moscow’s shift toward less transparent methods of narrative dissemination.
Russia’s adaptable approach to information operations has particularly flourished in regions where anti-Western sentiment already exists, including parts of Africa, Latin America, and Asia. In these areas, content originating from Russian sources often travels through complex networks of local media outlets, social media influencers, and websites that present themselves as independent news sources.
“What makes these operations particularly effective is their ability to blend into existing information ecosystems,” a cybersecurity expert from a major tech platform explained, speaking on condition of anonymity. “When content appears to come from local sources rather than directly from Moscow, it naturally faces less scrutiny and skepticism from audiences.”
The European Union’s foreign policy arm, the European External Action Service, has documented hundreds of instances where Russian narratives have been laundered through seemingly unrelated sources, creating what they term “information laundering” chains that obscure the original source of misleading claims.
Western intelligence agencies have identified several sophisticated Russian information campaigns since 2022 that operated through shell companies, contracted marketing firms, and even seemingly legitimate news aggregation services—all designed to maintain that crucial “plausible deniability” Williams-Dunning referenced.
Tech companies have responded by enhancing their threat intelligence capabilities, with platforms like Meta and Google implementing more sophisticated detection systems for coordinated inauthentic behavior. However, the cat-and-mouse game continues as Russian operators adapt their methods.
For policymakers, the challenge extends beyond simple content moderation. Addressing these more ambiguous influence operations requires balancing security concerns against free speech principles and avoiding overly broad definitions of “foreign influence” that could impact legitimate political discourse.
The evolution of Russian propaganda tactics reflects a broader trend in modern information warfare, where attribution becomes increasingly difficult and the lines between authentic domestic discourse and foreign manipulation grow blurrier. As Williams-Dunning and other researchers at institutions like RUSI continue to track these developments, both governments and platforms face mounting pressure to develop more nuanced approaches to counter-influence operations without undermining democratic values.
The problem remains particularly acute as global attention to the Ukraine conflict fluctuates, potentially creating openings for Russian narratives to gain traction in periods of reduced vigilance.