AI-Generated Deepfakes Linked to Russian Intelligence Flood Social Media
A new wave of sophisticated AI-generated deepfakes linked to Russian intelligence operations is spreading across global social media networks, amplifying disinformation campaigns that target Western democracies.
These synthetic videos are strategically undermining the credibility of Western institutions and causing confusion among voters at a critical time when AI video generation tools are becoming more accessible and realistic.
The concern centers on how this form of information warfare affects not only established democracies but also more vulnerable regions such as East Africa, where digital literacy and defenses against such manipulation are less developed.
The true scale of this threat came to light when Professor Alan Read of King’s College London discovered a deepfake featuring his own face and voice. The fabricated video showed the professor delivering an inflammatory political statement condemning French President Emmanuel Macron and criticizing European Union structures—statements he never made.
Security experts note this is not an isolated incident but part of a calculated effort by operatives linked to the Kremlin. By appropriating the identities of respected but lesser-known academics and experts, these campaigns circumvent standard content moderation systems on major platforms. The apparent strategic goal is to discredit Ukraine’s government and undermine Western financial and military support for Kyiv.
“What makes these attacks particularly effective is their targeting of mid-level influencers rather than high-profile figures,” explains Dr. Miriam Kovacs, digital security researcher at the Oxford Internet Institute. “Platform algorithms are better at detecting fakes of well-known politicians than academics or regional experts who still carry significant credibility.”
Digital analysts in Kenya and other East African nations express particular concern about the implications. While major European tech companies and governments are struggling to contain the spread of these deepfakes, regions with less regulated digital ecosystems face even greater challenges.
“Our content moderation infrastructure simply isn’t equipped to handle this level of sophistication,” says Joseph Kimani, a cybersecurity specialist based in Nairobi. “When you couple advanced AI technology with our existing challenges around digital literacy, you create perfect conditions for massive disinformation campaigns.”
The technological advances driving this phenomenon have accelerated rapidly. Applications like OpenAI’s Sora and similar tools can now generate realistic video content at minimal cost. While established AI companies implement safety measures and watermarking, numerous alternative applications bypass these protections, creating an environment where verification becomes increasingly difficult.
Automated networks then amplify this synthetic content, rapidly spreading it to millions of users before fact-checkers or platform moderators can respond.
“This represents a fundamental shift in information warfare,” notes former NATO cybersecurity advisor Richard Townsend. “We’re no longer talking about selective editing or out-of-context clips. These are entirely fabricated realities that appear authentic even to reasonably skeptical viewers.”
International security experts warn that current regulatory frameworks remain inadequate for addressing this evolving threat. Traditional approaches to media literacy and fact-checking struggle to keep pace with technology that can produce seemingly authentic video evidence of events that never occurred.
The EU’s Digital Services Act and similar regulations worldwide were designed for previous generations of disinformation but may prove insufficient against these advanced synthetic media campaigns.
As election seasons approach in multiple countries across Africa, Europe and North America, the potential impact of these deepfakes on democratic processes has become a primary security concern. Intelligence agencies report increased activity from Russian-aligned actors targeting specific electoral battlegrounds with customized synthetic content.
Addressing this challenge will require unprecedented cooperation between technology companies, government agencies, and media organizations to develop both technical and social solutions before public trust in visual evidence erodes completely.