Russia’s AI-Powered Disinformation Campaign Targets African Media
Russian operatives have deployed sophisticated artificial intelligence tools to create a fictional academic expert as part of a coordinated disinformation campaign across multiple African nations, according to reports from major tech companies and digital watchdog organizations.
The scheme centered on a fabricated geopolitical commentator named “Dr. Manuel Godsin,” whose AI-generated articles promoting pro-Kremlin narratives appeared in mainstream media outlets across at least eight African countries: Angola, Ghana, Kenya, Mali, Nigeria, South Africa, Togo, and Uganda.
OpenAI, the company behind the popular ChatGPT application, revealed it received intelligence from Meta about Russian operatives using its language model to generate content. “The ChatGPT account’s main activity was generating social media posts and long-form commentary articles about geopolitics in sub-Saharan Africa,” OpenAI stated in its February report on malicious platform uses. “The user mainly prompted in English but sometimes input Russian-language instructions that they attributed to their manager.”
Meta, which owns Facebook, reported dismantling 37 accounts and 29 pages involved in the operation for violating its policies on “Coordinated Inauthentic Behavior.” The company noted that the network leveraged AI-generated content to appear authentic to local audiences, including profile photos and promotional materials.
Code for Africa (CfA), a data journalism nonprofit, conducted an independent investigation that verified the Russian influence operation. Their analysis uncovered 38 pieces of manipulated content that were published 73 times across 27 different websites throughout the region.
The fabricated persona of “Dr. Godsin” was meticulously crafted. In author biographies distributed to African media outlets, Godsin was described as holding advanced degrees from prestigious Scandinavian universities. According to his profile, he possessed a master’s degree in international crisis management from the University of Oslo and a doctorate from the University of Bergen.
However, investigators found no evidence of his existence at either institution. The University of Oslo confirmed it offers no program in international crisis management and has no record of any student by that name. Researchers likewise found no trace of him in the University of Bergen’s records.
The deception extended beyond academic credentials. Through reverse-image searches, CfA determined that the photograph attributed to Godsin actually belonged to a Russian law student named Mikhail Malyarov Yurievich, who had posted it on a legal networking site in the 2010s. And although Godsin was presented as the author of several books, none could be found in any catalogue or database.
The African Digital Democracy Observatory (ADDO) noted that this operation mimics “information laundering” and “paid punditry” techniques previously employed by Chinese state agencies in the early 2020s. Their March 17 report highlighted that the content was “amplified by fake think tank websites, with some of the articles republished on global platforms, such as the Microsoft-owned global news portal MSN, as supposed credible ‘expert analysis.'”
The investigation revealed connections to a broader Russian influence apparatus targeting the African continent. “The Godsin operation appears interwoven with a broader Kremlin-aligned propaganda machine targeting Africa,” ADDO concluded. “A central node in that ecosystem is African Initiative, a Moscow-based state-funded agency focused on Africa, which was launched in 2023.”
Analysis by CfA showed that mainstream news websites frequently published Godsin’s commentaries shortly after African Initiative posted similar content on the same topics, suggesting a coordinated distribution strategy.
The operation represents a concerning evolution in disinformation tactics, with potential ramifications beyond the immediate spread of false narratives. “The planting of misinformation, and in some cases of clear disinformation, in mainstream media is not only about pushing a particular narrative,” ADDO warned. “It is also an attack on the integrity of the news ecosystem, with a concomitant effect on trust in news media that serves the ends of actors intent on destroying the integrity of information in general.”
This sophisticated campaign highlights the growing challenge media organizations face in verifying expert sources and content authenticity in an era where artificial intelligence can generate increasingly convincing material designed to manipulate public opinion across national borders.