Russian Propagandists Use AI and Fake Personas to Plant Disinformation in African Media
Russian operatives have successfully infiltrated mainstream African media outlets by exploiting artificial intelligence tools and fabricated academic personas, according to a detailed investigation by OpenAI and Code for Africa (CfA).
The sophisticated disinformation campaign centered on a fictitious academic named “Dr. Manuel Godsin,” who published dozens of articles across African news platforms promoting pro-Russian narratives while criticizing Ukraine, the United States, and the United Kingdom.
OpenAI first exposed the operation in its February 2026 report titled “Disrupting Malicious Uses of Our Models,” revealing that ChatGPT was being used to generate geopolitical commentary under the Godsin persona. CfA’s subsequent investigation confirmed that Godsin was entirely fabricated, with his profile photos traced to an individual in St. Petersburg, Russia.
“This is a textbook example of what we call a ‘paper person’ operation,” said a CfA analyst. “A synthetic identity given enough surface-level credibility to pass editorial gatekeeping in under-resourced newsrooms.”
The fictional Godsin claimed to hold a PhD from the University of Bergen and a master’s degree in International Crisis Management from the University of Oslo. Both institutions confirmed to investigators that no such person appeared in their records, and the University of Oslo specifically noted that it offers no program in International Crisis Management.
The operation’s reach proved extensive, with CfA identifying 42 articles by Godsin published 77 times across 27 websites in eight African countries. The majority of these publications—45 in total—appeared on platforms owned by South Africa’s Independent Media, the country’s third-largest media conglomerate.
Other outlets that published Godsin’s content included Nigeria’s Vanguard media group, Kenya’s CapitalFM, and the Angolan crime-journalism platform Na Mira do Crime. Most concerning of all, his articles also appeared on Microsoft’s MSN portal, which explicitly prohibits “hoaxes, false information, propaganda, and deliberate misinformation.”
Meta identified and took action against a network of 37 Facebook accounts and 29 pages originating in Russia that amplified Godsin’s content while posing as local, grassroots news sources targeting audiences in Sub-Saharan Africa.
The operators took deliberate steps to disguise the AI-generated nature of their content, including instructing ChatGPT to avoid em dashes, a punctuation mark that appears unusually often in AI-generated text. They also switched between different AI models to bypass detection and safety guardrails. The sketch below illustrates the kind of shallow stylometric check that the em-dash instruction defeats.
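As an illustration only: the following is a minimal Python sketch of a naive em-dash-frequency check of the sort such a prompt instruction would sidestep. The function names and the threshold value are hypothetical assumptions made for this example, not details drawn from any detection tool named in the investigation.

```python
# Naive stylometric heuristic: flag text with an unusually high density of
# em dashes. The threshold below is an illustrative assumption, not a value
# taken from any real detector.

EM_DASH = "\u2014"  # U+2014, the punctuation mark discussed in the article


def em_dash_rate(text: str) -> float:
    """Return the number of em dashes per 1,000 characters."""
    if not text:
        return 0.0
    return text.count(EM_DASH) / len(text) * 1000.0


def flag_possible_ai_text(text: str, threshold: float = 1.5) -> bool:
    """Flag text whose em-dash density exceeds the (hypothetical) threshold.

    A prompt instruction such as "do not use em dashes" defeats this check
    entirely, which is precisely the evasion the investigation describes.
    """
    return em_dash_rate(text) > threshold


if __name__ == "__main__":
    sample = "The outcome was clear\u2014decisive, even\u2014and widely reported."
    print(f"em-dash rate: {em_dash_rate(sample):.2f} per 1,000 characters")
    print("flagged as possible AI text:", flag_possible_ai_text(sample))
```

The point of the sketch is its fragility: any single-feature heuristic collapses the moment an operator tells the model to avoid that feature, which is why the model-switching described above is the more consequential evasion.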
Timeline analysis by CfA revealed that many Godsin articles closely followed themes first presented by African Initiative, a Moscow-based state-funded agency established in 2023 that presents itself as an “information bridge” between Russia and Africa.
One notable example involved a fabricated story claiming the U.S. was secretly providing humanitarian aid to Orania, an Afrikaner-only town in South Africa, despite cutting off broader aid to the country. Although Independent Media was later compelled to retract this article, it had already spread to YouTube channels and Nigerian forums, reaching millions.
“This illustrates how, when basic safeguards are absent or ignored, African news organizations can act as vectors for foreign information manipulation and influence,” said Sbu Ngalwa, secretary-general of The African Editors Forum. “Despite digital disruption resulting in smaller, hollowed-out newsrooms, the need for fact-checking cannot be overstated.”
The investigation underscores a growing trend of state actors using AI to generate and disseminate disinformation through legitimate media channels. The technique mirrors “information laundering” methods previously employed by Chinese state agencies in the early 2020s.
By successfully placing fabricated stories in mainstream outlets, these operations lend propaganda a veneer of legitimacy that persists even after retractions. The result is a sophisticated information laundering loop: generate content, place it in legitimate African media, then republish it on Russian-aligned platforms to amplify its credibility.
As newsrooms continue to face resource constraints and mounting pressure to publish quickly, vulnerability to such operations may increase unless stronger verification protocols are implemented.