Russian Operatives Deploy American AI Tools for Information Campaigns in Africa, OpenAI Reports

In a striking example of technological hypocrisy, Kremlin-linked actors are secretly using American artificial intelligence systems to wage sophisticated influence campaigns across Africa, according to a new report from OpenAI. The operations are proceeding even as Moscow publicly touts its technological self-sufficiency and rejects Western digital infrastructure.

The AI research company’s latest investigation, “Disrupting Malicious Uses of Our Models,” details how Russian operatives have leveraged ChatGPT and other OpenAI products to generate fabricated content aimed at manipulating public opinion in several African nations. The report specifically outlines two major influence operations with apparent Russian origins.

The first operation, codenamed “Fish Food,” involved the Rybar network, one of the largest information channels operating in service to Kremlin interests. Through a major Telegram channel and other platforms, Rybar members systematically employed ChatGPT to draft articles, create social media posts, and even generate accompanying comments—essentially establishing what researchers describe as an AI-powered “content farm” for disinformation.

Content production targeted both domestic Russian and international audiences, with materials generated in multiple languages including English and Spanish. The operation gained considerable traction online, with one ChatGPT-generated tweet from Rybar’s account receiving over 150,000 views.

“The Rybar team effectively transformed AI tools into an assembly line for coordinated disinformation,” noted the report. “Their content reflected typical patterns seen in covert Russian information campaigns.”

The operation particularly focused on Africa, with prompts addressing information campaigns regarding the Democratic Republic of the Congo, electoral processes in Burundi and Cameroon, and campaign scenarios for Madagascar—including specific strategies for inciting protests.

The second operation, codenamed “No Bell,” followed similar patterns but operated with greater sophistication. OpenAI blocked accounts generating analytical articles and social media content on African geopolitics that were later published by Facebook pages posing as legitimate news outlets in South Africa, Ghana, Kenya, and Angola.

The materials consistently promoted pro-Russian narratives while criticizing Ukraine, the United Kingdom, and the United States. They included personal attacks on Ukrainian President Volodymyr Zelenskyy and US President Donald Trump, alongside more targeted regional content addressing issues such as allegations that German arms manufacturer Rheinmetall used South African subsidiaries to circumvent export controls.

To enhance credibility, these articles were often published under fabricated journalist identities. Operators specifically requested ChatGPT to write in the style of a “real, living journalist” and to remove stylistic markers that might identify the text as AI-generated, such as long dashes.

The revelations come at a critical moment as African nations increasingly find themselves at the center of geopolitical competition between Russia, China, and Western powers. Russia has intensified its presence on the continent in recent years through military contractors, diplomatic initiatives, and now, sophisticated information operations.

“These examples directly demonstrate Russia’s strategy of leveraging every available technology to maintain influence and destabilize regions,” explained the report. “While combat may be limited to Ukrainian territory, Moscow is conducting operations across the globe, with information warfare particularly active in Africa.”

The findings highlight a concerning trend in the misuse of generative AI technologies. While companies like OpenAI design these systems for beneficial applications, malicious actors can repurpose them for propaganda, misinformation, and election interference—sometimes turning the tools against the very countries that developed them.

Security experts suggest this pattern of exploitation will likely accelerate as AI systems become more sophisticated and accessible, creating new challenges for technology companies, governments, and civil society organizations working to preserve information integrity.

For African nations specifically, the targeting of their information ecosystems represents yet another battleground where external powers compete for influence without regard for local democratic processes or social stability.


10 Comments

  1. Emma H. Smith

    This is a concerning report. Leveraging AI tools to wage disinformation campaigns is a troubling development. I hope the international community can work to address these sorts of malicious activities and protect the integrity of information in Africa and elsewhere.

    • Lucas Thomas

      I agree, the use of AI to generate fake content is a worrying trend. Transparency and accountability around the development and deployment of these technologies will be crucial going forward.

  2. Lucas V. Taylor

    While the report’s findings are alarming, I’m glad that organizations like OpenAI are working to expose and disrupt these kinds of malicious uses of their technology. Maintaining transparency and accountability around AI development and deployment will be essential going forward.

    • Lucas K. Hernandez

      Yes, the proactive stance taken by OpenAI is commendable. Collaborative efforts between tech companies, researchers, and policymakers will be crucial in staying ahead of bad actors trying to exploit these powerful tools for nefarious ends.

  3. Noah A. Martin

    I’m curious to know more about the specific tactics and scale of these Russian disinformation campaigns in Africa. What kind of content were they generating, and how effective were their efforts at manipulating public opinion? It’s an important issue that deserves further investigation.

    • Isabella N. Lopez

      Agreed, the details around the scope and impact of these operations would be valuable to understand. Uncovering the full extent of Russian interference in African information spaces is crucial to countering it.

  4. The Russians seem to have a knack for exploiting new technologies for their own propaganda goals. It’s disheartening to see them targeting African nations in this way. I hope the report’s findings lead to greater scrutiny and action to counter these influence operations.

    • Noah Rodriguez

      Yes, it’s a disturbing pattern we’ve seen time and again. The international community needs to be proactive in developing robust safeguards against the malicious use of AI and other emerging technologies.

  5. Robert M. Jones

    It’s disappointing, but not surprising, to see Russia exploiting advanced technologies like ChatGPT for malicious ends. Their continued efforts to sow division and undermine democratic institutions globally are a major threat. Vigilance and a coordinated response from the international community will be key.

    • Robert Smith

      Absolutely. The weaponization of AI for propaganda and disinformation is a worrying new frontier that requires a robust and multilateral approach to address. Safeguarding the information landscape, especially in vulnerable regions, must be a priority.


A professional organisation dedicated to combating disinformation through cutting-edge research, advanced monitoring tools, and coordinated response strategies.

Company

Disinformation Commission LLC
30 N Gould ST STE R
Sheridan, WY 82801
USA

© 2026 Disinformation Commission LLC. All rights reserved.