In a significant development at the intersection of artificial intelligence and geopolitical influence operations, OpenAI has taken action against a network of ChatGPT accounts linked to Rybar, a prominent pro-Kremlin media outlet. The accounts were reportedly using AI tools to generate social media content and develop proposals for covert influence campaigns targeting African nations.

According to a threat report published by OpenAI last week, the campaign—dubbed “Fish Food”—was essentially operating as a “content farm” that produced large volumes of multilingual material later distributed across Telegram and X (formerly Twitter). The company’s investigation determined that at least some of the banned accounts likely originated from Russia.

“In essence, the ChatGPT activity seemed to serve as a content farm for these accounts,” OpenAI researchers stated in their report, while acknowledging they couldn’t independently verify how the AI-generated content ultimately appeared online.

The operation primarily utilized Russian-language prompts to generate content in multiple languages, including English and Spanish. Some of these prompts were specifically designed to create batches of English-language comments that would later be posted by various Telegram and X accounts with no declared association with Rybar, whose name translates to “fisherman” in Russian.

Beyond simple content generation, the operation had more ambitious aims. OpenAI researchers discovered that users had requested help developing commercial plans for covert interference campaigns across Africa on Rybar’s behalf. These plans included managing social media accounts, launching a bilingual investigative journalism website focused on African issues, and arranging paid content placements in French-language media outlets.

One particularly concerning prompt involved editing a proposal for what appeared to be an election interference team. The plan included strategies for building local networks and organizing large-scale events alongside online influence efforts. Other prompts sought information about electoral processes in Burundi and Cameroon, while also discussing campaign options in Madagascar—including suggestions aimed at inflaming protests. According to the report, the most ambitious project outlined in these prompts envisioned an annual budget of up to $600,000.

“The content generated by this operation was typical of covert Russian influence operations over the years,” the researchers noted. “It typically praised Russia and its allies, such as Belarus, criticized Ukraine, and accused Western countries of foreign interference.”

Rybar maintains a significant online presence, with its main Telegram channel boasting approximately 1.4 million subscribers. Despite this sizable audience, OpenAI indicated it found no evidence that the network’s content had been significantly amplified by mainstream media outlets or that any matching on-the-ground activities had materialized in Africa.

Founded by former Russian Defense Ministry press officer Mikhail Zvinchuk and his associate Denis Shchukin, Rybar has established itself as an influential pro-war military blog. Russian independent media outlets have reported that the late Yevgeny Prigozhin, founder of the Wagner private military company who died in a plane crash in August 2023, was allegedly involved in financing the project.

This case highlights the evolving nature of information operations in the digital age, where AI tools can be leveraged to scale disinformation and influence campaigns. It also underscores the growing focus on Africa as a theater for geopolitical competition, where various powers seek to shape narratives and potentially influence political outcomes.

The discovery comes amid heightened concerns about AI’s role in amplifying disinformation and its potential to be weaponized for political purposes, particularly as numerous countries across Africa prepare for elections in the coming years. OpenAI’s actions represent one of the first publicly disclosed instances of an AI company taking direct action against accounts linked to state-affiliated influence operations.

