OpenAI Blocks Russian Propaganda Network Using ChatGPT for Disinformation Campaigns

OpenAI has taken decisive action against a Russian propaganda network known as “Rybar” by blocking multiple ChatGPT accounts linked to the organization. The accounts were identified during a counter-intelligence operation dubbed “Fish Food,” which revealed an extensive effort to generate propaganda content with AI tools.

According to a recent OpenAI report, the propaganda network utilized ChatGPT to create posts and comments that were subsequently distributed across various social media platforms. The content appeared under the official “Rybar” brand as well as through seemingly unrelated accounts designed to appear as authentic users from different regions around the world.

“We banned a cluster of ChatGPT accounts that were linked to the Russian-origin ‘Rybar’ network,” OpenAI stated in their report. “This cluster translated and generated content that was posted on ‘Rybar’ social media accounts, but it also appears to have served as a content farm for a wider network of accounts on X and Telegram that bore no overt relationship to the ‘Rybar’ group.”

The investigation found that the propaganda operatives frequently used ChatGPT to mass-produce batches of short social media comments. These comments were then disseminated by accounts on X and Telegram strategically designed to appear as though they originated from diverse geographical locations, creating an illusion of widespread organic engagement.

While the blocked users communicated in Russian, they produced propaganda materials in several languages, including Spanish and English. The report detailed that these materials consistently “praised Russia and its allies (such as Belarus), criticised Ukraine and accused Western countries of foreign interference.”

The operation’s sophistication extended beyond text generation. Rybar operatives also utilized OpenAI’s advanced video generation tool, Sora, to create promotional videos supporting their messaging campaigns. This represents one of the first documented cases of AI-generated video being weaponized for state-backed propaganda efforts.

Perhaps most concerning was the discovery that one of the blocked accounts had requested detailed information-operation and psychological-operation plans specifically designed to interfere with political processes and elections in several African nations. Evidence suggests the propagandists were establishing a network of agents and planning large-scale events and protests targeting the Democratic Republic of the Congo, Cameroon, Burundi, and Madagascar.

The scale of these operations appears significant, with the report noting that the budget for the largest propaganda operation in Africa amounted to US$600,000. This aligns with broader concerns about Russian influence operations on the continent, which have intensified in recent years as Moscow seeks to expand its geopolitical influence.

The “Rybar” network has been on Western intelligence agencies’ radar for some time. The OpenAI report noted that two years ago, the United States government offered a reward of up to US$10 million for information regarding the “Rybar” project, underscoring its significance as a threat to information integrity.

This development comes amid growing concerns about the misuse of generative AI technologies for disinformation campaigns. As AI tools become more sophisticated and accessible, their potential for weaponization in information warfare continues to present significant challenges for technology companies and national security agencies alike.

OpenAI’s action represents one of the most high-profile interventions by an AI company against state-affiliated propaganda networks, highlighting both the evolving nature of digital influence operations and the increasing responsibilities falling to private technology companies in combating them.


