Russian Media Network “Rybar” Exposed in OpenAI’s Security Report for AI Misuse
OpenAI has uncovered a Russian influence operation that leveraged its artificial intelligence systems to generate propaganda and plan covert influence campaigns across multiple countries. The details were revealed in OpenAI’s February 2026 security update titled “Disrupting malicious uses of our models.”
The operation, code-named “Fish Food,” was linked to the Russian media network Rybar, which maintains a significant online presence with approximately 1.4 million subscribers on its main Russian-language Telegram channel. OpenAI’s investigation resulted in the banning of several ChatGPT accounts connected to the network.
According to the report, the accounts, which likely originated in Russia, generated content that was distributed both through official Rybar-branded channels on Telegram and X (formerly Twitter) and through a broader network of seemingly unaffiliated social media profiles. This distribution strategy allowed the content to reach audiences without transparent disclosure of its Russian origins.
The investigation revealed a sophisticated approach to content production. In one documented instance, a single ChatGPT prompt generated seven distinct tweets, six of which were later published on X by different accounts in the network. The content was produced in multiple languages, including Russian, English, and Spanish, suggesting an international target audience for the influence campaign.
Perhaps most concerning were attempts to use OpenAI’s systems to draft commercial proposals for expanding Rybar’s covert influence operations into Africa. These proposals outlined comprehensive plans including the creation and management of social media accounts, the launch of a bilingual “investigative journalism” website focused on African issues, placement of paid content in French-language media outlets, and the development of a network of amplifier accounts.
One of the proposals explicitly detailed an estimated annual budget of up to $600,000 for these operations, indicating the scale and financial backing behind Rybar’s influence efforts.
The content generated through OpenAI’s systems consistently aligned with narratives typically associated with Russian influence campaigns. These included positive portrayals of Russia and its allies, criticism of Ukraine, and allegations of Western interference in other countries’ affairs—themes that have been common in Russian information operations since the full-scale invasion of Ukraine in 2022.
Media analysts note that this case represents an evolution in how state-affiliated influence operations are attempting to harness advanced AI systems for propaganda purposes. The use of AI allows such operations to scale content production efficiently, customize messaging for different audiences, and potentially bypass traditional detection methods.
While OpenAI reported that it did not observe the AI-generated content being amplified by major mainstream media outlets, the operation's multilingual approach and significant follower base on Telegram raise concerns about its potential reach and impact.
This revelation comes amid growing global concerns about the misuse of generative AI technologies for disinformation and propaganda. In recent years, technology companies, governments, and international organizations have struggled to establish effective guardrails and detection systems to prevent such abuses.
OpenAI emphasized its commitment to continuing efforts to detect and disrupt malicious uses of its tools. The company stated it is working collaboratively with industry partners to counter influence operations and fraud-related activities, though specific details about these partnerships were not disclosed in the report.
Cybersecurity experts suggest this case highlights the ongoing cat-and-mouse game between AI developers and those seeking to exploit these technologies for propaganda purposes, underscoring the need for robust monitoring systems and clear policies regarding AI misuse.
12 Comments
Rybar’s use of ChatGPT to generate propaganda across social media is a disturbing example of how AI can be weaponized. Kudos to OpenAI for detecting and disrupting this, but the broader challenge of combating disinformation online remains daunting.
Absolutely. This case highlights the importance of transparency, accountability, and proactive security measures when it comes to large language models. Ongoing vigilance will be critical.
While it’s good that OpenAI was able to detect and disrupt this Russian propaganda operation, the broader implications of this story are quite troubling. The potential for AI to be misused for disinformation campaigns is a serious threat that requires vigilance.
Well said. This is a wake-up call for the entire AI community to redouble efforts in developing robust security measures and ethical frameworks to mitigate the risks of large language model abuse.
This is a concerning report on OpenAI’s uncovering of a Russian propaganda operation using their language models. It’s good they took action to shut down the offending accounts, but it highlights the potential for AI misuse that needs to be carefully monitored.
Agreed, the scale and sophistication of this operation are alarming. Responsible AI development and deployment has to include robust safeguards against malicious use.
The fact that a Russian media network was able to leverage ChatGPT to generate propaganda at scale is deeply concerning. It underscores the urgent need for stronger safeguards and oversight when it comes to the deployment of large language models.
Absolutely. This case highlights the importance of continued research and innovation to stay ahead of bad actors who would seek to exploit these powerful technologies for nefarious purposes.
It’s concerning to see how easily AI tools can be exploited for propaganda purposes. While OpenAI took the right steps here, this is likely just the tip of the iceberg when it comes to the potential misuse of these technologies.
Well said. The battle against disinformation powered by AI will be an ongoing challenge that requires collaboration between tech companies, governments, and civil society.
This news about the Russian propaganda operation using ChatGPT is a sobering reminder of the dual-use nature of AI. While the technology holds great promise, it must be carefully managed to prevent malicious actors from abusing it.
Agreed. Responsible AI development requires a multipronged approach – robust security measures, transparency, and active monitoring to stay ahead of potential misuse.