OpenAI Exposes Russian Media Network’s AI Misuse in Operation Fish Food
OpenAI has uncovered systematic misuse of its artificial intelligence models by Russian media organization Rybar, according to a new security report released by the company. The investigation, dubbed “Operation Fish Food,” revealed how the network leveraged ChatGPT to generate content for influence operations across multiple languages and platforms.
“We have blocked a number of ChatGPT accounts associated with the Rybar network,” OpenAI stated in its report. “The network generated content that was published online, sometimes by Rybar-branded accounts, and sometimes by social media accounts that had no official connection to it.”
Investigators found that Russian operatives primarily used the AI tools to create social media content in several languages, including Russian, English, and Spanish. The generated material was then distributed across Rybar's branded social media pages and its main website. In some instances, operatives also created promotional videos using OpenAI's Sora video generation tool.
A particularly concerning pattern emerged in which ChatGPT was used to generate English-language comments that were then posted from social media accounts designed to appear unaffiliated with Rybar. This tactic allowed the network to create an artificial impression of widespread support for particular viewpoints while concealing the true source of the messaging.
The operation's scope extended beyond simple content generation. Cybersecurity researchers discovered evidence of more sophisticated planning, including requests to ChatGPT to draft business plans for covert interference campaigns in Africa under the name "The Fisherman." In one case, an account asked for help editing a proposal specifically aimed at election interference, likely in an African nation.
These prompts outlined both digital and physical strategies, including building networks of local agents and organizing large-scale events. The investigation uncovered plans targeting several African nations, including the Democratic Republic of Congo, Burundi, Cameroon, and Madagascar. One particularly troubling proposal suggested inciting on-the-ground protests in Madagascar.
The financial scale of these operations appears significant, with the most ambitious project carrying an estimated annual budget of up to $600,000.
“The content generated by this operation was typical of Russia’s covert influence operations over the years,” OpenAI researchers noted. “It typically praised Russia and its allies (such as Belarus), criticized Ukraine, and accused Western countries of foreign interference.”
This revelation comes amid heightened international scrutiny of Russian influence operations. Earlier this year, the United States announced a reward of up to $10 million for information about the Russian project known as “Fisherman.” According to the U.S. State Department, the media outlet was previously financed by Yevgeny Prigozhin, the late owner of the Wagner Private Military Company. More recently, “Fisherman” has reportedly received funding from Rostec, a Russian defense-industrial organization currently under U.S. sanctions.
The exposure of Operation Fish Food highlights growing concerns about how advanced AI systems can be weaponized for disinformation campaigns and electoral interference. It also demonstrates the challenges technology companies face in preventing misuse of their platforms while maintaining access for legitimate users.
As AI tools become increasingly sophisticated and accessible, the potential for their exploitation in information warfare continues to rise. OpenAI’s investigation offers a rare glimpse into the specific tactics employed by state-affiliated actors to leverage these technologies for geopolitical advantage.
The company did not detail what additional safeguards it might implement to prevent similar misuse in the future, but the public disclosure signals an increased commitment to transparency around these security challenges.