China, Russia and scam networks have incorporated ChatGPT into their disinformation and fraud operations, according to a new threat report released by OpenAI. The company revealed it has banned numerous accounts linked to state-affiliated actors and criminal organizations that misused its AI technology.
OpenAI’s investigation identified several coordinated campaigns where ChatGPT was integrated into larger operational workflows rather than being used as a standalone tool. The report emphasizes that these actors deploy AI as just one component in sophisticated influence and scam operations.
In one notable case, OpenAI traced accounts to Chinese law enforcement engaged in what the company termed “cyber special operations.” These users attempted to leverage ChatGPT for planning influence campaigns, mass-reporting political dissidents, and creating forged materials. When the AI refused certain requests due to safety guardrails, operators continued their efforts through alternative means, demonstrating their persistence and adaptability.
“These groups don’t rely exclusively on AI tools,” said a security expert familiar with the report. “They’re embedding them strategically within existing operations that include manual processes and other digital platforms.”
The investigation also uncovered a Cambodia-based romance scam network targeting young Indonesian men. The operation presented itself as a dating agency but was actually designed to facilitate financial fraud. Scammers employed a sophisticated approach that combined manual prompting with automated chatbots to maintain conversations with victims, gradually building trust before executing their schemes. OpenAI responded by removing all accounts associated with this operation.
Russia’s influence operations were also highlighted in the report. Accounts connected to Rybar, a pro-Kremlin media network known for its military analysis, used ChatGPT to draft and translate content that was subsequently distributed across multiple social media platforms. OpenAI noted that the impact of these campaigns depended more on the network’s reach and coordination capabilities than on the AI-generated content itself.
The report paints a concerning picture of how state and criminal actors across China, Russia, and parts of Southeast Asia are incorporating AI tools into their existing arsenals. These operations typically involve a combination of fake profiles, paid advertising, and forged documents, with AI serving as an efficiency multiplier rather than a replacement for traditional tactics.
“What we’re seeing is the evolution of influence operations in the AI era,” said a cybersecurity researcher who requested anonymity. “These actors are quick to adopt new technologies while maintaining their fundamental playbooks.”
The use of AI in disinformation campaigns comes amid growing global concern about the potential for misuse of generative AI technologies. Several countries have begun developing regulatory frameworks to address these risks, while tech companies race to implement safeguards against abuse.
OpenAI emphasized the need for cross-industry vigilance and cooperation to combat these threats. The company advocated for a holistic approach to security that examines behavioral patterns across platforms rather than focusing solely on content. This approach recognizes that sophisticated actors typically operate across multiple services and adapt quickly when blocked on any single platform.
“Identifying and disrupting these operations requires looking beyond individual pieces of content to understand the broader patterns and infrastructure supporting them,” the report stated.
The findings underscore the ongoing cat-and-mouse game between AI developers implementing safety measures and determined actors seeking to circumvent them. As AI capabilities continue to advance, security experts anticipate that both defensive and offensive tactics will grow increasingly sophisticated.