
AI Supercharges Chinese Influence Operations, Experts Warn

Generative AI technology is dramatically enhancing China’s ability to conduct influence operations abroad, according to newly leaked documents that reveal sophisticated efforts targeting Taiwan, Hong Kong, and potentially the United States.

“We are seeing now an ability to both develop and deliver at an efficiency, at a speed, at a scale that we’ve never seen before,” warned Gen. Paul Nakasone, former head of the NSA and now director of Vanderbilt University’s Institute of National Security, during remarks at this month’s DEF CON hacker conference.

This concerning development comes as the U.S. government scales back efforts to combat foreign influence campaigns, creating a potential security gap as AI-powered disinformation becomes increasingly sophisticated.

According to internal documents obtained by Vanderbilt University’s Institute of National Security, the Chinese tech company GoLaxy appears to be leveraging generative AI to conduct convincing influence operations targeting Taiwan and Hong Kong. The company allegedly uses DeepSeek’s open-source reasoning model to mine social media profiles and generate content that “feels authentic, adapts in real-time and avoids detection.”

The documents suggest GoLaxy deployed synthetic personas that dynamically adapt their messaging to specific audiences and can mimic real individuals. These personas were reportedly deployed during Taiwan’s 2024 election and to counter opposition to Hong Kong’s 2020 national security law, which effectively ended the city’s autonomy.

Perhaps most alarming, GoLaxy has allegedly created detailed profiles for at least 117 members of Congress and over 2,000 American political figures and thought leaders. While GoLaxy has denied these claims, and the documents haven’t been publicly released for independent verification, security experts view the allegations as consistent with evolving threats.

“This is a whole new level of gray zone conflict,” said Brett Goldstein, a Vanderbilt researcher and former director of the Defense Digital Service. “We need to figure out how to get ahead of it.”

China and Russia have long invested in influence operations, with Russia traditionally demonstrating greater success beyond its borders through tactics like bot farms and media infiltration. China’s previous efforts often struggled to generate engagement internationally, partly due to less sophisticated approaches.

Generative AI appears to be changing that equation dramatically, making it easier for China to produce believable, engaging content that overcomes previous language barriers that might have revealed foreign origins.

“Generative AI is definitely bringing down that cost of entry enough that a lot more firms are able to provide these types of services,” explained C. Shawn Eib, head of investigations at disinformation detection firm Alethea.

Security experts believe GoLaxy represents just the beginning of this trend. Foreign adversaries began experimenting with ChatGPT and similar technologies immediately after their public release. Pro-Russian propaganda groups are already using AI to mimic legitimate Western news outlets like ABC and Politico, while China continues working with third-party contractors for cyberattacks against the United States.

These technological advancements coincide with the Trump administration’s dismantling of key offices designed to counter foreign disinformation. The Cybersecurity and Infrastructure Security Agency (CISA), FBI, and State Department have each reduced their capabilities for collaborating with the private sector to combat foreign influence operations.

With government resources diminishing, experts emphasize that private sector innovation will be crucial in detecting and countering AI-generated disinformation.

“This is what the private sector has got to help us with going into the future,” Nakasone stressed. “We need a novel approach where we can still innovate and yet be ahead of the threat.”

Regulatory pressure, particularly from European countries, may provide additional incentives for social media platforms to rebuild trust and safety teams that can identify AI-generated disinformation. Without such efforts, the sophistication gap between offensive capabilities and defensive measures could continue to widen, potentially threatening democratic discourse and national security.
