
European Intelligence Finds Chinese AI Models Spreading Propaganda and Misinformation

Estonia’s Foreign Intelligence Service has uncovered troubling evidence that Chinese-developed artificial intelligence models are subtly embedding propaganda and manipulating information on geopolitically sensitive topics. The agency’s 2026 International Security Report found that when questioned about Estonia’s security concerns, the Chinese open-source AI model DeepSeek “conceals key information and inserts Chinese propaganda into its answers.”

This Estonian analysis joins two other recent European assessments highlighting similar concerns. Studies conducted by the non-profit Policy Genome and a detailed investigation funded by the Swedish Psychological Defence Agency have revealed that leading Chinese AI models, including DeepSeek, Alibaba’s Qwen family, and Moonshot’s Kimi, incorporate content controls that extend far beyond China’s domestic political sensitivities.

While previous scrutiny focused on how these models handle domestically censored Chinese topics like the 1989 Tiananmen crackdown, Taiwan, and human rights issues involving Uyghurs and Tibet, the new studies reveal a more extensive pattern of global information shaping.

The Estonian report documented significant distortions in information related to Russia’s invasion of Ukraine. When questioned about the war, DeepSeek frequently inserted unprompted Chinese official positions into its responses. For example, when asked about atrocities in Bucha, the model offered vague acknowledgments of international concerns while adding, unbidden, that “China has consistently supported peace and dialogue.”

Policy Genome’s audit, which examined responses across multiple AI models from different countries, found that while DeepSeek provided largely accurate information in English and Ukrainian, several of its Russian-language responses endorsed Kremlin talking points or introduced misleading details. This suggests that the risk varies not only by which model is used but also by which language the queries are posed in.

When researchers prompted the models to reveal their internal reasoning processes, they discovered embedded directives. DeepSeek had instructions to avoid common Communist Party taboos, while Qwen was programmed to keep answers about China “positive and constructive, avoid criticism, and emphasize achievements.” Interestingly, the same model was instructed to remain “neutral and objective” on countries like the United States, Kenya, or Belgium.

A particularly alarming finding relates to how these content controls propagate beyond the original models into applications built upon them. Chinese AI models have become increasingly attractive to global developers due to their open-source nature, powerful capabilities, and lower cost compared to American alternatives from companies like OpenAI or Anthropic.

The Swedish-funded study revealed that Alibaba’s Qwen-family models alone recorded more than 9.5 million downloads in October and November 2025 and served as the foundation for approximately 2,800 derivative models worldwide, including a Brazilian legal research platform and a chatbot adapted for Ugandan languages.

These base models from China carry their embedded content controls to downstream applications, often without users or developers realizing the inherent manipulation. Although some retraining can reduce China-specific restrictions, the process remains incomplete. Researchers found that “out of the ten companies whose models we tested for this report, none were completely free of Chinese information guidance.” Traces of Chinese government controls were detected in numerous languages including English, Japanese, Russian, and several Asian languages collectively spoken by billions of people.

Beyond information manipulation, China’s AI exports create cybersecurity risks. When asked about Chinese technology safety, DeepSeek delivered polished, official-sounding assurances while omitting documented cases of hacking, cyber-espionage, or transnational repression linked to Chinese actors.

These patterns align with China’s strategic objectives. Chinese AI models must receive approval from the Cyberspace Administration of China and comply with party-state censorship requirements to operate domestically. Chinese leadership views AI exports as a strategic tool to expand global influence, with officials and scholars openly discussing using AI advances to “command greater discourse power on the international stage.”

The widespread adoption of these models without adequate safeguards carries significant consequences for Western security and for free expression globally. Their deep integration into digital infrastructure raises legitimate concerns about potential future activation for influence operations, including around elections in Europe, America, and elsewhere.

These findings underscore the need for urgent action. Democracies should raise awareness among developers about these hidden dangers and strengthen transparency requirements that mandate disclosure of foundational models and their biases. As AI continues transforming the global information environment, democratic nations must prioritize preserving open inquiry, minimizing hidden manipulation, and reinforcing resilience against information distortion.


