Chinese AI Models Spread Propaganda and Distortion on Ukraine and Security Issues

Recent European assessments have revealed that Chinese-developed artificial intelligence models are embedding state propaganda and distortions on geopolitical issues, particularly regarding Russia’s invasion of Ukraine.

The Estonian Foreign Intelligence Service’s 2026 International Security Report discovered that the Chinese open-source AI model DeepSeek “conceals key information and inserts Chinese propaganda” when addressing Estonia’s security concerns. This finding forms part of a broader pattern documented in three separate European studies of Chinese AI systems.

Additional research from the non-profit Policy Genome and a comprehensive study funded by the Swedish Psychological Defence Agency examined leading Chinese models including DeepSeek, Alibaba’s Qwen family, and Moonshot’s Kimi. Their investigations uncovered content controls that extend far beyond China’s domestic political sensitivities.

While previous scrutiny focused on how these models censor domestically taboo topics like the 1989 Tiananmen Square crackdown, Taiwan independence, and human rights abuses in Xinjiang, Tibet, and Hong Kong, the new research exposes a more extensive pattern of information manipulation.

Two reports specifically document distortions in information related to Russia’s war in Ukraine. The Estonian research found that DeepSeek noticeably skews responses to queries about the conflict, including unprompted insertions of official Chinese positions. When questioned about atrocities in Bucha, for example, DeepSeek offered vague acknowledgments while volunteering that “China has consistently supported peace and dialogue.”

The Policy Genome audit evaluated six AI models from different countries on seven Ukraine war-related questions. It discovered that while DeepSeek’s English and Ukrainian responses remained largely accurate, several Russian-language answers endorsed Kremlin talking points or introduced misleading details. The researchers concluded that “the risk is not just ‘which model you use,’ but also which language you ask in.”

When prompted to reveal their internal reasoning processes, the Chinese models disclosed built-in directives. DeepSeek showed instructions to avoid common Communist Party taboos, while Qwen was directed to keep answers about China “positive and constructive, avoid criticism, and emphasize achievements.” Interestingly, the same model was instructed to remain “neutral and objective” on the United States, Kenya, and Belgium.

A particularly concerning finding is how these content controls propagate beyond the original models into applications built on them. Chinese AI models are increasingly attractive to global developers because they’re open-source, powerful, and less expensive than proprietary Western alternatives from companies like OpenAI or Anthropic.

This cost advantage is driving rapid adoption worldwide. According to the Swedish study, Alibaba’s Qwen models alone recorded over 9.5 million downloads from October to November 2025 and served as the foundation for approximately 2,800 derivative models, including a Brazilian legal research platform and a Ugandan language chatbot.

These base models carry their embedded content controls into downstream applications, often without users or developers realizing the manipulation is there. Despite some retraining efforts to reduce China-specific restrictions, researchers found the process incomplete. “None were completely free of Chinese information guidance,” noted the Swedish study authors, who traced these controls across languages spoken by billions of people, including English, Chinese, Japanese, Russian, and several Southeast Asian languages.

Beyond propaganda concerns, China’s AI exports create cybersecurity vulnerabilities. When asked about Chinese technology safety, DeepSeek delivered polished assurances while omitting documented cases of hacking, cyber-espionage, and transnational repression linked to Chinese actors.

The Swedish study also noted that some Chinese models proved susceptible to “jailbreaking” techniques that bypass safeguards, potentially allowing users to extract instructions for creating weapons or controlled substances.

These patterns reflect China’s regulatory environment, where AI models require approval from the country’s cyberspace administration and must comply with state censorship requirements. Chinese leadership views AI exports as a strategic tool to expand global influence, with officials openly discussing using AI advances to “command greater discourse power on the international stage.”

The research underscores an urgent need for democratic nations to address these challenges through transparency requirements, developer education, and strengthened resilience against hidden biases in AI systems. As artificial intelligence continues transforming the global information environment, the political dimensions that China treats as strategic priorities require equally strategic democratic responses.



© 2026 Disinformation Commission LLC. All rights reserved.