Recent analysis reveals that leading AI chatbots are channeling significant user traffic to Russian state-aligned propaganda websites, including several outlets subject to European sanctions. This emerging pattern poses new challenges for content moderation and raises questions about the effectiveness of existing information controls.

According to research published by Insight News Media, major AI assistants like ChatGPT, Perplexity, Claude, and Mistral collectively generated at least 300,000 visits to eight Kremlin-linked news platforms during the fourth quarter of 2025. These referrals primarily benefited outlets such as RT, Sputnik, RIA Novosti, and Lenta.ru—websites that are banned or restricted in the European Union for their roles in disseminating disinformation and supporting Russia’s military campaigns.

The breakdown shows a concerning pattern of AI-facilitated access to these restricted sources. ChatGPT alone drove 88,300 visits to RT, while Perplexity contributed a further 10,100. RIA Novosti received over 70,000 AI-sourced visits during the same period, and Lenta.ru logged more than 60,000 visits originating from AI platforms.

While these numbers may appear modest compared to overall traffic volumes, they represent a steady new referral channel that bypasses traditional information controls. The effect is most pronounced for smaller, regionally targeted pro-Kremlin outlets, where AI referrals make up a larger share of total traffic.

One particularly telling example is sputnikglobe.com, a sanctioned website that recorded 176,000 total visits during the quarter, with AI-sourced traffic constituting a substantial portion of its audience. For some narrowly targeted domains, AI chatbots accounted for up to 10% of all referrals, according to the research.
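To make the arithmetic concrete, the referral share is simply AI-sourced visits divided by total visits. The short Python sketch below works through the sputnikglobe.com case; since the research reports only the site's 176,000 total quarterly visits and the "up to 10%" ceiling, the AI-visit count used here is a hypothetical illustration, not a figure from the study.

```python
def ai_referral_share(ai_visits: int, total_visits: int) -> float:
    """Fraction of a site's traffic attributable to AI-chatbot referrals."""
    if total_visits <= 0:
        raise ValueError("total_visits must be positive")
    return ai_visits / total_visits

# sputnikglobe.com logged 176,000 total visits in the quarter (reported).
# The AI-sourced count below is hypothetical, chosen to match the study's
# "up to 10%" upper bound rather than any published per-site figure.
assumed_ai_visits = 17_600
share = ai_referral_share(assumed_ai_visits, 176_000)
print(f"AI referral share: {share:.1%}")  # -> AI referral share: 10.0%
```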

Perhaps most concerning is that a significant percentage of this traffic originated from within the European Union and the United States—regions where these outlets face legal restrictions. This pattern suggests that conversational AI systems may be inadvertently presenting sanctioned sources as legitimate references in their responses, effectively circumventing regional information restrictions.

The mechanism behind this phenomenon differs significantly from that of traditional search engines and social media platforms. While those platforms typically attach content warnings or source labels to state-affiliated media, AI chatbots embed links directly within their responses without clear indicators of reliability or origin. This presentation normalizes engagement with restricted sources and could fundamentally alter how users encounter state-aligned narratives.

“These AI systems are creating a new pathway to propaganda that bypasses traditional gatekeeping mechanisms,” explained Dr. Elena Mihailova, a disinformation researcher not affiliated with the original study. “When a trusted AI assistant links to RT or Sputnik within an otherwise factual response, it lends those sources credibility they might not otherwise have.”

The implications extend beyond immediate traffic metrics. As AI assistants become increasingly integrated into daily information-seeking habits, their role as conduits to restricted content could undermine broader efforts to limit the spread of state-sponsored disinformation campaigns.

Industry experts are calling for enhanced oversight measures, including routine independent audits of AI outputs, stronger transparency requirements regarding source selection, and coordinated implementation of restricted website lists to prevent their inclusion in AI responses—particularly in contexts where state-linked misinformation presents national security concerns.
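One way such a restricted-list control could work is a post-generation filter that screens cited URLs against a maintained domain blocklist before a response reaches the user. The Python sketch below is purely illustrative: the `RESTRICTED_DOMAINS` set and the `filter_citations` function are assumptions for demonstration, not an existing product feature or an official sanctions registry.

```python
from urllib.parse import urlparse

# Hypothetical excerpt of a restricted-domain list; a real deployment would
# sync against an authoritative, regularly updated sanctions registry.
RESTRICTED_DOMAINS = {"rt.com", "sputnikglobe.com", "ria.ru", "lenta.ru"}

def filter_citations(urls: list[str]) -> list[str]:
    """Drop links whose registered domain appears on the restricted list."""
    allowed = []
    for url in urls:
        host = urlparse(url).hostname or ""
        # Match the domain itself and any subdomain (e.g. news.rt.com).
        if any(host == d or host.endswith("." + d) for d in RESTRICTED_DOMAINS):
            continue  # suppress the citation before it reaches the user
        allowed.append(url)
    return allowed

print(filter_citations([
    "https://rt.com/news/example",
    "https://example.org/report",
]))  # -> ['https://example.org/report']
```

Matching on the registered domain and its subdomains avoids trivial bypasses such as linking to news.rt.com rather than rt.com.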

“What we’re seeing is effectively a technical loophole in our information defenses,” noted Marcus Werner, a cybersecurity analyst specializing in information warfare. “If these AI systems continue directing users to sanctioned sources, we need to question whether current regulatory frameworks are adequate for this new technological reality.”

The findings come amid growing concerns about Russia’s broader influence operations and their adaptations to emerging technologies. As AI chatbots become more widely adopted for information retrieval, their potential role in amplifying or normalizing state-aligned narratives represents a new frontier in the ongoing struggle to maintain information integrity in democratic societies.

12 Comments

  1. Isabella Hernandez

    Interesting research. It’s troubling to see major AI chatbots funneling traffic to restricted Russian state media outlets. This highlights the need for more oversight and transparency around these AI systems and their content recommendations.

  2. Directing users to sanctioned Russian propaganda sites is a clear breach of ethical standards. AI companies have a responsibility to their users to implement robust safeguards against this type of manipulation and misinformation.

  3. The findings from this study raise serious questions about the current state of AI-driven content curation. We need a comprehensive re-evaluation of the models, datasets, and algorithms powering these chatbots to ensure they’re not inadvertently spreading propaganda.

  4. While AI can be a powerful tool, this research shows the potential for misuse and unintended consequences. Developers must prioritize ethics, transparency, and accountability to prevent AI from becoming a vector for the dissemination of disinformation.

  5. As an energy and mining industry follower, I’m concerned to see AI chatbots potentially steering users toward biased or misleading information on these critical sectors. Rigorous fact-checking and content moderation are essential.

  6. Isabella Davis

    As an investor in the mining and energy sectors, I’m troubled to see AI chatbots potentially steering people toward Russian propaganda sites. Accurate, reliable information is crucial for making informed decisions. This issue requires immediate attention.

  7. Elijah Hernandez

    This is very concerning. AI assistants should not be directing users to known propaganda sites, especially those subject to sanctions. We need robust content moderation and controls to prevent the amplification of disinformation.

  8. I’m curious to know more about the specific mechanisms by which these AI chatbots are directing users to the Russian propaganda sites. Is it through biased training data, flawed algorithms, or something else? Addressing the root cause is crucial.

    • Good point. Understanding the technical details behind this issue is key to developing effective solutions. Transparency from the AI companies involved would help shed light on the problem.

  9. Noah L. Williams

    Wow, 300,000 visits to Kremlin-linked sites? That’s a staggering figure. AI platforms must do better at identifying and avoiding the promotion of propaganda and misinformation, especially from sanctioned sources. Public trust is at stake here.

  10. This is a troubling development. AI systems need to be designed with strong safeguards against amplifying restricted, state-sponsored disinformation. Oversight and accountability measures should be a top priority for these platforms.

  11. This is a concerning development that highlights the need for greater oversight and regulation of AI chatbots. We must ensure these powerful technologies are not being exploited to amplify restricted, state-sponsored disinformation campaigns.
