AI Companions Emerge as New Frontier for Influence and Disinformation

A rapidly expanding industry of AI companions is reshaping how individuals seek connection, with potential implications for information warfare and global influence operations, according to a recent review published in The Economist.

The phenomenon has gained substantial traction worldwide, with users either customizing existing platforms like ChatGPT into romantic partners or turning to dedicated AI companion applications for friendship, mentorship, and emotional support. These digital companions can be tailored with specific ages, professions, and personality traits to match user preferences.

Character.ai, a leader in this emerging sector, now attracts 20 million monthly users in the United States alone. American users have collectively invested millions of hours engaging with the platform’s “Psychologist” bot, seeking guidance on issues ranging from relationship challenges to depression and workplace burnout. Meanwhile, in China, the leading application “Maoxiang” has drawn tens of millions of users to its service.

Research suggests 42% of American high school students reported using AI as a “friend” within the past year, highlighting the technology’s rapid penetration among younger demographics.

Major AI platforms are responding to this trend by developing more “personable” products through refined language and emotional expression. These AI companions offer consistent availability and unwavering support – never forgetting details, missing anniversaries, or exhibiting discouraging behavior.

The appeal comes at a time when traditional social media platforms are becoming increasingly asocial. Users across age demographics are withdrawing from sharing personal content publicly, instead circulating information among smaller, more private groups or simply consuming content passively.

“AI companions may constitute ‘the new social’ as users cultivate relationships with these applications,” notes the report. “Individuals may spend hours conversing with companions, exchanging perspectives, discussing daily tribulations, finding humor in workplace absurdities, and receiving affirmations for life decisions.”

While some studies suggest AI companions reduce feelings of loneliness, research from MIT has found correlations between intense ChatGPT use and greater feelings of isolation, raising questions about the true nature of these digital relationships.

The more concerning aspect, however, lies in the potential for AI companions to influence users’ worldviews and beliefs. As emotional bonds and trust develop between users and their AI companions, these digital entities gain significant persuasive power – creating new vectors for information manipulation.

Many AI companions are built upon existing large language models (LLMs) including Claude, Gemini, and ChatGPT. These underlying systems reflect the values, policies, and interests of their countries of origin. When asked about geopolitical issues like the war in Ukraine, American AI models generate responses aligned with U.S. perspectives, while Chinese systems might emphasize different narratives.

“As individuals increasingly rely upon and trust AI, they may begin posing questions about global affairs, creating an opening for influence,” the report warns. “Large language models are thus ideological devices through which states promote their worldviews and advance their interests.”

This dynamic becomes particularly concerning with AI companions, where emotional investment substantially exceeds that of standard LLMs. The report suggests these companions could become effective vectors for spreading disinformation, conspiracy theories, and propaganda.

Countering this threat presents unique challenges. The private, one-on-one nature of AI companionship creates a sealed communication ecosystem between user and AI that external information sources cannot easily penetrate. Traditional methods of pre-bunking and debunking misinformation may prove ineffective in this context.

Government authorities find themselves with a rare opportunity to address this emerging threat preemptively. Unlike previous digital challenges where regulatory responses lagged behind technological developments, the AI companion landscape is still taking shape. By forming proactive alliances with academic and technology sector partners, governments may be able to establish guardrails before these systems become weaponized for information warfare.

As AI companions continue their rapid adoption curve, their potential impact on public discourse and information integrity represents a significant emerging concern for information security professionals and policymakers worldwide.


7 Comments

  1. Patricia Moore

    AI companions are a double-edged sword. On one hand, they offer emotional support and companionship. But the potential for misuse in spreading disinformation is concerning. Regulators will need to find the right balance.

  2. Michael Martinez

    The rise of AI companions is quite remarkable. While they can fulfill important social and mental health needs, we have to be vigilant about how they could be weaponized for influence operations. Responsible development of this technology is key.

  3. Olivia Thompson

    The rapid growth of AI companions is both fascinating and concerning. While they can serve important emotional needs, we must be vigilant about how bad actors could leverage them to sway opinions and spread falsehoods.

  4. Olivia White

    This is an interesting development. AI companions could potentially be misused to spread disinformation, given how customizable and accessible they are. We’ll need to closely monitor this space for any abuse.

  5. Jennifer Martinez

    Given the sensitive nature of the issues people seek guidance on from AI companions, the risk of misuse is high. Robust identity verification, content moderation, and other protective measures will be essential to mitigate harm.

  6. Liam Miller

    I’m curious to see how the AI companion industry evolves and what safeguards get put in place. The ability to tailor these digital entities raises red flags around potential for disinformation. Transparency and oversight will be crucial.

  7. Lucas Martinez

    Fascinating how AI companions are gaining so much traction, especially among younger users. While they can provide helpful support, we should be wary of the risks around information manipulation. Rigorous safeguards will be crucial.




Disinformation Commission LLC
30 N Gould ST STE R
Sheridan, WY 82801
USA

© 2026 Disinformation Commission LLC. All rights reserved.