AI-Driven Disinformation Campaigns Transform Russia-Ukraine War Narrative on Twitter

In a conflict marked not only by military confrontation but also by intense information warfare, the Russia-Ukraine war has witnessed an unprecedented deployment of artificial intelligence to manipulate public opinion through social media, particularly on Twitter (now X).

Research from the University of Adelaide’s School of Mathematical Sciences has uncovered the alarming scale of this phenomenon. After analyzing over 5.2 million Twitter posts containing pro-Russian and pro-Ukrainian hashtags during the critical first weeks of the conflict, researchers found that AI bots accounted for between 60 and 80 percent of the tweets carrying these partisan hashtags.

“The digital battlefield has become as crucial as the physical one,” explains a cybersecurity expert familiar with the research. “What we’re seeing is a sophisticated application of AI tools to shape global perceptions of the conflict.”

These AI-powered disinformation campaigns employ multiple tactics to influence public opinion. Automated bots generate content at rapid speeds while attempting to mimic human conversation patterns. These systems initiate social interactions with real users, creating an impression of authentic dialogue while disseminating false narratives.

The Washington Post highlighted the scale of this operation, noting that researchers examined 1.3 million accounts that regularly tweeted about Russian politics and found that 45% (approximately 585,000) were automated bots. Another study identified about 1,000 AI-driven bot accounts posing as Americans, created specifically to spread pro-Russian propaganda.

Perhaps most concerning is the deployment of deepfake technology. In March 2022, a sophisticated deepfake video showing Ukrainian President Volodymyr Zelenskyy supposedly asking his troops to surrender received thousands of retweets before being identified as fraudulent. According to the Digital Forensic Research Lab, deepfakes and other misinformation about the conflict reached more than 70 million Twitter users during just the first few weeks of the Russian invasion.

Russian operatives have developed increasingly sophisticated methods of targeting specific audiences. Their AI systems analyze user data to identify which demographic and political groups are most receptive to particular messages, then craft customized propaganda aimed at those segments. This precision targeting makes false information more convincing and harder to counter.

“These campaigns don’t just spread lies—they strengthen pre-existing beliefs and create self-contained information bubbles that eliminate dissenting opinions,” notes a researcher specializing in social media manipulation. “The result is increased societal polarization and greater challenges for diplomatic solutions.”

Hashtag manipulation represents another key tactic. Pro-Russian accounts have used tags like “#StopUkrainianAggression” to reframe Russia as defending itself rather than as the aggressor. AI bots systematically promote these hashtags, creating artificial trends that make false narratives appear widely supported.

The Pravda network reportedly released approximately 3.5 million AI-generated articles in 2024 with the dual goals of confusing AI chatbot responses and spreading misinformation. These articles often included fabricated claims about Ukrainian military actions against civilians, attempting to legitimize Russian military operations under humanitarian pretenses.

Although Twitter is officially blocked in Russia, the platform continues to host numerous accounts backed by AI bots strategically deployed to counter Ukrainian narratives. These bots execute a three-pronged approach: posting pro-Russia content linked to trending hashtags, following strategic users to amplify certain messaging, and creating the impression of widespread support for Russian activities.

The effectiveness of these campaigns is troubling. Studies show that users who encounter the same message from different accounts as few as three times are significantly more likely to believe it, regardless of its accuracy.

As this digital front in the Russia-Ukraine conflict continues to evolve, the case highlights how social media platforms can be weaponized in modern warfare. From automated false narratives to orchestrated disinformation operations, Twitter has become a crucial battleground for controlling the conflict’s narrative—a sobering reminder of how vulnerable public discourse has become to AI-powered manipulation.


10 Comments

  1. Olivia Martinez

    The findings from this study are quite alarming. The sophisticated use of AI to manipulate public discourse around the Russia-Ukraine conflict is a chilling development that demands serious attention from policymakers and tech leaders.

    • I agree. This is a complex issue that requires a multifaceted response, from strengthening digital literacy to developing robust regulatory frameworks. Collaborative efforts between academia, industry, and government will be essential.

  2. James Jackson

    The use of AI to manipulate discourse around the Russia-Ukraine war is a stark reminder of the potential for this technology to be misused. We need robust safeguards and oversight to prevent the weaponization of AI against the public.

    • Absolutely. Developing responsible AI frameworks and norms of use must be a top priority to protect democratic discourse and ensure these powerful tools aren’t exploited for malicious ends.

  3. William Y. Martinez

    Interesting research on the use of AI for disinformation campaigns during the Russia-Ukraine conflict. It’s concerning how advanced these tactics have become in manipulating public discourse on social media.

    • Isabella K. Jones

      You’re right, the scale and sophistication of these AI-driven campaigns is quite alarming. It underscores the need for greater transparency and accountability around the use of AI in online spaces.

  4. This is a troubling trend that speaks to the broader challenges of combating disinformation in the digital age. Rigorous academic research like this is crucial for exposing these tactics and informing effective responses.

    • I agree. Fact-based, impartial analysis is key to countering the spread of misinformation, which can have serious real-world consequences, especially in the context of geopolitical conflicts.

  5. This research sheds light on a deeply concerning trend. The scale of these AI-driven disinformation campaigns is staggering and underscores the urgent need for greater transparency and accountability in social media platforms.

    • Noah X. Brown

      You make a good point. Platforms like Twitter (X) need to be held accountable for the spread of coordinated disinformation on their networks, and proactive measures to detect and mitigate such campaigns should be a top priority.

A professional organisation dedicated to combating disinformation through cutting-edge research, advanced monitoring tools, and coordinated response strategies.

Company

Disinformation Commission LLC
30 N Gould ST STE R
Sheridan, WY 82801
USA

© 2026 Disinformation Commission LLC. All rights reserved.