In late October, users of X (formerly Twitter) reported unusual interactions with Grok, the AI chatbot developed by Elon Musk’s company xAI. The chatbot, designed to access real-time information and publish posts on the platform, made a controversial statement claiming that political commentator Tucker Carlson demonstrated “true heroism” compared to Ukrainian President Volodymyr Zelenskyy.

The bot asserted that while “Zelenskyy’s wartime resolve is admirable,” it is “bolstered by global sympathy and aid,” whereas “Tucker’s solitary defiance against power structures reveals deeper courage without such buffers.” Following user backlash, Grok eventually denied having made the comparison, calling it a “misattribution.”

Around the same time, Musk launched Grokipedia, an AI-powered encyclopedia positioned as a rival to Wikipedia, which Musk has described as “left-biased.” Shortly after its debut, The Guardian and other outlets reported examples of Russian propaganda appearing on the platform, including descriptions of Russia’s invasion of Ukraine as being “aimed at demilitarizing and denazifying Ukraine.”

These incidents highlight growing concerns about large language models (LLMs) potentially spreading propaganda and misinformation. Carl Miller, co-founder of the Centre for the Analysis of Social Media at UK think tank Demos, emphasizes the importance of digital literacy: “The simple knowledge that LLMs can get stuff wrong and that also they can be manipulated — is obviously important. It’s the same way that we can’t think that the top of a Google search means it’s right.”

Security officials are increasingly alarmed by these developments. Last week, Mike Burgess, director-general of the Australian Security Intelligence Organisation (ASIO), warned about AI’s potential “to take online radicalisation and disinformation to entirely new levels.” He revealed that ASIO had “recently uncovered links between pro-Russian influencers in Australia and an offshore media organisation that almost certainly receives direction from Russian intelligence.”

According to Burgess, Russian operatives have been inflaming community tensions across Europe through false news, and Australia is “not immune” to such tactics. “Deliberately hiding their connection to Moscow — and the likely instruction from Moscow — the propagandists try to hijack and inflame legitimate debate,” he said, adding that they use social media to spread “vitriolic, polarising commentary” on issues like immigration and pro-Palestinian demonstrations.

The Reset Tech think tank published a paper in February highlighting risks associated with AI and LLMs in the information ecosystem. The report warned that “generative AI is not a research tool; it is a probability machine,” noting that its outputs “have nothing to do with the truth” and that, as AI increasingly trains on synthetic content, “the risks of incoherence, bias, and ultimately model collapse, only grow as AI effectively eats itself.”

Dr. Lin Tian, a research fellow specializing in disinformation detection at the University of Technology Sydney, explains that LLMs generate responses based on probability rather than factual accuracy: “When they generate answers, they will just grab the highest probability tokens and put them into the sentence.” This mechanism contributes to AI “hallucinations” — factually incorrect outputs.
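
To make that mechanism concrete, the sketch below (a toy illustration in Python, with an invented vocabulary and made-up scores rather than any real model’s API) shows greedy next-token selection: the most probable candidate is emitted whether or not it is factually correct.

```python
import math

# Toy illustration of next-token selection, not a real language model.
# The candidate words and scores below are invented for demonstration.

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def pick_next_token(candidates, logits):
    """Greedy decoding: return the highest-probability token,
    with no regard for whether it is factually accurate."""
    probs = softmax(logits)
    best = max(range(len(candidates)), key=lambda i: probs[i])
    return candidates[best], probs[best]

# Hypothetical continuations for the prompt "The capital of Australia is ..."
candidates = ["Sydney", "Canberra", "Melbourne"]
logits = [2.1, 1.9, 0.3]  # invented scores a model might assign

token, prob = pick_next_token(candidates, logits)
print(f"Chosen token: {token} (p={prob:.2f})")
# If misleading text dominates the training data, the wrong answer can carry
# the highest probability and will be chosen anyway.
```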

Investigations by CheckFirst and the Atlantic Council’s Digital Forensic Research Lab exposed Russia’s Pravda network, which has generated over six million false articles across multiple languages. Guillaume Kuster, CEO of CheckFirst, describes the network as “a laundering machine for Russian narratives and propaganda” that republishes content from sanctioned media organizations and propaganda channels.

The investigation found nearly 2,000 links to Pravda websites on Wikipedia. While researchers cannot conclude the network was specifically designed for “AI grooming,” they have demonstrated that popular chatbots like Copilot, Gemini, and ChatGPT have reproduced content originating from Pravda sources.

NewsGuard analyst Isis Blachez explains that AI chatbots can inadvertently amplify false narratives when their training data includes a large volume of articles repeating the same misinformation. “It’s kind of playing on search engine optimisation techniques. That’s how ultimately these types of narratives and claims end up in the responses of AI chatbots,” she says.
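
The amplification Blachez describes can be pictured with a deliberately crude sketch (Python, using an entirely fictional corpus): a “model” that simply completes a phrase with whatever continuation it has seen most often will repeat a mass-published falsehood in preference to a less frequent accurate account.

```python
from collections import Counter

# Entirely fictional corpus: one fabricated claim published at volume,
# one accurate account published far less often.
training_snippets = (
    ["the bridge collapse was caused by sabotage"] * 40
    + ["the bridge collapse was caused by corrosion"] * 5
)

def complete(prefix, corpus):
    """Return the continuation of `prefix` seen most often in the corpus."""
    continuations = Counter(
        text[len(prefix):].strip()
        for text in corpus
        if text.startswith(prefix)
    )
    return continuations.most_common(1)[0][0]

print(complete("the bridge collapse was caused by", training_snippets))
# Prints "sabotage": sheer volume, not accuracy, decides what gets repeated.
```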

With growing reliance on AI chatbots for information and even companionship, Miller warns they could become the “future of information warfare.” He notes that autocratic regimes focus on “influence, not lies,” targeting deep motivations around “masculinity and femininity, and patriotism and belonging.”

Australian Senator Matt O’Sullivan has criticized Australia for being “missing in action” on AI regulation while other jurisdictions move forward with legal frameworks. The European Union passed the world’s first comprehensive AI act in March 2024, while the UK has launched its own strategy including an AI safety institute.

Olivia Shen, director of the Strategic Technologies Program at the University of Sydney, acknowledges the tension between innovation and safety in AI regulation. She points to Taiwan’s multi-layered approach as a model, combining strong laws against deepfakes and foreign interference with societal resilience-building measures.

Australia’s Department of Home Affairs stated it is working on new initiatives to address gaps in existing laws and developing the country’s first National Media Literacy Strategy to equip Australians with skills needed for the digital world.


15 Comments

  1. Liam T. Johnson

    The comparison between Zelenskyy and Tucker Carlson seems like an odd and potentially misleading narrative. I wonder what the motivations are behind pushing that particular framing.

    • Robert Johnson

      Yes, that comparison seems quite questionable and disconnected from the realities on the ground in Ukraine. It’s important to scrutinize such claims and understand the underlying agenda.

  2. While AI can be a powerful tool, it’s concerning to see it being used to spread misinformation and distort political narratives. This highlights the importance of digital literacy and critical thinking when consuming online content.

  3. Interesting use of AI for disinformation. While AI can be a powerful tool, it’s concerning to see it leveraged for political propaganda. I wonder what safeguards are in place to prevent this kind of misuse.

    • You raise a good point. AI-powered platforms like Grokipedia need robust fact-checking and oversight to prevent the spread of false or misleading information, especially around sensitive geopolitical issues.

  4. Elijah Hernandez

    The reported examples of Russian propaganda appearing on Grokipedia are very worrying. AI-powered platforms must have robust fact-checking mechanisms and content moderation to prevent the dissemination of false or misleading information.

    • Absolutely. The credibility and reliability of these AI-driven platforms will be crucial in determining their long-term viability and public trust.

  5. William Hernandez

    The use of AI to spread propaganda is a worrying development. I hope policymakers and tech leaders can work together to find effective ways to mitigate the risks and ensure these powerful technologies are not abused.

    • Agreed. Addressing this challenge will require a multifaceted approach, including improved AI governance, enhanced digital literacy, and stronger collaboration between the public and private sectors.

  6. Patricia Lopez

    This is a concerning trend that highlights the potential for misuse of AI technology. It’s critical that we develop robust safeguards and ethical frameworks to prevent the weaponization of AI for disinformation campaigns.

  7. It’s troubling to see how Russia is leveraging AI to amplify its disinformation efforts. This underscores the need for greater regulation and oversight of emerging technologies to protect against malicious use.

  8. Leveraging AI for disinformation is a troubling trend that could have far-reaching consequences. I hope policymakers and tech companies work quickly to address this issue and establish robust guardrails.

    • Agreed. Proactive measures to ensure the responsible development and use of AI are essential to prevent it from being weaponized for malicious purposes like propaganda and manipulation.

  9. I’m curious to learn more about the specific tactics Russia is employing to propagate disinformation via AI. It’s a concerning trend that needs to be closely monitored and addressed.

    • William Taylor

      Agreed. Transparency and accountability around the development and deployment of these AI systems will be crucial to mitigate their potential for misuse.
