Artificial intelligence systems are beginning to create a troubling cycle of misinformation by referencing each other as sources, according to recent findings reported by the Guardian. In a concerning development, ChatGPT has been observed citing Grokipedia, a dubious AI-generated source, to support its claims.

The issue came to light when researchers discovered that OpenAI’s ChatGPT was citing Elon Musk’s Grokipedia as a reference in its responses to multiple queries. Unlike Wikipedia, which relies on human editors and contributors, Grokipedia’s content is curated by Grok, an AI system integrated into Musk’s social media platform X (formerly Twitter).

While Grok can generate plausible-looking information quickly, investigators found numerous instances in which ChatGPT cited false statements from Grokipedia as factual evidence. This circular referencing between AI systems raises significant concerns about information integrity in the rapidly expanding AI ecosystem.

“ChatGPT is relying on sources that are untrustworthy at best, poorly sourced, and deliberate disinformation at worst,” disinformation researcher Nina Jankowicz told the Guardian. She expressed particular concern about the perception of legitimacy this creates: “They might say, ‘oh, ChatGPT is citing it, these models are citing it, it must be a decent source, surely they’ve vetted it’ — and they might go there and look for news about Ukraine.”

The problem extends beyond these individual platforms. Research suggests that approximately 75% of data now being used to train large language models is synthetic — content created by other AI systems. This creates a potentially dangerous feedback loop where AI models learn from increasingly distorted information, leading to a gradual degradation in output quality.
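To make the feedback-loop concern concrete, here is a minimal, hypothetical sketch, not a description of how any production LLM is actually trained: a toy "model" that simply fits a Gaussian to its training data is repeatedly retrained on its own synthetic output. The drifting statistics illustrate the compounding-noise intuition behind the article's warning about synthetic data feeding back into training sets.

```python
import numpy as np

# Toy illustration only (real LLM training is far more complex): the "model"
# fits a mean and standard deviation to its training data, then emits
# synthetic samples from that fit. Each new generation trains exclusively on
# the previous generation's synthetic output.

rng = np.random.default_rng(seed=0)

# Generation 0: "human" data with a known mean and spread.
data = rng.normal(loc=0.0, scale=1.0, size=10_000)

for generation in range(1, 11):
    # "Train" the toy model: estimate the distribution of the current corpus.
    mu, sigma = data.mean(), data.std()

    # The next corpus is a finite sample of synthetic output, mimicking
    # training data increasingly dominated by AI-generated text.
    data = rng.normal(loc=mu, scale=sigma, size=1_000)

    print(f"generation {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")
```

Because each generation re-estimates its parameters from a finite synthetic sample, the estimation noise compounds and the fitted distribution gradually wanders away from the original data. The sketch only illustrates that compounding effect; it does not quantify how quickly real models would degrade.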

This deterioration comes at a particularly sensitive moment as AI technologies continue to integrate into critical sectors of society. Many companies are adopting AI solutions as cost-effective alternatives to human workers, raising concerns about both employment displacement and the quality of AI-generated work products.

Industry observers have noted that despite rapid advancements, AI output quality appears to be plateauing and generally remains inferior to human-produced content in many contexts. More troublingly, if AI systems train on flawed or misleading data, they may perpetuate falsehoods indefinitely, even after original sources have been corrected or removed.

The environmental impact of AI presents another challenge. These systems require enormous amounts of electricity to operate, often drawing from carbon-intensive energy sources like coal and natural gas. In response, technology giants including Microsoft, Meta, and Google are investing in nuclear power and other cleaner energy alternatives to mitigate these environmental costs.

Despite these challenges, AI continues to demonstrate value in specific applications when properly supervised. Human-AI collaboration has shown promise in fields such as weather prediction, disease forecasting, and advanced battery design. The key factor appears to be appropriate human oversight and verification.

However, recent studies have also revealed instances where AI tools actually reduced productivity rather than enhancing it. Such findings may dampen corporate enthusiasm for widespread AI implementation if the technology consistently fails to deliver measurable benefits.

As the AI industry continues to evolve, the Grokipedia citation issue highlights a critical need for better source verification mechanisms and greater transparency in how AI systems reference information. Without such safeguards, the problem of AI systems reinforcing each other’s errors could undermine public trust in artificial intelligence more broadly, potentially slowing adoption in areas where the technology offers genuine benefits.

The incident serves as a reminder that despite their sophisticated capabilities, today’s AI systems still require careful human guidance to ensure they serve as reliable information tools rather than amplifiers of misinformation.

5 Comments

  1. While AI can be a powerful tool for the mining and energy sectors, this report highlights the need for caution and critical evaluation of AI-generated content, especially when it comes to sensitive or high-stakes information.

  2. Interesting to see concerns raised about the reliability of AI models like ChatGPT. It highlights the importance of developing robust verification mechanisms to ensure AI-generated content is trustworthy and not just referencing other dubious AI sources.

    • Elizabeth N. Rodriguez

      Agreed, the circular referencing between AI systems is a concerning development that could lead to the spread of misinformation. Proper safeguards and oversight will be critical as these technologies continue to advance.

  3. Lucas H. Garcia

    The mining and commodities industry should pay close attention to this issue, as AI-generated content could significantly impact market sentiment and decision-making if it’s not properly vetted. Fact-checking and source validation will be key.

    • Absolutely. With the rise of AI-powered market analysis and trading, the potential for manipulation or the spread of false information is a real concern that needs to be addressed.
