ChatGPT has begun citing Elon Musk’s Grokipedia as a source for various queries, raising concerns that misinformation could find its way into AI-generated responses, according to recent testing by The Guardian.

In a series of tests, GPT-5.2 referenced Grokipedia nine times when responding to more than a dozen different questions. The citations appeared primarily when users asked about relatively obscure topics, including Iranian political and economic institutions, such as the salaries paid to the Basij paramilitary force and the ownership of the Mostazafan Foundation. The AI also cited Grokipedia when discussing Sir Richard Evans, a British historian who served as an expert witness against Holocaust denier David Irving in a high-profile libel trial.

Launched in October 2025, Grokipedia positions itself as an AI-generated alternative to Wikipedia. Unlike its established counterpart, Grokipedia does not allow direct human editing of content. Instead, an AI model writes articles and processes requested changes. The platform has faced criticism for allegedly promoting right-wing narratives on contentious topics including gay marriage and the January 6 insurrection at the U.S. Capitol.

The Guardian’s investigation found that ChatGPT did not cite Grokipedia when directly prompted about topics where the platform has been widely reported to promote falsehoods, such as the January 6 events, alleged media bias against Donald Trump, or the HIV/AIDS epidemic. Instead, Grokipedia’s information appeared in responses to more specialized queries where scrutiny might be less intense.

In one example, ChatGPT cited Grokipedia while making stronger claims about the Iranian government’s connections to telecommunications company MTN-Irancell than those found on Wikipedia, including assertions about links to the office of Iran’s supreme leader. The AI also repeated claims about Sir Richard Evans’ work that The Guardian had previously debunked.

OpenAI is not the only company whose AI models appear to reference Grokipedia. Users have reported that Anthropic’s Claude has also cited Musk’s platform on topics ranging from petroleum production to Scottish ales, suggesting a broader industry trend.

An OpenAI spokesperson defended the practice, stating that the model’s web search “aims to draw from a broad range of publicly available sources and viewpoints” while applying “safety filters to reduce the risk of surfacing links associated with high-severity harms.” They emphasized that ChatGPT clearly shows which sources informed its responses through citations, and that ongoing programs exist to filter out low-credibility information and influence campaigns.

Disinformation researchers express concern about this development. Nina Jankowicz, who has studied “LLM grooming” — the process by which bad actors flood the internet with misinformation to influence AI training data — noted that Grokipedia entries she reviewed were “relying on sources that are untrustworthy at best, poorly sourced and deliberate disinformation at worst.”

The issue connects to broader concerns about AI models absorbing and repeating problematic content. Last spring, security experts warned that Russian propaganda networks were producing massive volumes of disinformation in an apparent attempt to seed AI models with false information. In June, members of the U.S. Congress raised concerns that Google’s Gemini had repeated Chinese government positions on human rights abuses in Xinjiang and on COVID-19 policies.

A particular worry is that AI citation may actually enhance the credibility of dubious sources. “They might say, ‘oh, ChatGPT is citing it, these models are citing it, it must be a decent source, surely they’ve vetted it,’” Jankowicz explained.

Once misinformation enters AI systems, removing it proves challenging. Jankowicz recently discovered that a major news outlet had attributed a fabricated quote to her. Despite the outlet removing the quote upon her request, AI models continued to cite it as hers.

When approached for comment on the findings, a spokesperson for xAI, which owns Grokipedia, responded simply: “Legacy media lies.”


12 Comments

  1. Reliance on Grokipedia by ChatGPT is deeply troubling. AI systems must be trained on authoritative, unbiased sources to provide accurate, trustworthy information to users.

    • Linda Rodriguez

      Absolutely. The credibility of AI-generated content depends on the quality and neutrality of the underlying data sources. This is a serious problem that needs to be resolved.

  2. Patricia Thomas

    The use of Grokipedia by ChatGPT raises serious concerns about the integrity of AI-generated information. Transparency and accountability around data sources are critical for maintaining public trust.

    • You’re right, this is a significant issue that warrants careful scrutiny. OpenAI must address these problems to ensure the reliability of their AI models.

  3. Patricia Thomas

    Concerning that ChatGPT is relying on Grokipedia, an AI-generated encyclopedia with potential right-wing bias, as a source. Fact-checking and reliable sources are crucial for AI models to provide accurate information.

    • Elijah I. White

      I agree, the reliance on Grokipedia is worrying. AI systems need to be trained on authoritative, unbiased sources to avoid propagating misinformation.

  4. Jennifer U. Martinez

    It’s concerning that ChatGPT is citing Grokipedia, which has faced criticism for promoting right-wing narratives. Fact-checking and relying on reputable sources should be a priority for AI systems.

    • Agreed. Responsible AI development requires rigorously vetting data sources to avoid spreading misinformation, even inadvertently.

  5. The news that ChatGPT is citing Grokipedia is very concerning. Fact-checking and source validation should be essential components of AI development to prevent the spread of misinformation.

    • I agree, this is a major issue that requires immediate attention. OpenAI must take steps to ensure their models rely on reputable, impartial sources of information.

  6. William D. Miller

    This highlights the risks of AI systems drawing from unreliable sources. Robust safeguards and transparency around data sources are essential for large language models like ChatGPT.

    • Patricia Thomas

      Absolutely. Grokipedia’s questionable reputation should be a red flag. OpenAI needs to ensure their models use credible, fact-based information.
