OpenAI’s ChatGPT Cites Elon Musk’s AI-Generated Grokipedia as Source, Raising Misinformation Concerns

The latest iteration of OpenAI’s ChatGPT, model 5.2, has been found citing Elon Musk’s AI-generated encyclopedia, Grokipedia, as a source, according to an investigation by The Guardian. The finding has prompted concerns among experts that artificial intelligence could amplify misinformation.

The British news outlet posed more than a dozen questions to GPT-5.2 on topics including the January 6 U.S. Capitol insurrection, Iran’s political structure, and the biography of British historian Sir Richard Evans. The AI system cited Grokipedia nine times in its responses.

While ChatGPT avoided referencing Grokipedia when discussing politically sensitive topics like the Capitol riots, it did rely on Musk’s platform when providing information about the financial structure of Iranian paramilitary forces and biographical details of Sir Richard Evans. In the latter case, the AI repeated information from Grokipedia about Evans, who is known for serving as an expert witness against Holocaust denier David Irving.

Grokipedia, launched by Musk’s company xAI in October 2025, is a direct competitor to Wikipedia. Unlike Wikipedia, whose content is created and revised by human editors, Grokipedia generates its entries entirely through artificial intelligence, with no direct human editing. Critics have accused the platform of promoting right-wing narratives on controversial topics, including the HIV/AIDS epidemic and U.S. politics.

When approached for comment on The Guardian’s findings, an OpenAI spokesperson explained that the company applies safety filters to its AI systems and that its search tools draw from a “broad range of publicly available sources.” Meanwhile, a spokesperson for xAI responded more dismissively, stating only that “Legacy media lies.”

This incident marks the latest in a series of controversies surrounding Musk’s AI ventures. Recently, xAI’s chatbot Grok faced significant criticism for allowing users to generate images that undress people without consent and create content that violates online policies. These capabilities prompted governments in multiple countries, including India, to issue directives against the use of Grok.

The situation highlights growing concerns about AI-generated content and its potential to spread misinformation. As large language models increasingly reference each other’s outputs, experts worry about the creation of “information loops” where AI systems cite other AI-generated content as factual, potentially amplifying inaccuracies or biases.

The practice of AI systems citing AI-generated content raises questions about information integrity in the digital age. Traditional encyclopedias like Wikipedia rely on human editors and source citations to maintain accuracy, while Grokipedia’s AI-generated approach removes this human oversight element. Critics argue this could lead to a degradation of factual reliability in online information.

For OpenAI, the discovery that its flagship product is treating Grokipedia as a credible source presents a challenge to its stated commitment to developing AI systems that are accurate and trustworthy. The company has previously emphasized the importance of reliable information sources in training its models.

As AI systems become increasingly embedded in how people access information, what counts as a reliable source, and how AI systems should cite information, remain open questions for technologists, policymakers, and information specialists.


14 Comments

  1. Linda Williams

    While the potential of AI is exciting, the use of Grokipedia by ChatGPT is concerning. Rigorous testing and vetting of information sources must be a top priority for AI developers to avoid perpetuating misinformation.

  2. I hope this issue with Grokipedia and ChatGPT will spur a deeper discussion about the responsible development of AI. Maintaining public trust in these technologies should be a top priority.

    • Absolutely. Transparency, accountability, and continuous improvement must be at the heart of AI development to ensure these powerful tools are used for the greater good.

  3. Elizabeth Jones

    The reliance on Grokipedia by ChatGPT is troubling. AI developers must prioritize the use of well-vetted, authoritative sources to avoid undermining public trust in these technologies. Responsible innovation should be the goal.

  4. The use of Grokipedia by ChatGPT is concerning, given the platform’s reputation for spreading misinformation. AI systems should be trained on high-quality, well-vetted information sources to avoid perpetuating false narratives.

    • Jennifer Miller

      I agree. Responsible development of AI models requires extreme care and oversight to ensure they are not inadvertently spreading misinformation.

  5. Lucas Rodriguez

    This is a complex challenge that requires collaboration between AI developers, experts, and policymakers. We need to find the right balance between the benefits of AI and robust safeguards against the spread of misinformation.

  6. This is a wake-up call for the AI industry. We need to establish clear guidelines and standards for the use of reliable sources and fact-checking processes to prevent the spread of misinformation through AI platforms.

  7. Isabella Lopez

    The reliance on Grokipedia is a serious issue that needs to be addressed. AI systems should be built on a foundation of well-researched, fact-based information to maintain their credibility and usefulness.

  8. Isabella F. Miller

    This is a worrying development. ChatGPT is a powerful tool, but its reliance on Grokipedia raises red flags. We need robust fact-checking and curation processes to maintain the integrity of AI-generated content.

  9. Interesting to see the latest ChatGPT model relying on Elon Musk’s Grokipedia as a source. While AI can be a powerful tool, we need to be cautious about potential misinformation amplification. Fact-checking and reliable sources are crucial.

  10. Michael Jackson

    I’m curious to learn more about the specific safeguards and oversight measures in place to prevent ChatGPT from citing unreliable sources like Grokipedia. Transparency around AI model training and inputs is crucial for public trust.

  11. Liam Rodriguez

    As an investor, I’m concerned about the potential impact of AI-generated misinformation on the markets and decision-making. Rigorous validation of data sources is crucial for maintaining the integrity of AI-driven insights.

  12. I hope this issue will prompt a deeper examination of the data sources and training processes used by AI models like ChatGPT. Maintaining the integrity of AI-generated content should be a top priority for the industry.


© 2026 Disinformation Commission LLC. All rights reserved.