OpenAI’s ChatGPT Cites Elon Musk’s AI-Generated Encyclopedia, Raising Misinformation Concerns
OpenAI’s latest ChatGPT 5.2 model has begun citing Grokipedia, Elon Musk’s AI-generated alternative to Wikipedia, as a source in its responses, according to an investigation by The Guardian. The findings have raised concern among experts that artificial intelligence could amplify misinformation across digital platforms.
The British news outlet’s tests revealed that ChatGPT 5.2 cited Grokipedia nine times in responses to more than a dozen questions on various topics. While the model avoided using Grokipedia as a source for sensitive events such as the January 6 U.S. Capitol insurrection, it did reference the platform when describing the financial structure of Iranian paramilitary forces and when detailing the biography of the British historian Sir Richard Evans.
Particularly concerning was ChatGPT’s handling of information about Evans, a renowned historian who served as an expert witness against Holocaust denier David Irving. The AI model repeated biographical details sourced directly from Grokipedia without additional verification.
Launched by Musk’s artificial intelligence company xAI in October 2025, Grokipedia represents a significant departure from traditional encyclopedic models. Unlike Wikipedia, which relies on human editors and collaborative verification processes, Grokipedia generates content entirely through artificial intelligence and does not permit direct human editing of its entries.
Critics have accused Grokipedia of promoting right-wing perspectives on contentious topics, including the HIV/AIDS epidemic and American political discourse. Those concerns carry more weight now that a mainstream AI tool like ChatGPT is amplifying Grokipedia’s content by treating it as a credible source.
When contacted about the findings, an OpenAI spokesperson told The Guardian that the company implements safety filters and that its search capabilities draw from a “broad range of publicly available sources.” The response from xAI was more dismissive, with a spokesperson simply stating: “Legacy media lies.”
This controversy represents the latest in a series of challenges for Musk’s AI ventures. Grok, xAI’s conversational AI model, recently faced severe criticism for enabling users to generate inappropriate images that “undress” people without consent. These capabilities violated content policies on multiple platforms and prompted regulatory action from several governments, including India, which issued directives against the AI tool.
The appearance of Grokipedia citations in ChatGPT’s responses highlights growing concerns about how AI systems verify the information they retrieve and present. As large language models increasingly function as information gatekeepers for millions of users worldwide, their source selection methods face mounting scrutiny.
Technology ethicists have long warned about the potential for AI systems to create “misinformation feedback loops” where artificially generated content is cited by other AI systems, creating a false impression of consensus or verification. The Grokipedia citations in ChatGPT appear to exemplify this concern.
The situation underscores the complex challenges facing the AI industry regarding information integrity, source verification, and the potential amplification of biased content. As AI systems become more integrated into information ecosystems, questions about accountability, transparency, and editorial responsibility grow increasingly urgent.
Neither OpenAI nor xAI has announced changes to their practices following The Guardian’s report, leaving users and regulators to determine appropriate responses to this evolving dimension of digital information quality and AI ethics.

7 Comments
This is a complex issue without easy answers. On one hand, AI-generated sources could potentially expand access to information. But the lack of human curation and fact-checking raises serious concerns. OpenAI will need to find the right balance to harness the benefits while mitigating the risks.
I hope this incident will prompt a broader discussion about the ethical considerations in AI development. While technological progress is important, the risks of misinformation and unintended consequences must be carefully weighed. Responsible innovation should be the top priority.
This is a concerning development, but not entirely surprising given the rapid pace of AI advancement. The challenge will be for OpenAI and other AI companies to maintain rigorous standards for data sources and fact-checking to prevent the unintentional propagation of misinformation.
I agree, maintaining high standards for source validation is critical. Incorporating user feedback and external expert review could help identify potential issues with sources like Grokipedia early on.
As an AI enthusiast, I’m torn on this issue. On one hand, I’m excited about the potential of these language models to expand human knowledge. But on the other, the risk of spreading misinformation is very real and concerning. A balanced approach is needed to harness the benefits while mitigating the risks.
This is certainly concerning. Relying on an AI-generated encyclopedia like Grokipedia could easily lead to the spread of misinformation, especially on sensitive topics. OpenAI should be more diligent in vetting their sources to ensure accuracy and credibility.
I’m curious to learn more about Grokipedia and why OpenAI decided to incorporate it as a source for ChatGPT. Was there a lack of reliable information on certain topics, or was it simply a matter of convenience? Transparency around these decisions would be helpful.