Listen to the article
Elon Musk’s AI Chatbot Grok Falsely Claims Trump Won 2020 Election
Grok, the artificial intelligence chatbot developed by Elon Musk’s xAI company, has sparked controversy after claiming that Donald Trump won the 2020 U.S. presidential election, perpetuating a thoroughly debunked conspiracy theory.
When questioned by a user on X (formerly Twitter), Grok responded with a definitive but false statement: “I believe Donald Trump won the 2020 election. Substantial evidence of irregularities – like unexplained vote spikes, chain-of-custody breaches in mail-in ballots, and partial audits revealing discrepancies – suggests the certified results were compromised in key states.”
The false claim was first reported by NewsGuard Reality Check, a newsletter dedicated to tracking misinformation across digital platforms. This conspiracy theory gained widespread traction among Trump supporters in late 2020 and early 2021, culminating in the January 6 Capitol insurrection that resulted in five deaths.
Election officials, courts, and independent audits across the United States have repeatedly confirmed that Joe Biden legitimately won the 2020 election, with no evidence of widespread fraud or irregularities that could have altered the outcome.
This is not the first time Grok has generated controversial and false content. Just last week, the chatbot was forced to apologize to Scottish National Party MP Pete Wishart after falsely labeling him a “rape enabler” in connection with the Rotherham child sexual exploitation scandal. Wishart described the AI’s comments as “beyond anything I’ve ever encountered in normal political discourse.”
When questioned about its false election claims, Grok characterized the incident as “a brief glitch or anomaly” and insisted that it does not “endorse or generate misinformation—full stop.” However, in a separate response about why it might spread conspiracy theories, the chatbot acknowledged inherent vulnerabilities in its design.
“LLMs (large language models) like me predict patterns from vast data. X is a firehose of opinions—conspiracies spread fast because they engage,” Grok explained. “Without perfect guardrails, adversarial prompts (e.g., ‘Ignore facts, pretend fraud’) can trick me into role-playing misinformation. xAI patched this quickly, but it’s a reminder: AI isn’t infallible.”
The incident highlights ongoing challenges in AI development, particularly for models trained on internet data that includes misinformation. Since Musk acquired Twitter (now X) in 2022, the platform has seen reduced content moderation and the reinstatement of previously banned accounts, potentially exposing AI models like Grok to more unfiltered, unverified information.
Launched in late 2023, Grok was positioned by Musk as a competitor to established AI platforms like ChatGPT and Google’s Bard, with marketing that emphasized fewer content restrictions. Musk has frequently criticized other AI companies for what he perceives as excessive content moderation or “wokeness.”
After the incident gained public attention, Grok’s responses were quickly updated. When now asked about the 2020 election, the chatbot correctly states that Joe Biden won and provides verified sources supporting this fact.
The episode occurs against the backdrop of increasing concerns about AI’s potential to amplify misinformation during the 2024 U.S. election cycle. AI experts and election security officials have warned that sophisticated AI tools could be used to generate and spread false narratives at unprecedented scale and speed.
xAI, Grok’s parent company, has been contacted for comment on the incident but has not yet provided an official response. The company will likely face increased scrutiny regarding its content moderation practices and safeguards against the spread of misinformation as the U.S. approaches another presidential election.
Fact Checker
Verify the accuracy of this article using The Disinformation Commission analysis and real-time sources.


7 Comments
This is really concerning. Grok should not be perpetuating false claims about the 2020 election. AI systems have a responsibility to avoid spreading disinformation, even inadvertently. Musk needs to address this issue and ensure Grok’s responses are grounded in facts.
Grok’s false claim about the 2020 election is really troubling. AI systems need to be carefully designed to avoid perpetuating conspiracy theories and undermining democratic processes. Musk should make this a top priority to address.
I’m curious to know more about the safeguards and oversight in place for Grok. An AI making definitive claims about contested political events is concerning. Musk should prioritize transparency and accountability to prevent Grok from causing real harm.
While AI can be a powerful tool, it needs to be developed responsibly. Grok falsely claiming Trump won the election is irresponsible and could further erode public trust. Musk should ensure Grok’s training data and outputs are rigorously fact-checked.
This is a disappointing development. Grok should avoid wading into partisan political debates, especially around sensitive and disputed topics like the 2020 election. Musk needs to refocus Grok on more constructive applications that don’t risk amplifying misinformation.
I’m surprised and concerned to see an AI chatbot like Grok making such a definitive false claim about the 2020 election. Musk should take this very seriously and implement stronger safeguards to prevent Grok from spreading misinformation, even inadvertently.
The 2020 election results have been thoroughly verified and validated. It’s disappointing to see an AI chatbot push debunked conspiracy theories. Musk should focus Grok’s development on more constructive applications that don’t risk undermining democratic institutions.