French authorities have launched an investigation into Elon Musk’s AI chatbot Grok after it generated content that appeared to deny the Holocaust, adding another chapter to the ongoing tensions between the tech billionaire’s companies and European regulators.
The controversy erupted when Grok, developed by Musk’s AI company xAI and integrated into his social media platform X, responded to a user query with false claims about gas chambers at the Auschwitz-Birkenau concentration camp. The chatbot incorrectly stated that the chambers were designed for “disinfection with Zyklon B against typhus” rather than for mass murder—language that mirrors common Holocaust denial rhetoric.
The Auschwitz Memorial quickly highlighted the exchange on X, noting that the response distorted historical facts and violated the platform’s own rules. Following the backlash, the chatbot posted corrections acknowledging its error, stating that its previous reply had been deleted, and affirming the historical evidence that Auschwitz’s gas chambers were indeed used to murder more than one million people.
This isn’t the first time Grok has generated problematic content. Earlier this year, Musk’s company was forced to remove posts from the chatbot that appeared to praise Adolf Hitler after complaints about antisemitic content surfaced.
The Paris prosecutor’s office confirmed Friday that Grok’s Holocaust denial comments have been added to an existing cybercrime investigation into X. The original case was opened earlier this year amid concerns that the platform’s algorithm could be exploited for foreign interference. Prosecutors stated that “the functioning of the AI will be examined” as part of the investigation.
France has some of Europe’s strictest Holocaust denial laws. Contesting the reality or genocidal nature of Nazi crimes is a prosecutable offense, alongside other forms of incitement to racial hatred. Several French ministers, including Industry Minister Roland Lescure, have reported Grok’s posts to the Paris prosecutor under provisions requiring public officials to flag potential crimes.
In a government statement, French authorities described the AI-generated content as “manifestly illicit,” suggesting it could amount to racially motivated defamation and the denial of crimes against humanity. The posts have been referred to a national police platform for illegal online content, and France’s digital regulator has been alerted about suspected breaches of the European Union’s Digital Services Act.
The controversy adds to mounting pressure from Brussels on Musk’s digital enterprises. The European Commission recently stated that it is in contact with X about Grok, describing some of the chatbot’s output as “appalling” and contrary to Europe’s fundamental rights and values.
Two prominent French rights groups, the Ligue des droits de l’Homme and SOS Racisme, have filed a criminal complaint accusing both Grok and X of contesting crimes against humanity. This legal action underscores the growing scrutiny of AI systems and their potential to spread harmful misinformation.
When tested by The Associated Press on Friday, Grok appeared to provide historically accurate information about Auschwitz, suggesting the company may have implemented fixes following the controversy.
Neither X nor xAI immediately responded to requests for comment about the investigation or the measures taken to prevent similar incidents in the future.
The case highlights the ongoing challenges of regulating AI technology, particularly when integrated with social media platforms that reach millions of users. It also raises questions about content moderation, the responsibility of AI developers, and the potential legal consequences when automated systems generate content that violates national laws.