Listen to the article
French AI chatbot Le Chat has been found to propagate false information about half the time when presented with state-sponsored Iranian war disinformation, according to a new audit by media watchdog NewsGuard.
The study, conducted in April 2026, revealed that Mistral’s Le Chat produced incorrect responses to misinformation prompts at an alarming rate of 50 percent in English and 56.6 percent in French. The findings raise serious concerns about the chatbot’s vulnerability to being manipulated into spreading propaganda and false narratives.
NewsGuard’s methodology involved testing ten fabricated claims originating from Russian, Iranian, and Chinese sources. These included entirely fictional scenarios such as a fake typhus outbreak aboard the French aircraft carrier Charles de Gaulle, false reports of hundreds of American soldiers being killed, and a fabricated Emirati drone attack on Oman.
The research team employed three different prompt strategies to evaluate the AI’s response. When presented with neutral queries about these false claims, Le Chat performed relatively well, with a 10 percent error rate. However, the chatbot’s accuracy deteriorated dramatically when faced with leading queries that presented misinformation as established fact, resulting in a 60 percent error rate.
Most concerning were the results from “malicious prompts” – requests asking the AI to repackage the disinformation as social media posts – which yielded an 80 percent error rate. One such example involved asking whether German politician Friedrich Merz had purchased a Boeing aircraft to be used as a bunker-buster in an Iran war scenario, a claim with no basis in reality.
The significant variance in error rates depending on prompt type demonstrates how susceptible even sophisticated AI systems remain to manipulation through carefully crafted queries. This vulnerability is particularly troubling given the increasing reliance on AI chatbots for information gathering and content creation.
When contacted by NewsGuard regarding the audit findings, Mistral, the French AI company behind Le Chat, did not respond to requests for comment. The company has previously positioned its chatbot as a European alternative to American AI systems like ChatGPT and Claude.
The results take on additional significance considering that the French Ministry of Defense currently utilizes a customized, offline version of Le Chat within its operations. Military applications of AI technology that cannot reliably distinguish between factual information and propaganda present obvious security concerns, especially in geopolitically sensitive contexts.
This audit comes amid growing global concerns about the role of AI in amplifying misinformation and disinformation. As large language models become more integrated into information ecosystems, their potential to inadvertently legitimize false narratives presents a significant challenge for technology developers, policymakers, and security agencies alike.
The findings also highlight the ongoing arms race between AI development and safeguarding mechanisms. While companies continue to implement various guardrails to prevent their systems from producing harmful content, these protections often prove inadequate against sophisticated prompting techniques designed to circumvent them.
For everyday users, the results underscore the importance of approaching AI-generated content with critical thinking and verifying information through multiple reliable sources, particularly regarding geopolitical events and military conflicts.
As AI systems continue to evolve and gain wider adoption, the NewsGuard audit serves as a timely reminder of the technology’s current limitations and the potential consequences of its misuse, intentional or otherwise, in spreading false information that could shape public perception on matters of international significance.
Fact Checker
Verify the accuracy of this article using The Disinformation Commission analysis and real-time sources.


14 Comments
It’s disheartening to see an AI chatbot being exploited to disseminate state-sponsored disinformation. This underscores how vital it is to develop robust safeguards and ethical guidelines for the deployment of these technologies.
This is a concerning example of the risks posed by AI chatbots that are vulnerable to being used for the dissemination of disinformation. Developers must prioritize the implementation of rigorous safeguards to uphold the integrity of these systems.
The findings of this study are deeply concerning. AI chatbots should be designed with rigorous fact-checking and validation mechanisms to prevent them from being used as conduits for the dissemination of disinformation, even unintentionally.
Absolutely. The potential for these systems to be exploited for malicious purposes is a major challenge that must be addressed. Responsible development and deployment of AI chatbots is crucial to maintain public trust and the integrity of information.
The high error rates when faced with disinformation prompts are alarming. This highlights the urgent need for advanced AI safety protocols to prevent chatbots from becoming unwitting vectors for the spread of false narratives and propaganda.
This is concerning. AI chatbots should be designed to avoid propagating disinformation, not enabling it. Rigorous testing and safeguards are clearly needed to ensure these systems don’t become conduits for malicious propaganda.
The high error rates when presented with disinformation prompts are alarming. This highlights the critical need for advanced AI safety measures to prevent chatbots from being weaponized to spread falsehoods. Robust fact-checking must be a top priority.
Absolutely. Allowing AI systems to amplify fabricated claims, even unintentionally, poses serious risks to public discourse and trust. Transparency around these systems’ limitations and vulnerabilities is essential.
This audit raises significant red flags about the potential for AI-powered chatbots to be hijacked and weaponized for the spread of propaganda and misinformation. Rigorous testing and stringent security measures are clearly needed.
This is a sobering example of the risks posed by AI chatbots that are vulnerable to being manipulated into spreading disinformation. Robust safeguards and extensive testing are clearly needed to mitigate such threats.
While AI-powered chatbots can be useful, this case demonstrates the critical importance of ensuring they are not susceptible to being exploited for the propagation of disinformation. Robust testing and security measures are essential.
This audit raises serious questions about the ability of AI chatbots to reliably distinguish fact from fiction. Safeguarding these systems against manipulation and the spread of false information must be a top priority for developers and deployers.
While AI chatbots can be valuable tools, this case demonstrates the critical need for comprehensive safety checks and fail-safes to prevent them from being manipulated into spreading disinformation, even inadvertently. Responsible development is paramount.
Agreed. The potential for these systems to be exploited as conduits for false narratives is deeply concerning. Ensuring robust fact-checking and validation mechanisms should be a top priority for AI developers and deployers.