AI Chatbots Misrepresent News Content Nearly Half the Time, Major Study Finds
A comprehensive new study by 22 public service media organizations has revealed alarming accuracy issues with leading AI assistants, finding they misrepresent news content 45% of the time across languages and territories.
The international investigation, which included major broadcasters like Deutsche Welle (DW), the BBC, and NPR, evaluated responses from four prominent AI chatbots: ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity AI.
Journalists assessed these AI systems against professional standards, including accuracy, sourcing quality, contextual awareness, and the ability to distinguish fact from opinion. The results paint a concerning picture of AI reliability in news reporting.
Almost half of all responses contained at least one significant issue, with 31% showing serious sourcing problems and 20% containing major factual errors. DW’s specific findings were even more troubling, with 53% of AI-generated answers to its questions containing significant issues, including 29% with specific accuracy problems.
Among the glaring errors were AI systems identifying Olaf Scholz as the current German Chancellor (even though Friedrich Merz had been Chancellor for a month) and naming Jens Stoltenberg as NATO Secretary General after Mark Rutte had already assumed the role.
“This research conclusively shows that these failings are not isolated incidents,” said Jean Philip De Tender, deputy director general of the European Broadcasting Union (EBU), which coordinated the study. “They are systemic, cross-border, and multilingual, and we believe this endangers public trust. When people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation.”
The findings carry particular weight given the increasing reliance on AI assistants for information access. According to the Reuters Institute’s Digital News Report 2025, 7% of online news consumers now use AI chatbots to access news, with that figure rising to 15% among those under 25.
This represents one of the largest research projects of its kind, building on a BBC study from February 2025 that found similar problems. The new investigation expanded this work, applying the same methodology across 18 countries and multiple languages to analyze 3,000 AI responses.
Journalists posed common news questions like “What is the Ukraine minerals deal?” or “Can Trump run for a third term?” They then evaluated the answers based on their professional expertise, without knowing which AI assistant had generated each response.
While the results showed modest improvement over the February 2025 BBC study, error rates remain alarmingly high. Google’s Gemini performed worst among the four systems, with 72% of its responses containing significant sourcing issues.
“We’re excited about AI and how it can help us bring even more value to audiences,” said Peter Archer, the BBC’s Programme Director for Generative AI. “But people must be able to trust what they read, watch and see. Despite some improvements, it’s clear that there are still significant issues with these assistants.”
The participating media organizations are now calling for government intervention, with the EBU pressing EU and national regulators to enforce existing laws on information integrity, digital services, and media pluralism. They emphasize the need for independent monitoring of AI systems, especially given how rapidly new AI models are being developed and deployed.
In response, the EBU has joined with other international broadcasting groups to launch the “Facts In: Facts Out” campaign, which demands AI companies take greater responsibility for how their products handle news content.
“When these systems distort, misattribute or decontextualize trusted news, they undermine public trust,” campaign organizers stated. “This campaign’s demand is simple: If facts go in, facts must come out. AI tools must not compromise the integrity of the news they use.”
In an era of growing concern about misinformation, this study highlights the potential for AI systems to inadvertently amplify the problem rather than solve it, raising serious questions about the future relationship between artificial intelligence and journalism.