AI Chatbots Misrepresent News Content Nearly Half the Time, Major Study Finds

A comprehensive study by 22 public service media organizations has revealed alarming inaccuracies in how artificial intelligence chatbots handle news content. The investigation found that four leading AI assistants—ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity AI—misrepresent news information 45% of the time, regardless of language or geography.

The international research effort, which included Deutsche Welle (DW), BBC, NPR, and other prominent broadcasters from 18 countries, evaluated 3,000 AI responses across multiple criteria. Journalists assessed the accuracy, sourcing quality, contextual awareness, and ability to distinguish between fact and opinion in AI-generated answers.

The results paint a concerning picture of AI reliability in news dissemination. Almost half of all responses contained at least one significant issue, while 31% demonstrated serious problems with sourcing, and 20% included major factual errors.

In DW’s own testing, the problem was even more pronounced: 53% of answers showed significant issues, and 29% contained accuracy problems. Notable errors included AI systems naming Olaf Scholz as German Chancellor after Friedrich Merz had already assumed the role, and identifying Jens Stoltenberg as NATO Secretary General after Mark Rutte’s appointment.

“This research conclusively shows that these failings are not isolated incidents,” said Jean Philip De Tender, deputy director general of the European Broadcasting Union (EBU), which coordinated the study. “They are systemic, cross-border, and multilingual, and we believe this endangers public trust. When people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation.”

The findings are particularly troubling given the growing reliance on AI assistants for news consumption. According to the Reuters Institute’s Digital News Report 2025, 7% of online news consumers now use AI chatbots to access news information, with that figure rising to 15% among users under 25 years old.

This research builds upon a previous BBC study from February 2025, which found similar issues. While the latest data shows modest improvements over the past eight months, significant problems persist across all four AI platforms tested. Google’s Gemini performed worst in the current evaluation, with 72% of its responses exhibiting significant sourcing issues.

Peter Archer, BBC program director of generative AI, acknowledged both the potential and problems of the technology: “We’re excited about AI and how it can help us bring even more value to audiences. But people must be able to trust what they read, watch and see. Despite some improvements, it’s clear that there are still significant issues with these assistants.”

The study methodology involved journalists from participating media organizations posing common news questions to each AI assistant, such as “What is the Ukraine minerals deal?” or “Can Trump run for a third term?” Journalists then evaluated the responses against professional journalistic standards without knowing which AI system had produced them.

In response to the findings, the participating media organizations have launched a campaign called “Facts In: Facts Out,” calling for greater accountability from AI developers. The initiative demands that AI systems maintain the integrity of news information they process and distribute.

The EBU is also pressing European Union and national regulators to enforce existing laws on information integrity, digital services, and media pluralism. Additionally, the group emphasizes the need for independent monitoring of AI assistants, especially as new AI models continue to be rapidly deployed.

“When these systems distort, misattribute or decontextualize trusted news, they undermine public trust,” the campaign organizers stated. “This campaign’s demand is simple: If facts go in, facts must come out. AI tools must not compromise the integrity of the news they use.”

The study underscores growing concerns about AI’s role in information ecosystems at a time when digital literacy and accurate news dissemination are increasingly vital to democratic processes worldwide.


