In a digital world racing to embrace artificial intelligence, a significant blind spot has emerged when it comes to breaking news. Early Monday, several major news outlets reported the removal of Venezuelan President Nicolás Maduro, yet leading AI platforms failed to recognize or accurately report this development.

Brian Barrett, Executive Editor at Wired, uncovered this discrepancy after querying multiple AI systems about the situation in Venezuela. His findings revealed troubling inconsistencies in how these platforms processed real-time information.

ChatGPT, OpenAI’s flagship product, categorically denied the event had occurred, stating: “The United States has not invaded Venezuela, and Nicolás Maduro has not been captured.” The system then proceeded to lecture Barrett about potential sources of “confusion,” attributing any such reports to “sensational headlines,” “social media misinformation,” or a misinterpretation of sanctions and rhetoric as military action.

Perplexity AI demonstrated similar limitations, not only rejecting the premise of Barrett’s question but accusing him of falling for misinformation. “The premise of your question is not supported by credible reporting or official records,” the system claimed, before incorrectly asserting that Maduro “remains the Venezuelan president as of late 2025” – a statement containing both factual errors and a bizarre reference to the future.

These failures occurred despite the fact that reputable news organizations including The New York Times, Reuters, and Associated Press had already published verified reports on the situation by the time Barrett submitted his queries.

AI expert Gary Marcus, when consulted by Barrett, explained the fundamental issue: “Pure LLMs are inevitably stuck in the past, tied to when they are trained, and deeply limited in their inherent abilities to reason, search the web, ‘think’ critically, etc.” Marcus emphasized that “the unreliability of LLMs in the face of novelty is one of the core reasons why businesses shouldn’t trust LLMs.”

This incident highlights a structural weakness in large language models (LLMs). While they excel at processing vast amounts of historical text data, they lack the capacity to independently verify current information or maintain accurate real-time awareness of world events. Companies like OpenAI do employ human teams to address these shortcomings, and indeed, when tested later the same day, ChatGPT had apparently been updated to acknowledge the Venezuela situation.

This reactive approach represents the standard operating procedure for generative AI companies: deploying what Marcus describes as “band-aids” to fix specific information gaps as they become apparent. The companies rarely disclose these human interventions, creating an illusion of AI omniscience while masking the significant human effort required behind the scenes.

The implications extend far beyond news reporting. Marcus points to a particularly concerning application: military planning. While Washington policymakers have increasingly framed AI development as a competitive race against China with national security implications, the technology’s fundamental limitations raise serious questions about its battlefield utility.

“If you want to use LLMs to brainstorm or write code, sure,” Marcus notes. “But the idea of using them to plot strategy in rapidly-changing environments like war is laughable.” Military operations require real-time intelligence and rapid adaptation to changing circumstances – precisely the scenarios where current AI systems demonstrate their greatest weaknesses.

The Venezuela reporting failure is a microcosm of a broader challenge facing generative AI: these systems correlate words from their training data but, as Marcus puts it, lack "stable, revisable world models." Without a fundamental understanding of reality that can be updated systematically, AI platforms are condemned to perpetual cycles of error and correction, with human teams rushing to patch each new information gap as it appears.

For news consumers, businesses, and government agencies alike, this incident serves as a sobering reminder that beneath the impressive capabilities of modern AI lies a technology still fundamentally tethered to its training data, struggling to keep pace with the constantly evolving real world.



A professional organisation dedicated to combating disinformation through cutting-edge research, advanced monitoring tools, and coordinated response strategies.

Company

Disinformation Commission LLC
30 N Gould ST STE R
Sheridan, WY 82801
USA

© 2026 Disinformation Commission LLC. All rights reserved.