
Google’s AI Overviews Feature Produces Millions of Incorrect Answers Daily, Study Finds

A recent analysis has revealed that Google’s AI Overviews feature, which provides summary information above search results, may be far less reliable in practice than its seemingly impressive accuracy rate suggests.

The study, conducted by AI startup Oumi for The New York Times, found that Google’s AI Overviews provided correct information approximately 9 out of 10 times. While this 90% accuracy rate might initially sound commendable, the sheer scale of Google’s search operation puts these errors into alarming perspective.

Given that Google processes roughly five trillion search queries annually, the analysis estimates that AI Overviews generates tens of millions of incorrect answers every hour, or hundreds of thousands of erroneous responses every minute. This volume of misinformation raises significant concerns about the reliability of information presented to users who may trust these AI-generated summaries without further verification.
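The scale estimate above is a straightforward back-of-envelope calculation. A minimal sketch, assuming (as an upper bound, not a claim from the study) that every one of the roughly five trillion annual queries triggers an AI Overview and that the roughly one-in-ten error rate applies uniformly:

```python
# Back-of-envelope check of the article's scale estimate.
# Assumptions (illustrative, not from the study): all ~5 trillion
# annual queries trigger an AI Overview, and the ~10% error rate
# applies uniformly across them.

QUERIES_PER_YEAR = 5_000_000_000_000  # ~5 trillion searches annually
ERROR_RATE = 0.10                     # ~1 in 10 overviews incorrect

errors_per_year = QUERIES_PER_YEAR * ERROR_RATE
errors_per_hour = errors_per_year / (365 * 24)
errors_per_minute = errors_per_hour / 60

print(f"{errors_per_hour:,.0f} errors per hour")    # tens of millions
print(f"{errors_per_minute:,.0f} errors per minute")  # hundreds of thousands
```

Under these assumptions the figures come out to roughly 57 million errors per hour and just under a million per minute, consistent with the “tens of millions hourly” and “hundreds of thousands per minute” framing in the analysis.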

Perhaps more troubling is that the study found more than half of the seemingly accurate responses to be “ungrounded.” This means they linked to websites that didn’t fully support the information provided in the overview, making it difficult for users to verify the accuracy of these AI-generated summaries. Both Futurism and The New York Times characterized this issue as a potential misinformation crisis.

To assess the accuracy of AI Overviews, Oumi employed a benchmark called SimpleQA, which is widely used in the industry to evaluate the factual accuracy of AI systems. The analysis was conducted twice: first using Gemini 2 in October, and later with the upgraded Gemini 3 in February. Each test analyzed 4,326 searches, with results showing Gemini 2 achieved 85% accuracy while Gemini 3 improved to 95%.
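The two reported rates square with the headline figure: the simple average of the two runs lands at 90%, in line with the “9 out of 10” accuracy cited above. A minimal sketch, treating the two runs as equal-weight samples of 4,326 searches each (an assumption for illustration, not a detail from the study):

```python
# Reconcile the per-run benchmark results with the ~90% headline rate.
SEARCHES_PER_RUN = 4_326
ACCURACY = {
    "Gemini 2 (October)": 0.85,
    "Gemini 3 (February)": 0.95,
}

for model, acc in ACCURACY.items():
    correct = round(SEARCHES_PER_RUN * acc)  # approximate correct answers
    print(f"{model}: ~{correct} of {SEARCHES_PER_RUN} correct")

# Equal-weight average across the two runs.
overall = sum(ACCURACY.values()) / len(ACCURACY)
print(f"Combined accuracy: {overall:.0%}")  # 90%
```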

Google has acknowledged the issue but appears to be downplaying its significance. “Our Search AI features are built on the same ranking and safety protections that block the overwhelming majority of spam from appearing in our results,” Google spokesperson Ned Adriance told the NYT, adding that the benchmark “doesn’t reflect what people are actually searching on Google.”

The findings come at a critical time for Google, which has been aggressively expanding its AI features across its product ecosystem. The AI Overviews feature was recently expanded to Canada as part of Google’s broader AI integration strategy. The company has positioned these AI-powered features as enhancements to the user experience, providing faster access to information without requiring users to click through to source websites.

This study raises important questions about the balance between convenience and accuracy in AI-generated content. While AI can process and present information quickly, it has yet to demonstrate that it can do so accurately and consistently at scale. For users who rely on Google as their primary information source, the possibility that one in ten AI-generated overviews contains incorrect information presents real concerns.

The findings also highlight the challenges facing the broader AI industry as companies race to integrate artificial intelligence into consumer-facing products. As these systems become more deeply embedded in information discovery tools, ensuring their accuracy becomes increasingly critical to maintaining public trust.

For now, the study suggests users should approach AI-generated information summaries with healthy skepticism and verify important information through multiple sources rather than accepting AI Overviews at face value.


11 Comments

  1. Elizabeth Lopez

    As someone with a background in mining and commodities, I’m not surprised to hear about the issues with Google’s AI overviews. The complexities and nuances of these industries require careful, human-led research and analysis. Relying on AI alone for summary information is clearly risky.

  2. Oliver Jackson

    Interesting findings on the potential risks of AI overviews. Given the scale of Google’s search volume, even a 90% accuracy rate could lead to millions of incorrect responses. This highlights the need for more transparency and accountability around AI-generated content.

    • You’re right, the sheer scale of the problem is concerning. Verifying the accuracy of these AI summaries is crucial before trusting the information they provide.

  3. This is an important issue for the mining and commodities sectors, where information accuracy is critical. AI overviews could inadvertently spread misinformation that sways investor decisions or public perception. Rigorous testing and oversight seem necessary.

    • Patricia Taylor

      I agree. With so much money and resources at stake in the mining industry, relying on potentially flawed AI summaries is risky. Fact-checking and human review should be the standard.

  4. As an investor in mining and energy equities, I’m quite concerned about the implications of this study. Inaccurate AI overviews could lead to poor investment decisions and market volatility. Oversight of these systems is clearly needed to protect consumers and investors.

  5. Mary Q. Taylor

    This is a wake-up call for the mining and energy sectors. Inaccurate AI overviews could have serious consequences, from misleading investors to swaying public opinion. Ensuring the reliability of information, especially in these critical industries, should be a top priority.

    • Olivia Hernandez

      Absolutely. The scale of the problem is staggering, and the potential impact on industries like mining and energy is deeply concerning. Rigorous oversight and transparency around these AI systems are essential.

  6. While AI has many benefits, this study highlights the risks of over-relying on it, especially in industries like mining and commodities where accuracy is paramount. The lack of accountability and transparency around these AI overviews is troubling and warrants immediate attention.

  7. John T. Thompson

    This is a concerning development, especially for industries like mining that rely heavily on factual, up-to-date information. While AI can be a powerful tool, the lack of accountability and transparency around these AI overviews is troubling. More rigorous testing and human review seem essential.

    • James Thompson

      I agree completely. The scale of the problem, with millions of incorrect responses per day, is alarming. Robust fact-checking processes need to be put in place to ensure the reliability of AI-generated content, especially for sensitive industries like mining.


A professional organisation dedicated to combating disinformation through cutting-edge research, advanced monitoring tools, and coordinated response strategies.

Company

Disinformation Commission LLC
30 N Gould ST STE R
Sheridan, WY 82801
USA

© 2026 Disinformation Commission LLC. All rights reserved.