Google’s AI Overviews Spreading Misinformation at Unprecedented Scale

Google’s AI-generated search summaries are delivering false information to millions of users every hour, according to a new analysis that raises serious concerns about the technology’s widespread deployment.

A recent study conducted by AI startup Oumi for The New York Times found that Google’s AI Overviews—the AI-generated summaries that appear at the top of search results—are accurate approximately 91 percent of the time. While this figure might appear impressive in isolation, the scale of Google’s search operation transforms it into a significant problem.

Google processes roughly five trillion search queries annually. This volume means that even with a 91 percent accuracy rate, the system is generating tens of millions of incorrect answers every hour—or hundreds of thousands every minute, according to Oumi’s calculations.
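The per-hour figure follows directly from the two numbers cited. A minimal sketch, assuming queries are spread evenly across the year (a simplification; the article does not describe Oumi's exact method):

```python
# Rough reproduction of the error-rate arithmetic cited in the article.
# Assumes an even query distribution over the year, which is a
# simplification of however Oumi actually computed its figures.

ANNUAL_QUERIES = 5_000_000_000_000  # ~5 trillion searches per year
ACCURACY = 0.91                     # accuracy rate reported in the study

errors_per_year = ANNUAL_QUERIES * (1 - ACCURACY)
errors_per_hour = errors_per_year / (365 * 24)
errors_per_minute = errors_per_hour / 60

print(f"errors per year:   {errors_per_year:,.0f}")
print(f"errors per hour:   {errors_per_hour:,.0f}")    # tens of millions
print(f"errors per minute: {errors_per_minute:,.0f}")  # hundreds of thousands
```

Under these assumptions the sketch yields roughly 51 million incorrect answers per hour and about 850,000 per minute, consistent with the orders of magnitude the article reports.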

The findings are particularly troubling given research on how users interact with AI systems. Studies have consistently shown that people tend to accept AI-generated information without verification. One report found that only 8 percent of users actually double-check an AI’s answers, while another experiment revealed that users followed incorrect AI advice nearly 80 percent of the time—a phenomenon researchers have labeled “cognitive surrender.”

“Large language models adopt an authoritative tone and can confidently present fabricated information as fact,” the analysis noted. Combined with the convenience of Google’s AI Overviews, this creates a perfect storm for widespread misinformation.

Oumi conducted its analysis using SimpleQA, an industry benchmark for AI accuracy developed by OpenAI. Researchers tested the feature twice—first in October using Google’s Gemini 2 model, and again in February after Google upgraded to Gemini 3.

Each testing round involved 4,326 Google searches. The results showed improvement between iterations, with Gemini 3 achieving 91 percent accuracy compared to Gemini 2’s 85 percent. While this indicates progress, it also reveals that Google rolled out earlier versions of the technology knowing they were more prone to errors.
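Taking the article's rounded accuracy figures at face value, those percentages translate into a concrete count of wrong answers per testing round; a quick sketch:

```python
# Approximate wrong-answer counts per testing round, treating the
# article's rounded accuracy percentages as exact.

QUERIES_PER_ROUND = 4_326  # searches per round, per the article

for model, accuracy in [("Gemini 2", 0.85), ("Gemini 3", 0.91)]:
    wrong = round(QUERIES_PER_ROUND * (1 - accuracy))
    print(f"{model}: ~{wrong} incorrect answers out of {QUERIES_PER_ROUND}")
```

That works out to roughly 649 incorrect answers per round for Gemini 2 versus roughly 389 for Gemini 3, a real improvement, but still hundreds of wrong answers in a single benchmark run.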

Google disputed the findings, with spokesman Ned Adriance telling the Times, “This study has serious holes. It doesn’t reflect what people are actually searching on Google.” However, the Times reported that Google’s own internal analysis found Gemini 3 produced incorrect information 28 percent of the time, though the company claims AI Overviews perform better because they incorporate Google search results before generating answers.

Perhaps most concerning is the issue of “ungrounded” responses—AI-generated answers that cite websites that don’t actually support the information provided. The analysis found this problem has worsened, with ungrounded responses increasing from 37 percent with Gemini 2 to 56 percent with Gemini 3. This trend not only suggests the AI is fabricating information but also makes it significantly harder for users to verify claims.

The findings come amid growing scrutiny of AI deployment by major tech companies. Microsoft recently faced criticism for terms of service language that described its Copilot AI as being for “entertainment purposes only,” seemingly undermining claims about the technology’s reliability for serious applications.

As Google and other tech giants continue embedding AI into products used by billions of people worldwide, the Oumi analysis highlights the tension between rapid AI deployment and ensuring information integrity. With only a small percentage of users verifying AI-generated content, even modest error rates can contribute to a substantial volume of misinformation circulating online.

The analysis raises important questions about responsibility and transparency as AI becomes increasingly integrated into how people access information online, potentially reshaping public understanding of facts on a scale unprecedented in human history.

10 Comments

  1. Jennifer Taylor

    I’m somewhat skeptical of the claims in this article. While the potential for AI-driven misinformation is concerning, the figures cited seem quite high. I’d want to see more detailed analysis and data before fully accepting these conclusions.

    • Elijah White

      I share your skepticism. The figures sound quite alarming, so it would be good to understand the methodology and sample size used in the study. Balanced, fact-based reporting is important on these complex AI-related issues.

  2. Elizabeth White

    This is a really important issue that goes beyond just the mining/commodities space. AI-powered information systems are becoming ubiquitous, and the risk of large-scale misinformation is a serious threat to society as a whole. Robust safeguards and transparency are crucial.

  3. Isabella V. Thomas

    This is a complex issue without easy solutions. While the scale of the problem is concerning, I’m curious to learn more about the specific types of misinformation being generated and how it might impact different industries and stakeholders. A nuanced, evidence-based approach will be key.

  4. As someone interested in the mining and commodities space, I’m curious to see how this issue impacts discussions and decision-making around things like mineral exploration, supply chains, and investment opportunities. Accurate, reliable information is so important in these areas.

    • Noah Hernandez

      Good point. Inaccurate AI-generated summaries could lead to poor decisions by investors, policymakers, and others in the mining/commodities industry. Transparency and accountability around these systems will be crucial going forward.

  5. Wow, this is quite concerning. AI-generated information being so widely distributed, even with a decent accuracy rate, could have significant real-world consequences. It’s crucial that users remain vigilant and verify the information they’re consuming, especially from AI sources.

    • Ava Martinez

      I agree, the scale of the problem is alarming. Misinformation can spread like wildfire online, and AI-powered systems amplify that risk. Rigorous testing and oversight of these technologies is clearly needed.

  6. Elizabeth Y. Johnson

    As an investor in mining and energy-related equities, I’m very interested in understanding the potential impacts of this AI misinformation problem. Could it lead to volatility or disruption in commodity markets and stock prices? I’ll be watching this issue closely.

    • Michael Smith

      Good point. Inaccurate AI-driven information could absolutely introduce more uncertainty and volatility into commodity and equity markets. Investors will need to be extra vigilant in verifying information from online sources.

A professional organisation dedicated to combating disinformation through cutting-edge research, advanced monitoring tools, and coordinated response strategies.

Company

Disinformation Commission LLC
30 N Gould ST STE R
Sheridan, WY 82801
USA

© 2026 Disinformation Commission LLC. All rights reserved.