NewsGuard Launches AI Detection Tool to Combat Misinformation Websites

NewsGuard has unveiled a new AI content farm detection tool aimed at stemming the flood of artificially generated misinformation in the digital news ecosystem.

Launched Thursday in collaboration with AI detection startup Pangram Labs, the tool identifies websites that primarily publish content created by large language models like ChatGPT, Claude, or Gemini. The system represents a technological countermeasure to an increasingly sophisticated problem that threatens to undermine information integrity online.

“If we can’t detect AI content, then every communication space is going to be flooded with inauthentic content that’s cheap to produce and difficult to impossible to differentiate from something authentic,” said Max Spero, Pangram’s CEO.

The detection process begins with Pangram’s proprietary AI models scanning entire domains for machine-generated content. Sites flagged by the algorithm undergo manual review by NewsGuard analysts, who verify the prevalence of AI content, check for transparency disclosures, and contact site owners for comment to prevent false positives.

NewsGuard categorizes websites as AI content farms based on three criteria: a substantial portion of AI-generated content as determined by Pangram’s technology; lack of disclosure about AI authorship; and presentation that could mislead average users into believing the content was human-created.
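The three-part test described above can be sketched in code. This is only an illustration of the logic as reported; the threshold value, field names, and `SiteReview` structure are hypothetical, since NewsGuard has not published its exact criteria or how Pangram's scores are computed.

```python
from dataclasses import dataclass

# Hypothetical threshold -- the article says only "a substantial portion"
# of content must be AI-generated, without giving a number.
AI_SHARE_THRESHOLD = 0.5

@dataclass
class SiteReview:
    domain: str
    ai_content_share: float  # fraction of articles Pangram's models flag as AI-generated
    discloses_ai_use: bool   # site tells readers its content is machine-written
    passes_as_human: bool    # presentation could mislead readers into assuming human authorship

def is_ai_content_farm(site: SiteReview) -> bool:
    """Apply the three criteria: substantial AI content, no disclosure,
    and a presentation that could pass as human-created."""
    return (
        site.ai_content_share >= AI_SHARE_THRESHOLD
        and not site.discloses_ai_use
        and site.passes_as_human
    )
```

A site that openly labels its content as AI-generated would fail the second criterion and not be classified as a content farm, which matches the article's emphasis on transparency disclosures during manual review.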

The system, which has undergone more than six months of testing, has already identified approximately 3,000 AI content farms—more than double what NewsGuard previously found using primarily manual techniques. Many of these sites operate under generic news-sounding names like “Times Business News” or “Business Post” while spreading misinformation about brands, political figures, and public health matters.

In one documented case, a site called Citizen Watch Report falsely claimed that Senators Lindsey Graham (R-SC) and Richard Blumenthal (D-CT) spent $814,000 on hotels in Ukraine. The fabricated story gained traction on social media and was amplified by Russian state media before being debunked.

Another example involved a site called News 24 falsely reporting that Coca-Cola threatened to withdraw its Super Bowl sponsorship over Bad Bunny’s halftime show performance—despite Coca-Cola not being a Super Bowl sponsor at all. The page displayed advertisements from major brands including AT&T, YouTube, Expedia, and Skechers.

Many of these websites fall into the category of “made-for-advertising” (MFA) sites—low-quality content operations designed solely to generate advertising revenue through traffic arbitrage.

“It’s a way to produce low-quality content for really low cost and generate some advertising revenue,” explained Matt Skibinski, NewsGuard’s chief operating officer. “Also, bad actors who want to spread false information have figured out that they can weaponize this technology and churn out, at a really high volume, false and misleading content and still make a quick buck because they run ads on those pages, too.”

The scale of the problem has grown dramatically in recent years. According to Pangram, between 300 and 500 new AI content farms emerge monthly. During a two-month observation period, NewsGuard identified 141 blue-chip brands inadvertently advertising on these sites.

NewsGuard hopes its detection tool will help both advertisers and consumers avoid AI-generated misinformation. The company will allow advertisers to license its data stream directly or through their agencies and has integrated with popular demand-side platform The Trade Desk to enable pre-bid blocking of these sites. The company is also considering adding the tool to its browser extension to help everyday users identify AI-generated content.

Pangram Labs, founded in 2023 by former engineers from Google and Tesla, has already received recognition for its technology’s effectiveness. A report in Nature highlighted Pangram’s capability to identify LLM-generated text in research papers and peer reviews. Several academic institutions have adopted the technology to combat undisclosed AI content in academic settings.

As AI-generated content proliferates, Spero anticipates growing demand for detection solutions: “There’s just going to be so much spam and bots and slop online that it’s going to be pretty unusable without technology to help you wade through the slop.”
