Social Media Misinformation Crisis Intensifies with AI, Reduced Fact-Checking

As news of the murders in Minneapolis broke over the weekend, a familiar pattern emerged online. Within hours of authorities announcing their search for suspect Vance Luther Boelter in connection with the killing of a state lawmaker and her husband, political figures were already making unverified claims about motives.

“It seems like he went after a Democratic legislator because she voted against Democrat Party policy,” claimed Donald Trump Jr. on social media, despite investigators not yet establishing any motive for the crimes.

This rapid spread of unverified information has become a hallmark of modern breaking news cycles, with similar patterns emerging during recent ICE protests in Los Angeles and President Trump’s military parade in Washington, D.C. In both cases, contradictory images and narratives flooded social media, making it nearly impossible for average users to determine what was actually happening.

“Trying to access verifiable and accurate information on the Internet at the moment is as difficult as I think it’s ever been,” said David Gilbert, a Wired reporter who has covered misinformation for over a decade. Gilbert noted that disinformation narratives have become predictable in their patterns, with the Los Angeles protests generating recycled footage from 2020’s George Floyd demonstrations, scenes from video games presented as real events, and conspiracy theories about paid protesters.

What makes today’s information landscape particularly treacherous is the emergence of artificial intelligence as a tool for creating and spreading misinformation. During recent events in Los Angeles, AI-generated videos purportedly showing National Guard members gained significant traction online despite obvious visual flaws.

“If you looked at the AI videos that people believed were real, there were issues with people’s faces, issues with the signs in the background, lots of very clear signals that it was AI,” Gilbert explained. “But people just don’t pay that much attention anymore. People want to be the first person to share it.”

This rush to be first, rather than accurate, creates an environment ripe for exploitation. Renee DiResta, author of “Invisible Rulers: The People Who Turn Lies into Reality,” points out that social media platforms inadvertently incentivize viral content over factual reporting.

“People begin to realize that during chaotic events, they could actually capitalize on that chaos,” DiResta said. “They could push out false and misleading stories… and they could actually monetize that.” Some platforms allow users to earn money from what appears to be breaking news content, regardless of its accuracy.

The problem is further complicated by how quickly misleading content jumps between platforms. DiResta described tracking a video that appeared to show ICE agents separating a mother from her child. The footage, which featured officers in NYPD uniforms, was likely old but was being presented as evidence of recent ICE raids. By the time DiResta could investigate its authenticity, approximately 6,000 people had already shared it.

Major social media companies are exacerbating the issue by rolling back fact-checking initiatives. Meta CEO Mark Zuckerberg recently announced plans to “get rid of fact-checkers and replace them with community notes similar to X.” This reduction in human verification comes as more users turn to AI chatbots like X’s Grok or ChatGPT to determine what’s real.

However, these AI systems often fail spectacularly at distinguishing fact from fiction. Gilbert cited a recent example where Grok incorrectly claimed that photos of National Guard troops posted by California Governor Gavin Newsom were actually from Afghanistan in 2021. In reality, the images were authentic and had been verified by both the San Francisco Chronicle and the Department of Defense.

“These AI chatbots, which have been so lauded as revolutionary, as cutting-edge from a tech perspective, still have huge issues in producing accurate, fact-checked, verified information,” Gilbert warned. “The reliance on these chatbots by a lot of people is a worrying escalation, because people are turning to them now because they don’t actually have human fact-checkers anymore at these companies that they can ask.”

The convergence of social media incentives, advanced AI, and reduced fact-checking creates a dangerous environment where consensus reality itself is at risk. As DiResta observed, “We’re in really chaotic times, and the ability to create very plausible unreality is only getting better, even as our trust in each other continues to decline.”

In a world where basic facts are increasingly contested, determining what is actually happening has become a challenge that threatens the foundation of informed democratic discourse.

