AI-Generated Videos Flood Social Media, Raising Alarm Over Disinformation

A surge in deceptive AI-generated videos across major social media platforms is raising serious concerns about public manipulation and the spread of misinformation, as technological advancements make fake content increasingly difficult to identify.

In the two months since OpenAI released its video generation tool Sora, experts tracking online disinformation have documented a flood of synthetic videos across TikTok, X, YouTube, Facebook, and Instagram. While many are harmless memes or fabricated footage of babies and pets, others are deliberately designed to inflame political tensions and spread misinformation.

One particularly troubling example involved a fake interview about food stamps that gained traction during a U.S. government shutdown, a time when actual recipients of the Supplemental Nutrition Assistance Program (SNAP) were struggling to feed their families. Fox News initially published an article featuring a similar video before later removing it from their website. When contacted, a Fox spokeswoman confirmed the removal but declined to elaborate further.

“Could they do better in content moderation for mis- and disinformation? Yes, they’re clearly not doing that,” said Sam Gregory, executive director of Witness, a human rights organization focused on technology threats. “Could they do better in proactively looking for AI-generated information and labeling it themselves? The answer is yes, as well.”

The problem extends beyond domestic politics. Russian disinformation campaigns have already deployed Sora videos on TikTok and X to exploit corruption scandals in Ukrainian leadership and to create fabricated footage of frontline soldiers weeping. In India, AI videos denigrating Muslims have circulated to inflame religious tensions, including one purportedly showing a street vendor preparing biryani with gutter water.

Social media companies have policies requiring disclosure of AI use and prohibiting deceptive content, but these safeguards have proven woefully inadequate against tools like Sora. While companies like OpenAI and Google embed both visible watermarks and invisible metadata in their AI-generated videos, these protective measures can be easily circumvented.

“There’s kind of this individual vigilance model,” Gregory noted. “That doesn’t work if your whole timeline is stuff that you have to apply closer vigilance to. It bears no resemblance to how we interact with our things.”

Some platforms have begun taking additional steps. TikTok recently announced tighter rules around AI disclosure and promised new tools allowing users to control how much synthetic content appears in their feeds. YouTube uses Sora’s invisible watermark to append labels indicating that content is “altered or synthetic.”

“Viewers increasingly want more transparency about whether the content they’re seeing is altered or synthetic,” said Jack Malon, a YouTube spokesman.

However, these labels often appear after thousands or even millions of viewers have already seen the videos—if they appear at all. The New York Times found dozens of examples of Sora videos on YouTube without the automated label. Several companies have emerged offering services to remove AI watermarks, and even simple actions like editing or resharing videos can strip away the embedded metadata indicating AI origin.

User response data highlights the problem’s scale. An analysis by The Times of more than 3,000 comments on the fake food stamps TikTok video revealed nearly two-thirds of users responded as if the content were authentic, despite the presence of an AI watermark that many viewers apparently missed while scrolling on mobile devices.

OpenAI defended its position in a statement, saying it prohibits deceptive uses of Sora and takes action against policy violators. The company emphasized that Sora is just one among dozens of similar tools, many of which employ no safeguards whatsoever.

“AI-generated videos are created and shared across many different tools, so addressing deceptive content requires an ecosystem-wide effort,” the company said.

Meta, which owns Facebook and Instagram, acknowledged the challenge, with a spokesperson noting it isn’t always possible to label every AI-generated video, particularly as the technology rapidly evolves. The company claims to be working on improving its detection systems.

Critics suggest platforms lack financial motivation to restrict these videos as long as they drive engagement. “In the long term, once 90 percent of the traffic for the content in your platform becomes AI, it begs some questions about the quality of the platform and the content,” said Alon Yamin, CEO of Copyleaks, an AI content detection company.

Former State Department officials James P. Rubin and Darjan Vujica warned in a recent Foreign Affairs article that AI advancements are intensifying efforts to undermine democratic countries and divide societies. “They are making things, and will continue to make things, much worse,” Vujica said. “The barrier to use deepfakes as part of disinformation has collapsed, and once disinformation is spread, it’s hard to correct the record.”

8 Comments

  1. I’m curious to hear more about the specific examples of AI-generated videos that have already spread misinformation. What kind of content and narratives are they pushing, and how effectively are the platforms responding?

• Linda Johnson

      That’s a good question. The article mentions a troubling example of a fake interview about food stamps during a government shutdown. It’s worrying to see how quickly this type of content can gain traction and be amplified, even on reputable news sites.

  2. William Taylor

    This is a concerning trend. As AI technology advances, we need to be vigilant about the potential for deceptive and manipulative content online. Fact-checking and media literacy will be crucial to combat the spread of misinformation.

  3. William G. Garcia

    This is a complex issue without any easy solutions. While the platforms need to improve their content moderation, users also have a responsibility to be critical consumers of online information and to fact-check claims before sharing. Building digital literacy is key.

  4. The example of the fake food stamps interview is particularly troubling, as it could have real-world consequences for vulnerable populations. Fact-checking and transparency around the source and veracity of online content will be crucial going forward.

• Oliver Johnson

      Absolutely. Platforms need to take a more proactive and transparent approach to addressing AI-generated misinformation. Partnering with fact-checkers and being upfront about the limitations of their content moderation systems will be important steps.

  5. I appreciate that the article is highlighting this important challenge. As AI capabilities continue to advance, the potential for harm from synthetic media will only grow. Ongoing vigilance and proactive measures will be essential to stay ahead of these threats.

  6. This is an important issue that deserves serious attention. The flood of synthetic media across social platforms is a major challenge that will require a multi-faceted response from tech companies, policymakers, and the public. Vigilance and digital literacy will be key.
