AI-Generated Content Floods Social Media, Raising Concerns About Digital Reality

Artificially generated content, dubbed “AI slop,” has become nearly unavoidable across social media platforms and the internet at large. From cute animals performing impossible feats to political deepfakes, this content has proliferated rapidly since 2023, changing how people interact with digital media.

Videos showing cats competing in Olympic diving, babies piloting jumbo jets, and rabbits bouncing on trampolines have garnered hundreds of millions of views—despite being completely fabricated. These AI-generated clips represent a new frontier of digital content that’s quick and cheap to produce but increasingly difficult to distinguish from reality.

“It’s the stuff that you see in your feed that you didn’t necessarily ask for that looks a little bit off, that was clearly generated quite quickly and quite cheaply,” explains Max Read, an author who studies internet phenomena. This content is “usually designed to be scrolled through for a small amount of engagement and then moved past.”

The phenomenon has even penetrated the highest levels of American politics. President Donald Trump has shared AI-generated videos purporting to be Fox News segments about nonexistent medical technologies, while fabricated images showing Trump as various figures—from a king to a Jedi Knight—circulate widely. Democrats aren't immune either, with California Governor Gavin Newsom using similar tactics to mock Trump's social media activity.

Aidan Walker, an internet culture researcher, notes: “Part of the president’s social media strategy is reflecting the world that his supporters and honestly most Americans live in. And that online world now involves AI slop.”

While low-quality digital content isn’t new—email spam has existed since the internet’s early days—what’s changed is the accessibility and sophistication of AI tools. Starting around 2023, free or inexpensive AI-generating platforms made mass production of convincing fake content possible for almost anyone.

“What has changed is who can do it, how fast they can do it. There’s zero barrier to entry,” says Hany Farid, a digital forensics expert at the University of California, Berkeley. “This is anybody with a keyboard and internet connection making any image, any video, anybody doing or saying anything and then distributing it to the world instantaneously through social media.”

The technological advancement has been staggering. Just a few years ago, AI attempts to animate a celebrity like Will Smith eating spaghetti produced glitchy, obviously fake results. Today’s tools create nearly indistinguishable simulations of reality.

Major AI companies like Google and OpenAI have implemented restrictions against creating certain harmful content, including sexually explicit material or content promoting violence. However, many creators have found ways to circumvent these safeguards.

Not all AI-generated content is considered “slop.” Some artists and musicians use these tools creatively. The distinction comes from intent and scale—AI slop typically aims to maximize views and generate advertising revenue.

“The kinds of people who make slop tend to be entrepreneurs and hustlers often in relatively low- or middle-income countries that have good knowledge of English and a lot of widespread internet connectivity,” Read explains. “You see it a lot in India, Pakistan, Kenya, Nigeria, Brazil. It’s pretty hardworking guys trying to make a buck off of a business proposition.”

Content creators have learned that material triggering strong emotional responses performs best, whether through sympathy, fear, or outrage. “The very things that you are most likely to click on is by design,” Farid notes. “You are being manipulated to steal your time, your attention, and so that these companies can deliver you ads.”

Tech companies are embracing this trend. Meta and OpenAI recently announced consumer-friendly tools to create and watch short-form AI videos, signaling their belief in the format’s staying power.

Critics warn about multiple consequences beyond the obvious confusion between real and fake content. The environmental impact is significant—AI requires enormous amounts of electricity and water. Perhaps more insidiously, these videos may further degrade meaningful human connection online.

“On your typical Instagram Reels session, you’re looking at 20 different videos and 15 of those videos now are AI slop videos,” Walker says. “That’s 15 chances that you’re missing to connect with a friend of yours, to learn something new, to find some joke that you can send to the group chat and forge a new bond with people over.”

As technology continues to advance, the line between authentic human creation and artificial content grows increasingly blurred, raising fundamental questions about digital reality and how we process information in the social media age.



© 2025 Disinformation Commission LLC. All rights reserved.