AI-Generated “Slop” Floods Social Media, Threatening Information Quality
A viral Reddit post about marital infidelity recently garnered over 6,200 upvotes and 900 comments, landing on the platform’s front page. There was just one problem: it was almost certainly written by artificial intelligence.
The post contained telltale AI markers: stock phrases, excessive quotation marks, and an unrealistic scenario designed to generate outrage rather than reflect a genuine dilemma. While moderators eventually removed it, this represents just one example of what experts now call “AI slop”—cheap, low-quality AI-generated content flooding online platforms.
The scale of this problem is substantial. Recent estimates suggest more than half of longer English-language posts on LinkedIn are AI-generated. Adam Walkiewicz, a LinkedIn product director, told Wired the platform has “robust defenses in place to proactively identify low-quality and exact or near-exact duplicate content,” but the problem persists across social media.
AI-generated news sites are proliferating rapidly, while platforms like Facebook have seen an influx of AI-created images. Users may recognize viral examples like “shrimp Jesus” appearing in their feeds alongside other bizarre, attention-grabbing AI creations.
The economics driving this phenomenon are straightforward. According to a 2023 report by the NATO StratCom Center of Excellence, just €10 (about £8) can buy tens of thousands of fake views and likes, plus hundreds of AI-generated comments across major social media platforms.
While some AI content appears harmless, a 2024 study found approximately 25% of all internet traffic consists of “bad bots” designed to spread disinformation, scalp event tickets, or steal personal data. These bots are becoming increasingly sophisticated at mimicking human behavior.
Technology writer Cory Doctorow has coined the term “enshittification” to describe how online services gradually deteriorate as tech companies prioritize profits over user experience. AI-generated content represents a significant aspect of this decline.
From inflammatory Reddit posts to emotional cat videos, this content is engineered to capture attention, making it lucrative for both content creators and platforms. Known as “engagement bait,” these tactics generate likes, comments and shares regardless of quality.
One study discovered how engagement bait—like images of babies wrapped in cabbage—gets recommended to users even when they don’t follow any AI-content accounts. These pages often link to low-quality sources and may be building follower bases to sell accounts later for profit.
Meta announced in April it was cracking down on “spammy” content designed to manipulate Facebook’s algorithm, though it didn’t specifically mention AI-generated material. Ironically, Meta itself has used AI-generated profiles on Facebook, though it later removed some of these accounts.
The implications for democracy and political discourse are particularly concerning. Research shows AI can efficiently create election misinformation that’s indistinguishable from human-written content. Ahead of the 2024 US presidential election, researchers identified large influence campaigns advocating for partisan issues and attacking political opponents across the spectrum.
Even academic researchers have been implicated. Scientists at the University of Zurich recently faced backlash for deploying AI-powered bots to post on Reddit without disclosure, prompting the platform to consider legal action.
Political operatives from authoritarian countries including Russia, China, and Iran invest heavily in AI-driven operations targeting elections worldwide. While the effectiveness of such campaigns remains debated, they’re becoming increasingly sophisticated.
Detecting malign AI content has proven exceptionally difficult for both humans and automated systems. Computer scientists recently identified a network of approximately 1,100 fake accounts on X (formerly Twitter) posting machine-generated content and interacting with each other—yet even specialized detection tools failed to identify these accounts as fake.
As AI capabilities rapidly improve, potential solutions include better labeling of AI-generated content, improved bot detection, and disclosure regulations. Some research shows promise in helping people identify deepfakes, but these efforts remain in early stages.
The consequences extend beyond politics—the sheer volume of AI slop makes accessing genuine news and human-generated content increasingly challenging. Some users are abandoning traditional platforms for invite-only online communities, potentially leading to further fragmentation of public discourse and increased polarization.
Perhaps most concerning is the cyclical nature of the problem: as AI models train on an “enshittified” internet, they themselves produce lower-quality outputs, potentially creating a downward spiral in information quality across the digital landscape.