The Rise of AI Slop: How Artificial Content Is Reshaping Our Visual Reality
Two parallel visual channels now dominate our daily consumption. One presents authentic images of real-world events spanning politics, sports, news, and entertainment. The other delivers what many have labeled “AI slop”—low-quality content requiring minimal human input that floods our digital spaces.
This artificially generated material ranges from the banal—cartoonish celebrity images and fantasy landscapes—to more problematic content like idealized portrayals of women as virtual companions. The scale and diversity of this content are staggering, infiltrating social media timelines and messaging platforms like WhatsApp. The result isn't merely a blurring of reality, but a fundamental distortion of how we perceive our world.
A particularly concerning trend is the emergence of right-wing political fantasy content. Entire YouTube channels now feature AI-generated scenarios where Trump officials triumph over liberal opponents. Even official government accounts have joined this trend—the White House X account recently posted a Studio Ghibli-style image depicting a Dominican woman in tears during an Immigration and Customs Enforcement (ICE) arrest.
The political weaponization of AI has gone global. Chinese-generated videos mocking overweight American factory workers following tariff announcements prompted an official White House response last week. The spokesperson defended American workers against what she described as AI-created content that “does not see the potential of the American worker.”
While propaganda isn’t new, what distinguishes this phenomenon is its democratization and ubiquity. Unlike traditional propaganda, AI-generated content isn’t constrained by physical reality or the need for actual human participants, allowing for an infinite array of fictional scenarios tailored to specific political agendas.
The distribution of this content through messaging platforms like WhatsApp creates additional verification challenges. When users receive AI-generated content from trusted contacts, the material inherits the sender’s perceived reliability. One expert describes struggling with an otherwise tech-savvy elderly relative who consistently believes AI-generated content about Sudan’s war circulated through WhatsApp. The content’s visual verisimilitude, combined with trusted distribution channels, makes debunking extraordinarily difficult.
Professor Roland Meyer, a media and visual culture scholar, has identified a disturbing trend of “AI-generated images of white, blond families presented by neofascist online accounts as models of a desirable future.” He attributes this pattern not just to current political currents but to the structurally conservative nature of generative AI itself. Since these systems train on pre-existing data, which research shows contains inherent biases against ethnic diversity, progressive gender roles, and non-traditional sexual orientations, they tend to reinforce and concentrate conservative social norms in their outputs.
The same pattern appears in “trad wife” content—idealized portrayals of submissive homemakers that create a retrograde fantasy world. Many social media platforms now showcase what amounts to “clothed nonsexual pornography,” with AI-generated images of women described as “comely, fertile and submissive.” These portrayals package white supremacy, autocracy, and natural hierarchies of race and gender as nostalgic yearnings for an imagined past, leading some critics to describe AI as “the new aesthetic of fascism.”
Most AI slop, however, isn’t driven by coherent ideological agendas but by engagement economics. Exaggerated content drives shares and comments, creating monetization opportunities for creators. Journalist Max Read discovered that Facebook AI content isn’t considered “junk” by the platform but rather “precisely what the company wants: highly engaging content.” For social media giants, engagement is paramount—the cheaper the content production and the less human labor involved, the better.
The cumulative effect of constant exposure to AI imagery—from nonsensical to soothing to ideological—fundamentally alters how we process visual information. Real-world atrocities like deportations, detentions, and war casualties enter the same content stream as material that defies physical and moral laws, creating profound disorientation. The result is a paradoxical state where everything feels simultaneously too real and entirely unreal.
This visual confusion, combined with the attention economy’s inherent tendency toward trivialization, creates what one critic calls “a grand circus of excess.” Even serious content is presented as entertainment or visual elevator music. Political scandals generate AI renderings of officials as giant babies; stress triggers algorithmically recommended images of snowy cabins with roaring fires.
As algorithms rapidly adapt to user behavior, media consumption becomes increasingly difficult to curate, immersing users in subjective realities rather than objective truth. The resulting disjuncture blunts the sense of urgency that our crisis-filled world should inspire. We risk sleepwalking into disaster—not through ignorance, but through the paralyzing effect of experiencing events through this perverse ecosystem, where everything becomes just another component of an overwhelming visual spectacle.