AI-Generated Anti-ICE Videos Flood Social Media Platforms, Blurring Reality and Fantasy
A wave of AI-generated videos depicting confrontations between people of color and Immigration and Customs Enforcement (ICE) agents has swept across Meta platforms, amassing millions of views and igniting debate about the boundaries between digital activism and harmful misinformation.
The phenomenon exploded following the January 7 killing of Renee Nicole Good, an unarmed 37-year-old mother of three who was shot during a federal operation in Minneapolis. Since then, thousands of fabricated videos showing resistance against ICE agents have proliferated on Instagram and Facebook, creating what experts describe as “digital fan fiction” of accountability.
One account alone, operated by someone using the name Mike Wayne, has uploaded more than 1,000 such clips according to a Wired investigation. The videos portray scenarios ranging from a principal wielding a baseball bat to block agents from entering her school to drag queens chasing officers through the streets of Saint Paul. One of Wayne’s most viral creations—showing ICE agents brawling with white tailgaters at a sporting event—garnered 11 million views within just 72 hours.
“The oppressed have always built what they could not find,” filmmaker Willonious Hatcher told Wired, describing the AI videos as “diagnosis” rather than delusion. “A people doesn’t dream this loudly of fighting back unless they’ve learned that the systems meant to protect them will not.”
However, experts warn that the catharsis these videos provide comes with serious consequences. Nicholas Arter, founder of AI creative consultancy “AI for the Culture,” notes a troubling dynamic: while some creators express genuine political resistance, others chase virality or monetization by exploiting emotionally charged content.
The timing is particularly problematic. Beyond Good’s killing, ICE also shot Alex Pretti, a 37-year-old Veterans Affairs ICU nurse who was recording on his phone when agents gunned him down. Video evidence has been crucial in disputing government narratives about both deaths—Good’s partner captured footage seconds before she was killed.
Joshua Tucker, codirector of New York University’s Center for Social Media, AI, and Politics, worries about undermining trust in authentic footage. “There’s concern that this could contribute to a general perception that you just can’t trust videos when you see them anymore,” Tucker explained, making it “harder to convince people that things which are actually real are, in fact, real.”
This fear materialized Wednesday when The News Movement posted authentic footage of Pretti confronting ICE officers on January 13, more than a week before his death. Commenters on Instagram and YouTube immediately dismissed the footage as AI-generated, forcing Pretti’s family to confirm its authenticity to the New York Times.
The erosion of trust runs in both directions. The Trump administration has deployed similar AI manipulations for political purposes—last week, the White House posted an altered photo of civil rights attorney Nekima Levy Armstrong after her arrest at a peaceful demonstration, labeling her a “far-left agitator.”
The scale of AI’s infiltration into online discourse is staggering. A 2024 Graphite study found that more than 50 percent of new web articles are now AI-generated, while a SurveyMonkey analysis shows 73 percent of marketers use AI for personalized content.
Arter highlights another risk: most videos depict people of color confronting authority figures. At a time when protesters face “domestic terrorist” labels from officials, these AI creations could provide justification for further crackdowns. “The real danger lies not just in the content itself, but in how it’s interpreted and acted upon,” Arter warns.
Comment sections under these videos reveal a fundamental tension. “This is fake. ICE can’t run,” wrote one viewer under a clip of officers fleeing drag queens. Another responded: “Love it. Don’t care if it’s ‘fake,’ want to see it inspire.” The division illustrates how some viewers crave emotional release, while others worry about where inspiration ends and dangerous delusion begins.
These AI videos exist in a liminal space between art, activism, and algorithmically optimized engagement. They offer what Arter calls “revisionist justice”—imagining a digital multiverse where federal agents are held accountable. But in a country where authentic video evidence of law enforcement violence repeatedly fails to produce meaningful change, the popularity of these fantasy scenarios reveals a profound breakdown in institutional trust.
As both resistance movements and government entities deploy AI manipulations, objective reality becomes increasingly difficult to discern—leaving Americans unable to distinguish authentic documentation from algorithmically generated wish fulfillment at precisely the moment when that distinction matters most.