In early October 2025, a video claiming to show an Immigration and Customs Enforcement (ICE) agent accidentally pepper-spraying himself in the face spread widely across social media platforms, drawing thousands of reactions from users amid heightened tensions over immigration enforcement.
The 10-second clip, which appeared to capture an embarrassing moment for federal law enforcement, showed a uniformed agent discharging pepper spray near protesters before the aerosol apparently blew back into his face. In the video, a voice off-camera could be heard saying, “Watch the wind, man,” followed by laughter and the comment, “Karma’s quick, dude.”
The timing of the video’s circulation coincided with the Trump administration’s controversial efforts to deploy National Guard troops to major cities including Portland and Chicago, as part of intensified immigration enforcement operations. These deployments have faced significant legal challenges, with an Illinois judge recently issuing a two-week block on National Guard presence in Chicago.
However, closer examination reveals the video is entirely artificial. It was created with OpenAI’s Sora 2, which was released just days before the clip began circulating. This version of the text-to-video AI tool includes synchronized audio for the first time, enabling creators to generate realistic-looking videos with convincing sound.
The fake nature of the content is evident through several telltale signs. Most notably, the video bears a watermark from a TikTok account with “AI” in its handle, consistent with OpenAI’s stated policy that all outputs from its platform “carry a visible watermark” to identify AI-generated content.
Technical inconsistencies further expose the video’s artificial origins. The pepper spray appears to emanate from both of the agent’s hands, despite the canister being held only in his right hand. When the agent raises his left hand, it’s clearly empty. Additionally, the agent’s facial movements don’t synchronize naturally with his dialogue when he responds, “I’m good. I’m good.”
The background elements also betray the video’s AI creation. Nonsensical text reading “OF DEI LBNM EPARTMENT” appears on a wall behind the agent—a common flaw in AI-generated media, as these systems often struggle to produce coherent text within visual outputs.
This fake ICE video is part of a growing wave of AI-generated content flooding social media platforms since the release of sophisticated tools like Sora 2. Media observers use the term “AI slop” to describe this flood of convincing but fabricated material, which can rapidly spread misinformation, particularly around politically charged topics like immigration enforcement.
The video emerged amid real tensions between federal immigration authorities and communities across the country. In a separate, authenticated incident earlier this month, federal immigration agents were documented using pepper pellets against a Chicago pastor during enforcement operations, highlighting the genuine concerns that make fabricated content particularly problematic.
The circulation of such convincing fake videos poses significant challenges for media consumers. As AI tools become increasingly sophisticated, the line between authentic documentation and fabricated content continues to blur, requiring heightened vigilance from viewers.
This incident follows a pattern of AI-generated content targeting emotionally charged political issues. Just days before this ICE video began circulating, another fake video created with Sora showed a protester being pepper-sprayed by a soldier who referred to himself as “Sergeant Pepper”—an equally fabricated scenario designed to provoke reactions.
As the 2025 election season intensifies and immigration remains a divisive political issue, media literacy experts warn that distinguishing between genuine documentation of law enforcement actions and AI-generated provocations will become increasingly crucial for informed public discourse.