A digitally fabricated video depicting Immigration and Customs Enforcement (ICE) agents pursuing a man dressed as a Viking through city streets has gone viral on social media platforms, prompting officials to issue clarifications about its authenticity.
The video, which began circulating widely last week, shows what appears to be several uniformed ICE officers chasing a bearded man wearing traditional Norse attire complete with a horned helmet through urban neighborhoods. The clip has amassed millions of views across platforms including TikTok, Twitter, and Facebook, where many users shared it as genuine footage.
Digital forensics experts who analyzed the video identified multiple telltale signs of artificial intelligence generation, including inconsistent lighting, unnatural movements, and periodic visual glitches characteristic of current AI video synthesis technology.
“This is a textbook example of increasingly sophisticated AI-generated content designed to provoke emotional responses,” explained Dr. Melissa Tanner, director of the Digital Media Authentication Center at Northwestern University. “The creator incorporated specific details like authentic-looking ICE uniforms and recognizable urban settings to enhance perceived legitimacy.”
ICE spokesperson Carlos Reyes issued a statement yesterday confirming the video’s falsity: “This footage does not depict any actual ICE operation. It appears to be entirely computer-generated and does not represent the agency’s activities or protocols.”
The video emerged amid growing concern about deepfakes and AI-generated content influencing public discourse on immigration enforcement, a politically sensitive topic in the United States. Immigration policy experts note that such fabricated content can complicate public understanding of actual enforcement practices.
“When misinformation like this spreads virally, it creates confusion about legitimate immigration enforcement actions,” said Rebecca Alvarez, immigration policy analyst at the Brookings Institution. “These fabrications make it more difficult for the public to distinguish fact from fiction regarding real immigration operations.”
Social media platforms have responded in varying ways. Meta, the parent company of Facebook and Instagram, has added warning labels to shared versions of the clip. Twitter has removed some instances while applying content notices to others. TikTok representatives said they are reviewing the content against the platform's synthetic media policies.
This incident highlights the growing challenge of AI-generated content that mimics real-world scenarios with increasing sophistication. According to a recent Pew Research Center report, approximately 67% of Americans express concern about their ability to distinguish authentic videos from AI-generated content, with that figure rising sharply over the past two years.
“The technology for creating convincing fake videos has outpaced most people’s ability to identify them,” said Dr. James Wilson, professor of media studies at Georgetown University. “Just a year ago, artifacts in AI-generated videos were much more obvious to the untrained eye. That gap is closing rapidly.”
Digital literacy advocates are using this viral incident as an educational opportunity. The Media Literacy Project has created a guide specifically addressing how to identify potential AI-generated videos, pointing to inconsistencies in physics, unnatural lighting transitions, and facial anomalies as potential indicators.
Law enforcement agencies have expressed growing concern about the potential for such fabricated content to damage public trust or incite real-world confrontations. Several police departments have issued advisories urging the public to verify information through official channels before reacting to dramatic videos depicting purported law enforcement activities.
The creator of the video remains unknown, though digital forensics investigators are attempting to trace its origin. Creating and distributing such content may not violate any laws, as legal frameworks governing AI-generated media remain underdeveloped in many jurisdictions.
As detection technology struggles to keep pace with generation capabilities, media experts say cases like this one are likely to become increasingly common, demanding stronger critical media-consumption skills from the general public.
“This won’t be the last viral AI-generated video that causes confusion,” noted Alvarez. “The public needs to approach dramatic footage with appropriate skepticism, especially when it aligns perfectly with existing political narratives or seems designed to provoke strong emotional responses.”