Footage showing a riot police officer smiling while pepper-spraying a female protester has been confirmed to be an AI-generated fabrication, not authentic documentation of police misconduct at a demonstration.
Digital forensics experts identified the video as a creation of OpenAI’s Sora, a text-to-video artificial intelligence system that produces hyper-realistic footage from written prompts. The video bears multiple telltale watermarks reading “SORAREELSS,” clearly identifying it as synthetic content.
The deceptive clip began circulating on social media platform X on November 2, where it was shared without proper context regarding its artificial nature. The account that posted the video, @gyaigyimii, presented it as if it were genuine footage, potentially misleading thousands of viewers.
Fact-checkers traced the content to its original source—an Instagram account explicitly dedicated to creating fantastical AI-generated scenarios. The “sorareelss” Instagram profile makes no attempt to hide the artificial nature of its content, prominently displaying “Sora” in its profile logo and consistently labeling all videos as AI creations.
The account specializes in deliberately implausible scenarios, including another widely shared clip in which an American bison appears to tear off a young woman’s cutoff jeans, stretching them into an impossibly long bolt of fabric. These outlandish clips serve as creative showcases for Sora’s capabilities rather than attempts to deceive.
Media literacy experts have expressed growing concern about the increasing sophistication of AI-generated media and its potential to spread misinformation, particularly in politically charged contexts. The realistic rendering of a police officer appearing to take pleasure in using force against a civilian demonstrates how such technology could be weaponized to inflame tensions around law enforcement conduct and protests.
“The danger with this type of content is that it plays into existing narratives and biases,” said Dr. Emma Harding, professor of digital ethics at Northwestern University, who was not directly involved in analyzing the video. “People who already believe police are abusive might accept such footage at face value, while those who support law enforcement might dismiss even genuine misconduct as ‘probably fake.’”
OpenAI, the developer of Sora, has implemented watermarking technology specifically to help identify AI-generated content, but social media platforms continue to struggle with effectively flagging such material before it reaches wide audiences. The company has also established usage policies prohibiting the creation of deceptive content that could incite violence or harassment.
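Readers can sometimes surface such provenance data themselves. The sketch below is a minimal illustration, assuming Python and the external `exiftool` command-line utility are installed; the file name is a hypothetical placeholder, and platforms frequently re-encode uploads, which can strip embedded provenance metadata such as C2PA Content Credentials.

```python
# Minimal sketch: dump a media file's embedded metadata using the
# external `exiftool` CLI. If the file still carries provenance tags
# (e.g. C2PA / Content Credentials fields), they may appear in this
# output; re-encoded social media copies often have them stripped.
import subprocess

def dump_metadata(path: str) -> str:
    # exiftool prints tag/value pairs for most image and video formats
    result = subprocess.run(
        ["exiftool", path],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

# "suspect_video.mp4" is a hypothetical file name used for illustration
print(dump_metadata("suspect_video.mp4"))
```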
This incident highlights the growing challenge of content authentication in the digital age. As text-to-video AI tools become more accessible to the general public, distinguishing between authentic footage and synthetic creations becomes increasingly difficult for average viewers.
Media literacy advocates recommend that social media users exercise heightened skepticism toward emotionally charged or politically divisive videos, especially those showing extreme behavior. Key verification steps include checking for watermarks, tracing content to original sources, and considering whether the depicted events seem implausibly dramatic or perfectly aligned with partisan narratives.
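One of those steps, tracing a clip back to its original source, can be partially automated with perceptual hashing, the same family of techniques behind reverse image search. The following is a minimal sketch, assuming the third-party Pillow and ImageHash Python packages; the frame file names and the match threshold are illustrative assumptions, not any fact-checking organization’s actual pipeline.

```python
# Minimal sketch: compare a frame grabbed from a suspect video against
# a frame from a candidate original using perceptual hashing.
from PIL import Image
import imagehash

# Hypothetical file names: one frame exported from each video
suspect = imagehash.phash(Image.open("suspect_frame.png"))
candidate = imagehash.phash(Image.open("candidate_original_frame.png"))

# Subtracting two hashes yields the Hamming distance between them;
# small distances suggest near-duplicate images (resized, re-encoded,
# or watermarked copies of the same content).
distance = suspect - candidate
print(f"Perceptual-hash distance: {distance}")

if distance <= 8:  # threshold chosen for illustration only
    print("Frames are likely near-duplicates of the same source.")
else:
    print("No strong match between the two frames.")
```

In practice, automated comparison like this only narrows the search; fact-checkers still confirm provenance manually, as the trace back to the clearly labeled sorareelss Instagram account demonstrates.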
The emergence of increasingly convincing AI-generated media has prompted calls for stronger platform policies regarding synthetic content and greater investment in detection technologies. Several journalism and fact-checking organizations have established specialized units dedicated specifically to identifying and debunking AI-generated misinformation.
“We’re entering an era where seeing isn’t necessarily believing,” noted digital forensics analyst Marcus Chen. “The public needs both technical solutions and educational resources to navigate this new reality where visually convincing evidence can be created from imagination.”