Spanish parliament raid video exposed as AI-generated hoax
A viral video claiming to show Spanish soldiers storming the country’s Congress of Deputies and arresting lawmakers for corruption has been identified by fact-checkers as an artificial-intelligence fabrication.
The deceptive footage, which has circulated widely across social media platforms, depicts armed men bursting into a parliamentary-style chamber, pointing weapons at men in suits amid shouting in Spanish. Various captions accompanying the video claim it shows “Civil Guard and the Spanish Army” arresting “corrupt lawmakers” in Spain’s lower house.
An AAP FactCheck investigation revealed that the video was created with OpenAI’s Sora 2 video-generation tool. In earlier versions posted on TikTok, the distinctive Sora 2 watermark is clearly visible – a safeguard OpenAI applies to content the tool produces. In versions circulating on Facebook and other platforms, however, the watermark has been deliberately obscured with emoji overlays to hide the video’s artificial origin.
The original TikTok post featuring the video claimed to show events in Peru, not Spain, with a Spanish caption that translates to: “COURAGE AND HONOR!!! #worthy the armed forces that represent their people #what is needed in Peru to end the Fujimori-Montesinista dictatorship… currently in the Congress of the Republic.”
Multiple visual inconsistencies betray the video’s artificial creation. The chamber depicted matches neither Spain’s Congress of Deputies nor Peru’s Congress, as verified through comparison with Getty Images and Reuters photographs. The uniforms worn by the supposed soldiers do not match those of Spain’s Civil Guard, whose distinctive yellow emblem features a sword, a crown, and a bundle of rods with an axe.
Technical analysis reveals classic signs of AI generation, including deliberately blurry resolution to mask details, unnatural blending of figures (a man in a suit and an armed man appear to merge together at one point), and unusual visual artifacts. In the TikTok version, the upper balcony appears to “flow” unnaturally around the room, with a strange, scribbly effect visible on the balcony stairs.
Despite the dramatic scenario depicted, no credible media organisation has reported any such military intervention in the legislature of either Spain or Peru.
The spread of this fabricated content comes amid growing concerns about AI-generated disinformation. Major publications including The Guardian, Time magazine, and The New York Times have criticized tools like Sora 2 for their potential to accelerate the proliferation of misleading “AI slop” across social media platforms.
The incident highlights the growing difficulty of distinguishing authentic news footage from sophisticated AI fabrications. While OpenAI watermarks all Sora-generated videos, the deliberate concealment of those markers in this case demonstrates how easily such safeguards can be circumvented.
Social media users sharing the content have amplified its reach with inflammatory commentary, with one account adding: “TIME FOR THE UK and Australia TO FOLLOW SUIT….” – suggesting support for similar military actions against elected officials in other democratic nations.
The fabricated video serves as a stark reminder of the increasing sophistication of AI-generated content and the need for heightened digital literacy among social media users to identify and verify potentially misleading information before sharing.
10 Comments
This is a sobering reminder of the potential for AI to be misused to create convincing but false content. While the technology itself is neutral, the malicious intent behind obscuring the Sora 2 watermarks is worrying. Fact-checking is so crucial in the digital age.
This is a concerning development, but not entirely surprising given the rapid progress in AI-powered video synthesis. The deliberate effort to obscure the Sora 2 watermarks is particularly troubling. Fact-checking will be crucial in the fight against deepfakes and other synthetic media.
This is a good example of how AI-generated content can be misused to spread disinformation. Glad the original TikTok post was debunked, but it’s concerning how the watermarks were obscured in later versions to hide the synthetic origins.
Absolutely. The ease with which AI can create realistic-looking yet completely fabricated footage is a real challenge for media literacy and trust in information online.
The ability of AI to generate fake videos that can spread misinformation is a real challenge. I’m glad the fact-checkers were able to expose this particular hoax, but it highlights the need for increased media literacy and critical thinking around online content.
You’re absolutely right. As AI continues to advance, we’ll need to be ever more discerning about the veracity of visual media we encounter online.
Curious to know more about the specific AI technology used to generate this fake video. The fact that it was created with OpenAI’s Sora 2 is an interesting detail. I wonder what other capabilities this kind of video generation AI has.
Good point. The rapid advancement of AI in areas like video synthesis is certainly concerning from a disinformation standpoint. We’ll need to stay vigilant as these technologies become more sophisticated.
Interesting to see this AI-generated hoax video making the rounds. Fact-checking is so important these days to distinguish real footage from synthetic media. I wonder what the motivations were behind creating this deceptive content?
You’re right, it’s crucial to be vigilant about verifying the authenticity of online videos these days. Kudos to the fact-checkers for exposing this as an AI fabrication.