In a stark example of how artificial intelligence can be used to spread misinformation, a viral video purportedly showing dead bodies moving on their own in a morgue has been debunked as a sophisticated AI fabrication.
The video, which circulated widely after being posted on X (formerly Twitter) by an account named “@MemeKingc” on November 16, 2025, featured four separate scenes of apparent corpses moving their legs while female attendants sat nearby completing paperwork. The post was captioned simply: “Bodies Move on Their Own?!”
Digital forensic analysis of the footage revealed numerous telltale signs of AI generation, with expert examination identifying visual inconsistencies typical of content produced by advanced video-generation models.
Among the most obvious flaws were distorted timestamps containing illegible characters, contradictory wall clocks showing different times within the same room, and architectural anomalies including inconsistent door designs and irregularly shaped freezer shelves. The morgue equipment displayed physical impossibilities, such as gurneys with mismatched legs – some bent, others straight – and inconsistent lengths.
Other revealing glitches included a sink lacking a drain opening, a staff member apparently writing on a floating piece of paper not supported by any surface, and a corpse’s foot that bizarrely developed an additional tag as it moved.
Hive, a specialized AI detection platform, analyzed the footage and delivered a definitive assessment, assigning a 99.9% likelihood that the video was “AI-generated or deepfake content.” The analysis specifically pointed to OpenAI’s Sora, an advanced text-to-video generation model, as the most likely tool used to create the fabricated scenes.
This incident highlights growing concerns among digital security experts about the increasing sophistication of AI-generated media. As video generation technology continues to advance, the ability to create convincing fake footage poses significant challenges for information integrity online.
The morgue video represents a particularly troubling category of misinformation that plays on common fears and superstitions. Content suggesting paranormal or unexplained phenomena can spread rapidly through social media platforms, often outpacing fact-checking efforts.
Digital literacy advocates emphasize the importance of scrutinizing unusual video content for inconsistencies like those present in this footage. Common indicators of AI generation include visual glitches in backgrounds, impossible physics, distorted text elements, and anatomical irregularities that may not be immediately obvious to casual viewers.
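One of these heuristics, temporal inconsistency (objects or text that abruptly appear, vanish, or morph between frames, like the extra toe tag in this video), can be sketched in a few lines of Python. This is a purely illustrative toy under stated assumptions, not a real deepfake detector; dedicated services like Hive rely on trained classifiers. The `flag_inconsistent_frames` function, the synthetic frames, and the 10x-median threshold are all assumptions invented for the demo.

```python
import numpy as np

def flag_inconsistent_frames(frames, ratio=10.0):
    """Flag frames whose pixel-level change from the previous frame far
    exceeds the clip's typical frame-to-frame change. A crude proxy for
    sudden AI-video glitches; NOT a production detector."""
    # Mean absolute pixel difference for each consecutive frame pair.
    diffs = [np.mean(np.abs(frames[i].astype(float) - frames[i - 1].astype(float)))
             for i in range(1, len(frames))]
    baseline = np.median(diffs)
    # A transition is anomalous when it dwarfs the median change;
    # both the glitch frame and the return to normal get flagged.
    return [i + 1 for i, d in enumerate(diffs) if d > ratio * max(baseline, 1e-6)]

# Synthetic demo: a mostly static 8-frame grayscale "clip" with one abrupt glitch.
rng = np.random.default_rng(0)
frames = [np.full((32, 32), 100, dtype=np.uint8)
          + rng.integers(0, 3, (32, 32), dtype=np.uint8)  # mild sensor-style noise
          for _ in range(8)]
frames[5] = np.full((32, 32), 200, dtype=np.uint8)  # sudden jump, like a morphing artifact
print(flag_inconsistent_frames(frames))  # → [5, 6]
```

Real footage would of course require decoding video into frame arrays first, and genuine camera motion or scene cuts would trigger the same flag, which is why such heuristics only point a human reviewer at frames worth inspecting.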
Social media platforms continue to refine their approaches to identifying and labeling potentially AI-generated content. However, the detection technology remains in a perpetual arms race with increasingly sophisticated generation tools.
This incident also underscores the ethical questions surrounding AI video creation. While tools like OpenAI’s Sora offer remarkable creative possibilities, their potential for misuse in creating convincing hoaxes presents significant societal challenges.
For consumers of online media, experts recommend approaching viral videos with healthy skepticism, particularly those depicting implausible or sensational events. Taking time to examine details, checking reliable sources, and using available AI detection tools can help individuals avoid being misled by sophisticated digital fabrications.
As AI video generation becomes more advanced and accessible, the ability to distinguish between authentic and synthetic media will become an increasingly essential digital literacy skill for navigating an information landscape where seeing can no longer be reliably equated with believing.
9 Comments
I’m curious to learn more about the specific AI models and techniques used to create this deceptive video. Understanding the technical details could help develop better safeguards against such manipulations in the future.
That’s a great point. Transparency around the AI systems involved is key so we can assess vulnerabilities and develop effective mitigation strategies.
As unsettling as this incident is, it underscores the importance of critical thinking and fact-checking, especially when it comes to sensational or emotionally charged content online. We must all remain vigilant against the spread of misinformation.
Absolutely. Developing media literacy skills to identify potential AI-generated fakes is crucial for everyone in this digital age.
Wow, that’s really concerning. I hope this AI malfunction issue can be resolved quickly to prevent the spread of more misinformation. It’s crucial that we have reliable, fact-based news we can trust, especially on sensitive topics like this.
Agreed, the implications of AI-generated fake footage are quite alarming. Rigorous testing and oversight are needed to ensure these systems are secure and cannot be exploited to mislead the public.
This is a sobering reminder of the need for robust regulations and industry standards to ensure AI technologies are developed and deployed responsibly. Safeguarding the public from malicious misuse should be a top priority.
Well said. Policymakers and tech companies must work together to establish clear guidelines and accountability measures around the use of AI, especially in sensitive domains like media and communications.
I’m curious to know what specific AI techniques were used to create this fake footage, and whether similar methods could be applied to manipulate other types of media. Understanding the technical details could help develop more effective countermeasures.