In November 2025, a viral video purporting to show a sanitation worker rescuing an abandoned newborn from a garbage truck compactor was revealed to be entirely fabricated using artificial intelligence. The deceptive content, which garnered approximately 29 million views, sparked widespread concern among social media users who believed they were witnessing a genuine rescue.
The 10-second clip, shared by the Facebook page “Dailystories,” showed what appeared to be a sanitation worker named Samuel discovering a crying infant moments before operating the truck’s compactor. Accompanying text dramatically narrated the supposed rescue, describing how Samuel’s partner heard a cry and prevented a tragedy.
“He was just about to pull the lever on the compactor. Then he heard a cry that didn’t belong,” the post began, before elaborating on the fictional worker’s 25 years of experience in sanitation.
The fabricated story rapidly spread across multiple platforms, including Instagram, TikTok, X (formerly Twitter), and YouTube, with many users responding with genuine emotional reactions to what they believed was an authentic rescue.
Fact-checkers at Snopes identified numerous telltale signs of AI generation in the video. The unusually short duration of just 10 seconds aligns with the typical limitations of current text-to-video AI tools. Close examination revealed several visual inconsistencies, including the infant’s right hand appearing to lack a thumb until about four seconds into the clip.
Other indicators of AI fabrication included illegible text on the garbage truck’s labels despite their visibility in the frame, and a white or silver car in the background that appeared to be partially merged with adjacent buildings, a common artifact of AI-generated imagery in which objects incorrectly intersect.
The video also lacked essential contextual details such as location, date, or any identifying information that would typically accompany a legitimate news story of this nature. Had such a dramatic rescue actually occurred, it would have generated substantial coverage from established news outlets, similar to a verified incident in April 2025 when a garbage collector in Rio de Janeiro discovered an abandoned baby near a dumpster.
Further investigation revealed that the Dailystories Facebook page regularly publishes similarly fabricated AI-generated content, including multiple variations of baby rescue scenarios. The page’s linked YouTube channel contains videos that carry platform warnings for “altered or synthetic content.”
The spread of this misinformation was compounded by search engine AI assistants. When users searched for verification of the story, both Bing and Yahoo’s AI features incorrectly confirmed the story’s authenticity, citing as their source an advertisement-filled WordPress blog containing Vietnamese text that itself appeared to be AI-generated.
This incident highlights a troubling digital misinformation cycle, where AI-generated content creates false narratives that are then validated by other AI systems, creating a feedback loop that can mislead users seeking accurate information.
The proliferation of such convincing yet entirely fabricated content presents growing challenges for media literacy and fact-checking efforts. It illustrates how emotional narratives involving vulnerable subjects like abandoned babies can easily manipulate viewers into sharing unverified content, potentially diverting attention and resources from genuine social issues.
As AI-generated media becomes increasingly sophisticated, detecting such fabrications will require greater vigilance from both platforms and users to prevent the spread of compelling but entirely fictional stories presented as authentic news events.