A viral video showing what appears to be former President Barack Obama discussing Donald Trump’s health has sparked confusion across social media platforms, with many viewers questioning its authenticity amid growing concerns about misinformation ahead of the presidential election.
In the hour-long clip, which has circulated widely on Facebook and X (formerly Twitter), a figure resembling Obama delivers what seems to be a somber warning to Americans about former President Trump’s supposed “cognitive lapses,” “confusion,” and “physical instability,” describing these as “red flags” indicating serious decline.
However, the video is entirely artificial. It was created using sophisticated AI technology to simulate Obama’s voice, appearance, and speaking style. The content originated from a YouTube channel called Hope In Motion, which posted the fabricated address on November 11. The channel, which has amassed over 62,000 subscribers, specializes in political commentary delivered through AI-generated content.
The creators included a disclaimer with the original upload clearly stating: “All visuals and voices are produced by us and do not feature or portray Barack Obama himself. Our aim is to cover political developments in a way that fosters civic understanding… We adhere to fair-use guidelines and journalistic integrity.”
This incident highlights the growing sophistication of AI deepfake tools, which can now produce convincing videos of public figures saying things they never actually said. The technology combines visual face-synthesis techniques with synthetic voice generation, yielding content that average viewers can find difficult to distinguish from authentic recordings.
Media literacy experts have expressed concern about the potential impact of such videos during an election year. Dr. Claire Wardle, co-founder of the Information Futures Lab at Brown University, noted in a recent interview with CNN that “deepfakes are evolving faster than detection technology, creating a perfect storm for election misinformation.”
The Hope In Motion channel describes itself as producing “long-form political explainers using AI-driven storytelling” with the stated goal of making complex political issues more accessible. Their disclaimer explains that they use “voiceover narration and dramatised reenactments to simplify complex political news” with the intention of making political content more engaging rather than spreading false information.
This isn’t the first piece of AI-generated political content to cause confusion. Earlier this year, several manipulated videos of candidates from both parties circulated online, prompting calls from legislators for greater regulation of AI-generated political material.
Social media platforms have struggled to contain the spread of such deepfakes. While many platforms have policies against manipulated media, enforcement has proven challenging, particularly when content crosses multiple platforms or when users share clips without the original disclaimers.
The Federal Election Commission is currently considering new rules that would require clear disclosures on AI-generated political advertising, though these regulations would not necessarily apply to content like the Obama deepfake if it’s not classified as campaign advertising.
For voters, the incident serves as a reminder to verify information from official sources, particularly when content makes dramatic or sensational claims. Media literacy advocates recommend checking whether content comes from verified accounts, looking for unusual visual artifacts or audio inconsistencies, and confirming news through multiple reputable sources.
As AI technology continues to advance, distinguishing between authentic and fabricated content will likely become an increasingly critical skill for informed civic participation, especially during election cycles when misinformation tends to proliferate.

