A manipulated image claiming to show President Donald Trump using a walker has been confirmed as fake, highlighting the ongoing challenge of AI-generated content. The image, which circulated on the social media platform X (formerly Twitter), carries telltale signs of AI generation.
The doctored photo appeared on December 11, 2025, posted by user @keithedwards with the caption: “BREAKING: an image has leaked showing Trump using a walker moments after he signed an executive order banning states from regulating AI.” The timing was deliberately ironic: the post went up shortly after Trump signed an executive order limiting state regulation of artificial intelligence tools.
Analysis by Google’s Gemini AI tool confirmed the presence of the company’s SynthID watermark in the image. Google designed this digital watermark specifically to help identify AI-generated content and combat the spread of deepfakes that could mislead the public.
“Yes, this image contains the SynthID watermark. The identification tool detected that most or all of the content was edited or generated with Google AI,” stated the Gemini analysis when asked to examine the photo in question.
The SynthID system represents an increasingly important safeguard in a digital landscape where distinguishing between authentic and artificially generated content has become more difficult. Google developed this watermarking technology as part of broader industry efforts to create transparency around AI-generated content, allowing fact-checkers and other verification specialists to quickly identify synthetic media.
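Google has not published a public API for embedding or reading the image watermark itself, but it has open-sourced the text variant, SynthID Text, through the Hugging Face transformers library (version 4.46 and later). The sketch below uses that text-watermarking flow to illustrate how the scheme works; the model choice and key values are placeholders, not anything tied to this incident.

```python
# Illustrative sketch of SynthID *Text* watermarking via Hugging Face
# transformers (>= 4.46). The image watermark that flagged the Trump photo
# is applied inside Google's own generation tools and is not publicly
# exposed; this text API demonstrates the same keyed-watermark idea.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    SynthIDTextWatermarkingConfig,
)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b")

# The watermark is keyed: only a detector configured with the same secret
# keys can later recognize generated output as watermarked.
watermarking_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57],  # placeholder keys
    ngram_len=5,  # the watermark signal is spread over 5-token windows
)

prompts = tokenizer(
    ["Write a one-sentence photo caption:"], return_tensors="pt", padding=True
)
output = model.generate(
    **prompts,
    watermarking_config=watermarking_config,
    do_sample=True,  # sampling is required; the watermark biases token choice
    max_new_tokens=40,
)
print(tokenizer.batch_decode(output, skip_special_tokens=True)[0])
```

Detection runs the same machinery in reverse: a classifier that knows the keys scores how strongly the statistical bias is present in a piece of content, which is how a tool like the SynthID Detector can report that “most or all” of it came from a Google model.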
The circulation of this fake image underscores the growing concern about AI’s potential to create and spread misinformation, particularly in politically charged contexts. As AI-generated content becomes more sophisticated and widespread, the ability to quickly verify authenticity becomes increasingly crucial for maintaining public trust in visual media.
Fact-checking organizations like Lead Stories have incorporated tools such as the SynthID Detector into their verification processes. These tools allow them to analyze suspicious content and determine whether it was created using AI technology.
The incident highlights the double-edged nature of the AI regulation debate that Trump’s executive order addressed. The order aimed to limit state-level restrictions on AI development, yet it arrived at a moment of growing concern about AI misuse. The irony of using AI to create a false image of Trump immediately after he signed an order limiting AI regulation was not lost on social media users.
The fake image also raises questions about the effectiveness of current safeguards against AI misinformation. The SynthID watermark proved valuable in this case, but many AI-generated images carry no such identifier because the tools that produced them do not embed watermarks or similar provenance signals.
As the 2026 election cycle approaches, political analysts warn that AI-generated content could play an increasingly significant role in disinformation campaigns. The sophistication of these tools continues to advance, making detection potentially more difficult without robust verification systems.
For users encountering questionable images online, experts recommend looking for visual inconsistencies, checking trusted news sources for corroboration, and running available AI detection tools before sharing content that could spread misinformation.
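One simple complementary check used in many verification workflows is inspecting an image’s EXIF metadata. Below is a minimal sketch using the Pillow library; the filename is hypothetical, and missing metadata is only a weak signal, since screenshots and re-uploads from social platforms also strip EXIF.

```python
# Weak-signal heuristic: genuine camera photos usually carry EXIF metadata
# (camera model, capture time), while AI-generated images and social media
# re-uploads typically carry none. Absence proves nothing on its own.
from PIL import Image
from PIL.ExifTags import TAGS

def summarize_exif(path: str) -> None:
    exif = Image.open(path).getexif()
    if not exif:
        print(f"{path}: no EXIF metadata (common for AI output and re-uploads)")
        return
    for tag_id, value in exif.items():
        # Map numeric EXIF tag IDs to readable names where known
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")

summarize_exif("suspect_image.jpg")  # hypothetical filename
```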
This incident serves as a timely reminder of the ongoing challenges facing regulators, technology companies, and the public as AI-generated content becomes more prevalent and convincing in our digital information ecosystem.