An image purporting to show U.S. Attorney General Pam Bondi at age 14 with Donald Trump at a Jeffrey Epstein party has been conclusively debunked as an AI-generated deepfake, according to fact-checkers.
Lead Stories, a fact-checking organization, identified the image as a fabrication with 99.9% certainty based on analysis from the Hive Moderation AI-generated content detection tool. The image was shared on a Turning Point USA-affiliated Facebook page on February 7, 2026, with text claiming: “Pam Bondi at 14, She is now Trump’s head of the DoJ. She is an Epstein party baby! Just sickening.. how deep does this go!”
The claim collapses under basic scrutiny of biographical and historical facts. Bondi, born in 1965, was already an adult when Trump and Epstein's friendship began in the mid-1980s. At that time, she was a student at the University of Florida, not a teenager at parties.
This isn’t the first time this particular image has circulated with false claims. Lead Stories previously identified the same image in June 2023 on Twitter (now X), where it was described as showing “Trump dancing with a 13-year-old girl” on Epstein’s private island. The current version has been cropped to remove some of the more obvious signs of manipulation that were visible in the original fabrication.
The 2023 version of the image contained telltale signs of AI generation that were common in early deepfakes. Upon closer inspection, several bizarre anomalies are evident: the woman behind Trump appears to have “an enormous ear as a head,” while a man in the background has an anatomically impossible extra finger on the hand holding a beer bottle.
“Artificial Intelligence tools for creating false images have evolved tremendously in the 32 months since this image was created,” noted Lead Stories in their analysis. “Unlike AI images made in 2026, deepfakes from 2023 had defects that fact checkers could usually spot with the naked eye.”
The advancement in AI image generation technology presents growing challenges for verifying visual content online. Earlier versions of deepfakes often contained obvious errors in human anatomy or physical inconsistencies that made them easier to identify. However, newer AI tools have become increasingly sophisticated at producing realistic-looking fabrications with fewer detectable flaws.
This incident highlights the ongoing problem of manipulated media being used to spread misinformation about public figures, particularly in political contexts. As election cycles intensify, falsified images and videos are increasingly deployed to create false narratives or damage reputations.
Media literacy experts recommend several strategies for identifying potentially AI-generated images, including looking for unusual lighting, anatomical inconsistencies, and blurred or distorted backgrounds. When encountering suspicious images on social media, users are encouraged to reverse image search the content and consult reputable fact-checking organizations before sharing.
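One technique underlying the reverse image searches recommended above is perceptual hashing, which fingerprints an image so that near-duplicates (crops, recompressions, recycled fakes) can be matched even when their files differ. The sketch below is a minimal, illustrative implementation of one such scheme, "difference hashing" (dHash); it assumes images have already been decoded to 2D grayscale grids, which a real pipeline would do with an imaging library, and it is not necessarily what any particular search service uses.

```python
# Minimal sketch of perceptual "difference hashing" (dHash): for each row,
# record whether each pixel is brighter than its right-hand neighbour.
# Near-duplicate images produce hashes with a small Hamming distance.

def dhash(pixels):
    """Fingerprint a grayscale grid (rows of 0-255 values) as a bit tuple."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return tuple(bits)

def hamming(a, b):
    """Count differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

# Two near-identical 4x5 grayscale grids (hypothetical data; one pixel nudged).
original = [
    [ 10,  40,  40,  90,  20],
    [200, 180, 160, 140, 120],
    [ 30,  30, 220, 210,  50],
    [ 90,  80,  70,  60,  50],
]
tweaked = [row[:] for row in original]
tweaked[0][1] = 45  # small edit; the hash barely changes

h1, h2 = dhash(original), dhash(tweaked)
print(hamming(h1, h2))  # prints 1 -> likely the same underlying image
```

In practice, a recycled fake like the one described here would hash close to its earlier circulated version even after cropping, which is how fact-checkers trace an image back to prior debunks.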
The circulation of this fabricated image also demonstrates how previously debunked content often resurfaces with new contextual claims, targeting different public figures while using the same underlying fake imagery. This recycling of debunked content presents ongoing challenges for fact-checkers and information integrity professionals.
For legitimate news and information about public figures like Pam Bondi, media consumers are advised to rely on established news organizations with journalistic standards rather than unverified social media accounts.