
A fake audio recording that purports to capture President Donald Trump discussing the Jeffrey Epstein files has circulated widely across social media platforms in recent weeks, highlighting the growing sophistication of AI-generated disinformation.

The fabricated recording, which first appeared in early November 2025, features what sounds like Trump’s voice angrily instructing White House staffers to prevent the release of documents related to convicted sex offender Jeffrey Epstein. In the counterfeit clip, the voice resembling Trump can be heard saying, “[We’re] not releasing the Epstein files! F*** Marjorie Taylor Greene. I don’t care what you do. Start a f***ing war. Just don’t let ’em get out. If I go down, I will bring all of you down with me.”

The recording gained significant traction on Facebook, Instagram, and TikTok, with some versions incorporating visuals allegedly showing the White House and staff members to enhance the appearance of authenticity.

Digital forensics experts have confirmed that both the audio and the visuals were entirely fabricated using OpenAI’s Sora 2, an artificial intelligence tool released on September 30, 2025. This latest version of Sora marks a notable advance, adding audio generation to the platform’s video tools for the first time.

The source of the deceptive content has been traced to TikTok user @fresh_florida_air, whose account contains numerous AI-generated videos created with Sora 2. The specific video containing the fake Trump audio was posted on November 16 and carries watermarks for both the TikTok account and Sora 2 user @bradbradt31.

When contacted by fact-checkers, the person behind the @fresh_florida_air account acknowledged the artificial nature of the content, stating: “To clarify, both the visuals and the audio in those videos are fully AI-generated creative elements. They’re fictional concepts created solely for artistic experimentation and social commentary — not real footage, not real recordings, and not representations of actual events or statements. My intent is creative expression, not presenting anything as factual.”

The same account previously posted a similar AI-generated video on November 5 featuring fabricated audio of Trump saying, “I don’t f***ing care how long this shutdown lasts. We will not lose to the Democrats. We will not release the Epstein files. I don’t care if the entire country starves.” That clip abruptly ended with an incomplete sentence about SNAP benefits.

Other content from the account includes various AI-generated scenarios featuring individuals wearing Trump’s signature “Make America Great Again” hats, including one bizarre clip showing shirtless men in MAGA hats drinking alcohol while boiling in a pot over a campfire.

The emergence of these sophisticated fake recordings comes at a particularly sensitive time in American politics and raises significant concerns about the potential for AI to be weaponized in disinformation campaigns. As AI tools become more accessible and their outputs more convincing, distinguishing authentic from fabricated media grows ever harder for average social media users.

Media literacy experts emphasize the importance of verifying information through multiple reliable sources, particularly when encountering explosive or inflammatory content on social media. They also recommend checking for visual inconsistencies and unusual audio patterns that often appear in AI-generated media, even as the technology continues to improve.

This incident follows several other recent cases of AI-generated content falsely depicting political figures, including a previously debunked video allegedly showing Trump singing a religious song while playing piano on the White House North Lawn.

OpenAI has not commented specifically on this misuse of its technology, though the company has previously stated its commitment to developing safeguards against harmful applications of its AI tools.


© 2026 Disinformation Commission LLC. All rights reserved.