The debate over President Donald Trump’s increasingly sophisticated use of artificial intelligence in political messaging has intensified, raising concerns among digital ethics experts and political analysts. What once might have been dismissed as social media antics has evolved into what critics describe as a sophisticated propaganda operation designed to manipulate public perception.
Since late 2022, Trump has shared at least 62 AI-generated images and videos on his Truth Social platform, according to reporting by The New York Times. These posts frequently depict the president in grandiose, fictional scenarios, from piloting fighter jets to appearing as “Pope Trump” or “Trump the Conqueror.”
Martha Joynt Kumar, director of the White House Transition Project, noted the escalation in Trump’s digital strategy. “In his first administration, he used Twitter in a way no president had,” Kumar told PBS. “What they do in this administration is taking it further, as you’ve had an increase in what can be done online.”
The controversy intensified recently when Trump shared an AI-generated video depicting himself as a fighter pilot dropping excrement on protesters at a fictional “No Kings” rally. The post immediately drew widespread criticism for its tasteless content and the precedent it sets for official political communication.
House Speaker Mike Johnson defended the post as “satire,” claiming Trump was “using humor to make a point.” However, digital ethics experts view such content as a dangerous normalization of manufactured hate materials in political discourse.
“The use of memes and the use of what we used to consider stuff that would exist in the worlds of Reddit, now has drifted into the discourse of elected leaders,” said Bret Schafer, senior fellow at the Alliance for Securing Democracy, in comments to Time magazine. “I don’t think it is good for our kind of political discourse in this country to adopt the online style of podcasters, vloggers, and partisan communicators.”
Perhaps most concerning to civil rights organizations is how the technology has been deployed to reinforce racist narratives and target political opponents. One AI-generated video depicted House Minority Leader Hakeem Jeffries wearing a fake mustache and sombrero, content that multiple advocacy groups quickly denounced as “openly racist.” In another instance, a manipulated video used AI voice replacement to make Senate Minority Leader Chuck Schumer appear to mock his own party.
AI expert Henry Ajder, founder of Latent Space Advisory, points out that this phenomenon extends beyond American politics. “Trump is the most notable person sharing this content, but this is really becoming an international, new form of political messaging,” Ajder explained. “It’s designed to go viral, it’s clearly fake, it’s got this absurdist kind of tone to it. But there’s often still some kind of messaging in there.”
The viral nature of such content—particularly its divisiveness—serves as a powerful amplification mechanism. Adrian Shahbaz, vice president of research and analysis at Freedom House, noted: “The more ridiculous the photo or video, the more likely it is to dominate our news feeds. A controversial post gets shared by people who enjoyed it and people outraged by it. That’s twice the shares.”
The controversy expanded to federal agencies last week when the Department of Homeland Security, led by Secretary Kristi Noem, posted what appeared to be an AI-altered image of young Black men allegedly “threatening ICE.” The post was quickly debunked by the original content creator, who revealed the original video had been taken out of context and manipulated.
Technical developments have accelerated these capabilities. Tools like OpenAI’s Sora 2 and Elon Musk’s Grok have made creating realistic deepfakes easier than ever before. “It’s better quality, but better quality for really bad use cases,” warned Ben Coleman, CEO of deepfake detection firm Reality Defender. “Generative AI and deepfakes are accelerating misinformation, scams, and attacks on elected officials, minorities, and women.”
The White House has characterized Trump’s use of AI imagery as part of a deliberate communication strategy. White House assistant press secretary Liz Huston defended the approach, stating: “No leader has used social media to communicate directly with the American people more creatively and effectively than President Trump.”
Digital ethics experts warn that the constant blurring of truth and fiction erodes public trust in all information sources. Even when manipulations are obvious, the volume of manufactured content creates what Shahbaz calls “a fog of digital confusion where truth becomes optional.”
The regulatory response to this emerging challenge has been limited. Critics note that the Trump administration has favored private-sector innovation over accountability and oversight, with executive orders focused more on preventing “woke” influence in AI development than on addressing consumer protection and misinformation.
As advanced AI tools continue to proliferate, the boundary between authentic and manufactured reality becomes increasingly difficult for average citizens to discern—a development that poses significant challenges for democratic discourse and informed civic participation.