In a move raising serious concerns among misinformation experts, the Trump administration has embraced AI-generated and edited imagery on official White House communication channels, blurring the line between fact and fiction in government messaging.

The latest controversy centers on a digitally altered image showing civil rights attorney Nekima Levy Armstrong in tears following her arrest. The original arrest photo was first posted by Homeland Security Secretary Kristi Noem before the official White House account shared the manipulated version depicting her crying—a version that never existed in reality.

When faced with criticism, White House officials doubled down rather than apologizing. Deputy Communications Director Kaelan Dorr declared on social media platform X that the “memes will continue,” while Deputy Press Secretary Abigail Jackson mockingly dismissed critics of the practice.

This doctored image is part of a growing pattern of AI-altered content shared by official channels since the fatal shootings of Renee Good and Alex Pretti by U.S. Border Patrol in Minneapolis. The White House has increasingly distributed cartoon-like visuals and memes through its official platforms.

David Rand, professor of information science at Cornell University, sees this strategy as deliberate. “Calling the altered image a meme certainly seems like an attempt to cast it as a joke or humorous post, like their prior cartoons,” he explained. “This presumably aims to shield them from criticism for posting manipulated media.”

Republican communications consultant Zach Henry, founder of influencer marketing firm Total Virality, suggests this approach targets different audiences simultaneously. “People who are terminally online will see it and instantly recognize it as a meme,” he said. “Your grandparents may see it and not understand the meme, but because it looks real, it leads them to ask their kids or grandkids about it.” Henry noted that fierce reactions help content go viral, generally praising the White House social media team’s effectiveness.

However, media literacy experts warn of dangerous consequences. Michael A. Spikes, professor at Northwestern University, expressed alarm that manipulated images “crystallize an idea of what’s happening, instead of showing what is actually happening.”

“The government should be a place where you can trust the information, where you can say it’s accurate, because they have a responsibility to do so,” Spikes emphasized. “By sharing this kind of content, it is eroding the trust we should have in our federal government to give us accurate, verified information. It’s a real loss, and it really worries me a lot.”

The trend extends beyond White House communications. AI-generated videos related to Immigration and Customs Enforcement actions have proliferated on social media platforms. Following the shooting of Renee Good by an ICE officer, numerous fabricated videos began circulating showing supposed confrontations between civilians and immigration officers.

Jeremy Carrasco, a content creator specializing in media literacy and debunking AI videos, believes many of these videos come from accounts “engagement farming”—capitalizing on popular keywords to generate clicks and views. More concerning is that viewers often cannot distinguish between authentic and synthetic content.

“I don’t think people understand how bad this is,” Carrasco warned. “It’s going to be an issue forever now.”

UCLA professor Ramesh Srinivasan, who hosts the Utopias podcast, highlighted the broader implications: “AI systems are only going to exacerbate, amplify and accelerate these problems of an absence of trust, an absence of even understanding what might be considered reality or truth or evidence.”

Srinivasan fears that official government channels sharing AI-generated content not only normalizes the practice for ordinary citizens but also grants permission to other policymakers and authority figures to distribute unlabeled synthetic content. Combined with social media algorithms that prioritize extreme and conspiratorial content, which AI can produce effortlessly, society faces profound challenges in maintaining a shared understanding of reality.

Experts like Carrasco advocate for watermarking systems that embed origin information into media metadata as a potential partial solution. The Coalition for Content Provenance and Authenticity has developed such technology, though widespread adoption remains at least a year away.
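The C2PA approach mentioned above embeds a provenance manifest directly inside the media file; for JPEGs, the specification places the manifest in APP11 marker segments as JUMBF boxes labeled "c2pa". As a rough illustration of what presence-checking involves (a hand-rolled sketch, not the official c2pa SDK; the function name and byte-scan heuristic are my own), one can walk a JPEG's marker segments and look for such a payload:

```python
# A hand-rolled sketch, not the official C2PA SDK: the C2PA specification
# stores provenance manifests in JPEG APP11 (0xFFEB) segments as JUMBF boxes
# labeled "c2pa". This scanner only detects that such a segment is present;
# real verification (parsing and cryptographically validating the signed
# manifest) requires a conforming implementation such as the c2pa SDK.

def has_c2pa_manifest(data: bytes) -> bool:
    """Return True if a JPEG byte stream contains an APP11/JUMBF C2PA segment."""
    if not data.startswith(b"\xff\xd8"):  # SOI marker: not a JPEG at all
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            return False  # lost marker sync; treat as malformed
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):  # EOI or start-of-scan: no more metadata
            return False
        length = int.from_bytes(data[i + 2:i + 4], "big")
        if length < 2:
            return False  # malformed length field
        segment = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"jumb" in segment and b"c2pa" in segment:
            return True  # APP11 segment carrying a JUMBF/C2PA payload
        i += 2 + length
    return False
```

Detection alone says nothing about authenticity: the manifest is a signed structure that must be validated against its issuer, and metadata can be stripped by re-encoding, which is one reason experts describe provenance labeling as only a partial solution.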

As manipulated content becomes increasingly sophisticated and ubiquitous, the fundamental question remains whether citizens will know “what’s real or not when it actually matters, like when the stakes are a lot higher.”



© 2026 Disinformation Commission LLC. All rights reserved.