White House’s Use of Altered Images Raises Concerns Over Trust and Misinformation
The Trump administration has drawn criticism for sharing doctored imagery on official White House channels, most recently an image depicting civil rights attorney Nekima Levy Armstrong in tears following her arrest. The incident marks an escalation in the White House’s embrace of AI-enhanced visuals, one that has alarmed misinformation experts.
The controversy began when Homeland Security Secretary Kristi Noem’s account posted an unedited arrest image of Levy Armstrong. Shortly after, the official White House account shared an altered version showing her crying—a manipulation that wasn’t labeled as such. This doctored image is part of a growing trend of AI-edited visuals being shared in political contexts, particularly since the fatal shootings of Renee Good and Alex Pretti by U.S. Border Patrol in Minneapolis.
When faced with criticism about the altered image, White House officials didn’t back down. Deputy Communications Director Kaelan Dorr declared on X that the “memes will continue,” while Deputy Press Secretary Abigail Jackson shared a post mocking the criticism.
“Calling the altered image a meme certainly seems like an attempt to cast it as a joke or humorous post, like their prior cartoons,” said David Rand, professor of information science at Cornell University. “This presumably aims to shield them from criticism for posting manipulated media.” Rand added that the purpose of sharing this particular altered image seems “much more ambiguous” than previous cartoonish images circulated by the administration.
The strategic value of such content is clear to political communication professionals. “People who are terminally online will see it and instantly recognize it as a meme,” explained Zach Henry, a Republican communications consultant who founded influencer marketing firm Total Virality. “Your grandparents may see it and not understand the meme, but because it looks real, it leads them to ask their kids or grandkids about it.”
This approach seems designed to provoke reactions that help content go viral. Henry, who generally praised the White House social media team’s work, noted that controversy often increases visibility.
Media literacy experts, however, are deeply troubled by government entities participating in visual misinformation. “The creation and dissemination of altered images, especially when they are shared by credible sources, crystallizes an idea of what’s happening, instead of showing what is actually happening,” said Michael A. Spikes, professor at Northwestern University and news media literacy researcher.
“The government should be a place where you can trust the information, where you can say it’s accurate, because they have a responsibility to do so,” Spikes added. “By sharing this kind of content, and creating this kind of content… it is eroding the trust we should have in our federal government to give us accurate, verified information.”
The implications extend beyond this single incident. AI-generated videos depicting Immigration and Customs Enforcement (ICE) actions, protests, and encounters between officers and citizens have proliferated across social media. After Renee Good was shot by an ICE officer while in her car, several fabricated videos began circulating that showed women driving away from ICE officers. Other synthetic videos depict immigration raids or people confronting ICE officers.
Jeremy Carrasco, a content creator who specializes in media literacy and debunking viral AI videos, believes most of these videos come from accounts that are “engagement farming”: chasing clicks by attaching popular keywords like “ICE.” He warns that most viewers cannot distinguish real from fake content, even when there are obvious signs of AI generation, such as gibberish street signs.
“It’s going to be an issue forever now,” Carrasco said. “I don’t think people understand how bad this is.”
The problem extends to international news as well. Following the capture of deposed Venezuelan leader Nicolás Maduro, fabricated and misrepresented images flooded online platforms.
Ramesh Srinivasan, a professor at UCLA, emphasized that many people already struggle to find “trustable information” sources. “AI systems are only going to exacerbate, amplify and accelerate these problems of an absence of trust, an absence of even understanding what might be considered reality or truth or evidence,” he said.
Potential safeguards include provenance systems that embed cryptographically signed information about a piece of media’s origin and edit history into its metadata. The Coalition for Content Provenance and Authenticity (C2PA) has developed such a standard, but widespread adoption remains at least a year away, according to Carrasco.
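To make the idea concrete, here is a minimal sketch of how provenance-style verification can work in principle. This is not the C2PA implementation: the real standard embeds certificate-signed manifests inside the media file itself, whereas this toy version uses a detached manifest and a shared HMAC key as a stand-in for a signing certificate. All names in it (SIGNING_KEY, create_manifest, verify_manifest) are illustrative, not part of any real library.

```python
# Conceptual sketch of provenance-style verification -- NOT the real C2PA
# implementation. C2PA embeds certificate-signed manifests inside the file;
# this toy version uses a detached manifest and a shared HMAC key as a
# stand-in for a publisher's signing certificate.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in for a publisher's private signing key


def create_manifest(media_bytes: bytes, issuer: str, actions: list[str]) -> dict:
    """Record who produced the file and what edits were applied."""
    payload = {
        "issuer": issuer,            # e.g., a newsroom or government office
        "actions": actions,          # e.g., ["captured", "cropped"]
        "content_hash": hashlib.sha256(media_bytes).hexdigest(),
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SIGNING_KEY, body, "sha256").hexdigest()
    return payload


def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """True only if the signature is valid AND the bytes are unmodified."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    body = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, body, "sha256").hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False  # manifest was forged or tampered with
    return claimed["content_hash"] == hashlib.sha256(media_bytes).hexdigest()


if __name__ == "__main__":
    original = b"...image bytes..."
    manifest = create_manifest(original, "Example Newsroom", ["captured"])
    print(verify_manifest(original, manifest))       # True: untouched file
    print(verify_manifest(b"...edited bytes...", manifest))  # False: edit breaks the hash
```

The property the sketch demonstrates is the one provenance systems rely on: any edit to the pixels changes the content hash and invalidates the manifest, so an altered image could not pass verification against the original’s credentials. Real systems use certificate chains rather than a shared secret, so anyone can verify a signature without being able to forge one.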
As AI-generated political content becomes more commonplace, the boundaries between fact and fiction continue to blur, creating what experts describe as a crisis of trust in information across society.