The Trump administration’s use of digitally manipulated imagery has sparked fresh concerns over truth in government communications, as officials increasingly share AI-generated or edited content through official White House channels.
A recent incident involving a doctored image of civil rights attorney Nekima Levy Armstrong has intensified the debate. After Homeland Security Secretary Kristi Noem’s account posted an authentic image of Levy Armstrong’s arrest, the official White House account shared an altered version showing her crying – a manipulation that was never acknowledged as such by the administration.
When criticized for sharing the edited image, White House officials defended their actions. Deputy Communications Director Kaelan Dorr wrote on X that the “memes will continue,” while Deputy Press Secretary Abigail Jackson posted content mocking those who expressed concern about the practice.
The altered image emerged amid a flood of AI-edited content circulating online following the fatal shootings of Renee Good and Alex Pretti by U.S. Border Patrol officers in Minneapolis. While the administration has previously shared cartoonish AI-generated images, misinformation experts view this more realistic editing as particularly troubling.
“Calling the altered image a meme certainly seems like an attempt to cast it as a joke or humorous post, like their prior cartoons. This presumably aims to shield them from criticism for posting manipulated media,” said David Rand, a professor of information science at Cornell University. He noted that unlike the administration’s previous cartoonish images, the purpose of sharing the altered arrest photo seems “much more ambiguous.”
Republican communications consultant Zach Henry, founder of influencer marketing firm Total Virality, suggests the content is strategically designed to engage different segments of Trump’s base. “People who are terminally online will see it and instantly recognize it as a meme,” Henry explained. “Your grandparents may see it and not understand the meme, but because it looks real, it leads them to ask their kids or grandkids about it.”
The viral nature of controversial content is part of the appeal, according to Henry, who generally praised the White House’s social media approach.
Michael A. Spikes, a Northwestern University professor and news media literacy researcher, warns that altered imagery shared by official sources “crystallizes an idea of what’s happening, instead of showing what is actually happening.”
“The government should be a place where you can trust the information, where you can say it’s accurate, because they have a responsibility to do so,” Spikes said. “By sharing this kind of content, and creating this kind of content… it is eroding the trust we should have in our federal government to give us accurate, verified information.”
The practice comes at a time when public trust in institutions is already fragile. Ramesh Srinivasan, a UCLA professor, said many people are questioning where to find “trustable information.”
“AI systems are only going to exacerbate, amplify and accelerate these problems of an absence of trust, an absence of even understanding what might be considered reality or truth or evidence,” Srinivasan said. He warned that when officials share unlabeled synthetic content, it normalizes the practice for others in positions of power.
The immigration enforcement context has proven particularly fertile ground for AI-generated misinformation. After Renee Good was shot by an ICE officer, numerous AI-generated videos began circulating showing fictional encounters with immigration officials, including confrontations where citizens allegedly yelled at or threw food at officers.
Jeremy Carrasco, a content creator specializing in media literacy, believes most of these videos come from accounts “engagement farming” – capitalizing on trending topics to generate views. However, he notes that most viewers likely can’t distinguish between real and fabricated content, even when the fabricated material contains obvious AI-generation errors.
The issue extends beyond immigration. Fabricated images following the capture of deposed Venezuelan leader Nicolás Maduro recently flooded social media platforms.
Carrasco suggests that a watermarking system embedding information about media origins into file metadata could help address the problem. While the Coalition for Content Provenance and Authenticity (C2PA) has developed such a standard, Carrasco believes widespread adoption remains at least a year away.
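For readers curious what metadata-based provenance looks like in practice, the sketch below illustrates the general idea using Python’s Pillow library and PNG text chunks. This is a simplified illustration only, not the C2PA specification itself: C2PA embeds cryptographically signed manifests so tampering can be detected, which this toy example omits. The field name “provenance” and the JSON payload are arbitrary choices for the demo.

```python
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Toy illustration of metadata-based provenance: record where an image
# came from and how it was edited inside a PNG text chunk. (C2PA does
# this with cryptographically signed manifests; this sketch is unsigned.)
record = {
    "source": "example.org/original-photo",  # hypothetical origin
    "tool": "demo-editor 1.0",               # hypothetical editing tool
    "edits": ["crop", "color-correct"],
}

img = Image.open("photo.png")
meta = PngInfo()
meta.add_text("provenance", json.dumps(record))
img.save("photo_tagged.png", pnginfo=meta)

# Reading the record back: any viewer that understands the convention
# can display the editing history alongside the image.
tagged = Image.open("photo_tagged.png")
print(json.loads(tagged.text["provenance"]))
```

The key limitation, and the reason C2PA signs its manifests, is that plain metadata like this can be stripped or rewritten by anyone who handles the file; cryptographic signatures let a verifier confirm the provenance record is intact and came from the claimed source.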
“It’s going to be an issue forever now,” he said. “I don’t think people understand how bad this is.”