White House Sparks Alarm with Manipulated Image of Civil Rights Attorney
The Trump administration has drawn criticism for its growing use of artificial intelligence and manipulated imagery on official government channels, most recently sharing an altered photo depicting civil rights attorney Nekima Levy Armstrong with fabricated tears following her arrest.
The incident began when Homeland Security Secretary Kristi Noem’s account posted the original arrest image. Shortly after, the official White House account shared a doctored version showing Levy Armstrong crying, an emotionally manipulative edit depicting tears that were never in the original photograph.
This manipulated image is part of a larger pattern emerging since the fatal shootings of Renee Good and Alex Pretti by U.S. Border Patrol in Minneapolis. Rather than addressing mounting concerns, White House officials have doubled down on their approach, with Deputy Communications Director Kaelan Dorr declaring on social media that “memes will continue.” Meanwhile, Deputy Press Secretary Abigail Jackson shared posts mocking those expressing concern about the practice.
“Calling the altered image a meme certainly seems like an attempt to cast it as a joke or humorous post, like their prior cartoons,” said David Rand, professor of information science at Cornell University. “This presumably aims to shield them from criticism for posting manipulated media.” Rand noted that unlike previous cartoonish images shared by the administration, the purpose behind sharing this altered arrest photo seems “much more ambiguous.”
The White House appears to be leveraging AI-enhanced imagery specifically to engage with Trump supporters who are highly active online. Zach Henry, a Republican communications consultant who founded the influencer marketing firm Total Virality, explained the strategy: “People who are terminally online will see it and instantly recognize it as a meme. Your grandparents may see it and not understand the meme, but because it looks real, it leads them to ask their kids or grandkids about it.”
This approach intentionally courts controversy to increase visibility. “All the better if it prompts a fierce reaction, which helps it go viral,” Henry added, generally praising the White House social media team’s approach.
The manipulation of visual information, especially when distributed through official government channels, presents serious concerns for information integrity. Michael A. Spikes, professor at Northwestern University and news media literacy researcher, explained that such altered images “crystallize an idea of what’s happening, instead of showing what is actually happening.”
“The government should be a place where you can trust the information, where you can say it’s accurate, because they have a responsibility to do so,” Spikes continued. “By sharing this kind of content, and creating this kind of content… it is eroding the trust we should have in our federal government to give us accurate, verified information. It’s a real loss, and it really worries me a lot.”
The situation extends beyond just the White House. Following the shooting of Renee Good by an ICE officer, numerous AI-generated videos began circulating on social media depicting fictional confrontations with immigration authorities. These fabricated clips often show civilians confronting or evading ICE officers—content that Jeremy Carrasco, a media literacy specialist, describes as a form of “fan fiction” that plays into viewers’ existing biases.
“Most viewers can’t tell if what they’re watching is fake,” Carrasco warned, questioning whether the public would know “what’s real or not when it actually matters, like when the stakes are a lot higher.”
UCLA professor Ramesh Srinivasan, who hosts the Utopias podcast, highlighted the broader implications: “AI systems are only going to exacerbate, amplify and accelerate these problems of an absence of trust, an absence of even understanding what might be considered reality or truth or evidence.”
Experts believe this trend will only accelerate, particularly as the technology becomes more sophisticated. While solutions like digital watermarking systems are being developed by groups like the Coalition for Content Provenance and Authenticity, widespread implementation remains at least a year away.
“It’s going to be an issue forever now,” Carrasco concluded. “I don’t think people understand how bad this is.”
9 Comments
The use of manipulated imagery by government officials is very troubling. Spreading falsehoods, even under the guise of ‘memes,’ is unethical and damages the public’s trust. We need our leaders to uphold transparency and facts, not peddle disinformation.
Absolutely. The White House should be setting an example of integrity, not sinking to the level of mocking those who are rightfully concerned about this issue.
While technology can be a powerful tool, using AI to generate false or misleading images is a serious abuse of that power. The White House should be held accountable for undermining public trust and the democratic process.
I agree. Mocking those who raise valid concerns about this issue is unacceptable. Our leaders should be working to strengthen, not erode, faith in our institutions.
This is a concerning development that merits serious scrutiny. The public deserves honest, fact-based information from their government, not manipulated images designed to mislead. I hope there will be a thorough investigation into these practices.
This is really concerning. Using AI to generate and manipulate images for political purposes is a dangerous path. We need to have an open and honest public discourse, not misinformation and propaganda.
I agree, the White House should not be spreading doctored images that misrepresent reality. This undermines trust in our institutions and the democratic process.
This is a deeply troubling development. The use of AI-generated misinformation by government officials is a clear threat to our democratic values and the free exchange of ideas. We must demand better from our leaders.
I’m troubled by the White House’s apparent embrace of AI-generated misinformation. While new technologies offer many benefits, they can also be abused for political gain. We must remain vigilant against the erosion of truth and accountability.