Artificial intelligence tools at the White House have become a focal point of controversy following a recent incident involving activist Levy Armstrong. Federal officials manipulated an image of Armstrong's arrest, replacing her composed demeanor with one of distress and tears; the altered image was then shared by the official White House social media account, which labeled her a “far left agitator.”
The digitally altered image marks the fourteenth AI-generated post from the White House since President Trump began his second term, establishing him as the first U.S. president to fully embrace artificial intelligence technology for official communications. The incident has raised significant concerns about the ethical use of AI in government messaging and propaganda.
Armstrong’s arrest occurred within a troubling timeframe—less than two weeks after the killing of Renée Good by Immigration and Customs Enforcement (ICE) officer Jonathan Ross in south Minneapolis, and just six days before a U.S. Border Patrol agent shot ICU nurse Alex Pretti. These incidents have created a backdrop of heightened tensions between federal law enforcement agencies and communities across the country.
Digital forensics experts who examined the White House post confirmed the manipulation was extensive. In the original photograph, Armstrong appears dignified, her head held high as federal agents lead her away. The doctored version distributed by the White House depicts her with a mouth “twisted in agony” and tears streaming down her face—a complete fabrication designed to portray her as emotionally compromised.
The use of AI-manipulated imagery for political purposes extends beyond U.S. borders. Political analysts note this represents a growing trend among far-right leaders globally who are increasingly turning to such technology for what critics describe as “propagandistic and authoritarian ends.” These leaders have found in AI a powerful tool to shape public perception and potentially undermine factual reporting.
Technology ethics researchers have expressed alarm at how quickly AI-generated imagery has become normalized in political communications. Dr. Eleanor Saunders, a digital ethics professor at Columbia University, notes: “What we’re witnessing is the rapid evolution of political propaganda. When the highest office in the land manipulates reality this way, it fundamentally erodes public trust in all visual evidence.”
The White House’s approach to AI represents a significant departure from previous administrations, which typically employed strict protocols regarding the authenticity of official communications. Former White House Communications Director Marcus Bellamy told reporters, “There used to be multiple verification steps before any image was published through official channels. This new approach of intentional manipulation marks a concerning shift away from truth as a baseline expectation.”
Civil liberties organizations have condemned the manipulation, with the American Civil Liberties Union issuing a statement that the doctored image “represents a dangerous precedent where government agencies can fabricate evidence to discredit critics and protesters.”
The technology industry has also responded with concern. Several major AI companies have announced plans to implement more robust watermarking and detection tools that would make such manipulations more easily identifiable. However, experts note that detection technology often lags behind manipulation capabilities.
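One family of detection techniques the article alludes to is perceptual hashing, which reduces an image to a compact fingerprint that stays stable under compression but changes when content is altered. The sketch below is a minimal, illustrative average-hash implementation on small synthetic 8×8 grayscale arrays (real tools operate on resized full photographs and use more robust hashes); all function names here are hypothetical, not part of any specific vendor's detection product.

```python
import numpy as np

def average_hash(img: np.ndarray) -> np.ndarray:
    """64-bit average hash of an 8x8 grayscale image:
    each bit is 1 where the pixel exceeds the image mean."""
    return (img > img.mean()).flatten()

def hamming_distance(h1: np.ndarray, h2: np.ndarray) -> int:
    """Number of differing bits between two hashes."""
    return int(np.count_nonzero(h1 != h2))

# Deterministic stand-ins for an "original" and an "edited" image.
original = (np.arange(64).reshape(8, 8) * 4).astype(np.uint8)
edited = original.copy()
edited[2:5, 2:5] = 255  # simulate a localized edit, e.g. an altered face region

d_same = hamming_distance(average_hash(original), average_hash(original))
d_diff = hamming_distance(average_hash(original), average_hash(edited))
print(d_same, d_diff)  # identical images give distance 0; the edit raises it
```

In practice, a small Hamming distance suggests the same underlying photograph, while a larger one flags a possible alteration; the threshold is tuned per application, and sophisticated manipulations can still evade simple hashes, which is why experts note detection lags behind manipulation.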
Media literacy advocates emphasize that this incident highlights the critical need for the public to approach visual content with heightened skepticism, particularly when it portrays political opponents in an unflattering light. “We’re entering an era where seeing is no longer believing,” said media literacy educator Priya Sharma. “Citizens need to develop new skills to navigate this landscape of manufactured reality.”
As AI tools become increasingly sophisticated and accessible, the Armstrong incident serves as a stark reminder of the challenges facing democratic societies in the digital age, where the line between factual representation and manipulation grows increasingly blurred.
10 Comments
The use of AI-generated disinformation by government officials is a worrying development that undermines democratic principles. We need robust policies and safeguards to prevent the misuse of these powerful technologies.
Agreed. The public’s trust in their leaders is critical, and manipulating visual evidence to spread falsehoods is a serious breach of that trust. Stricter regulations and oversight are clearly needed.
This incident highlights the critical importance of maintaining journalistic integrity and fact-checking, especially when it comes to official government communications. AI can be a powerful tool, but it must be used responsibly.
Well said. The proliferation of AI-generated disinformation underscores the need for a vigilant and independent media to hold those in power accountable.
It’s deeply concerning to see the government leveraging AI to distort reality and mislead the public. Transparency and accountability must be the guiding principles when it comes to the use of these technologies.
Absolutely. Manipulating images to demonize activists and misrepresent events is a blatant abuse of power. The public deserves the truth, not fabricated propaganda.
The increasing use of AI for disinformation is a troubling trend that raises serious concerns. Governments must be held accountable for any attempts to mislead the public through manipulated media.
Agreed. The public deserves honesty and transparency from their elected officials, not fabricated visuals. Stricter oversight and regulations around AI-generated content are clearly needed.
Disturbing to see the government using AI to manipulate images for propaganda purposes. This undermines public trust and transparency. We need robust safeguards to prevent abuse of these powerful technologies.
Absolutely, the ethical use of AI in government communications is critical. Distorting images to misrepresent events is a dangerous precedent that must be addressed.