In a striking display of digital misinformation’s evolving threat, AI-generated images falsely depicting Representative Ilhan Omar posing with her attacker flooded social media platforms less than 24 hours after President Donald Trump suggested she had staged an attack on herself.
The manipulated photos, which appeared across X and Facebook, showed Omar smiling alongside the man who sprayed apple cider vinegar on her during a town hall meeting. Some variations depicted the congresswoman handing cash to the attacker, implying she had paid for the stunt. While these crude fabrications were easily debunked—one simply superimposed Omar’s image over another woman in the attacker’s genuine Facebook photo—they nonetheless achieved their intended effect.
A false narrative quickly took hold in conservative circles that Omar had orchestrated the incident. Even among viewers who recognized the images as fake, the content created enough uncertainty to discourage further inquiry into what actually happened—an effect experts say is deliberate.
“The sustained experience of living through disinformation changes people’s capacity to participate meaningfully in democratic life,” explains Dmytro Iarovyi, an associate professor at the Kyiv School of Economics who studies disinformation. “In fact, it’s one of the major tasks of modern disinformation—not to persuade people in something, yet to discourage them, turn them into passive, tired, exhausted mob.”
This emerging pattern represents what could be termed “strategic memes against public participation”—visual content designed specifically to confuse, sow doubt, and chill public engagement with political issues. The impact is evident in social media discussions where users express uncertainty rather than conviction about what’s real.
Fake visuals now accompany virtually every significant news event. AI-generated images purporting to show Jeffrey Epstein alive in Tel Aviv or with fabricated associates have garnered millions of views despite obvious flaws like gibberish Hebrew text on road signs.
Georgetown University researcher Renée DiResta, a globally recognized expert on disinformation, warns that detection has become increasingly difficult: “We have crossed the threshold of it being virtually impossible for people to tell just with the human eye whether something is real or fake.”
Research confirms the damaging impact. Studies from the University of Hong Kong and Vanderbilt University found that susceptibility to fake news increases when false headlines are paired with realistic-looking photos. Additional research from the University of New South Wales shows people consistently overestimate their ability to identify AI-generated faces.
The Trump administration has emerged as a significant player in this visual disinformation landscape. The White House published a doctored photo of activist Nekima Levy Armstrong during an immigration crackdown in Minnesota, showing her in tears and with darkened skin as she was arrested at a demonstration. When questioned about the manipulation, White House spokesperson Abigail Jackson responded with mockery, posting a meme ridiculing fact-checkers.
Weeks later, the administration published an AI-generated TikTok video falsely depicting Team USA hockey star Brady Tkachuk making derogatory comments about Canadians, forcing the athlete to publicly deny the content.
“When it became clear that the U.S. government itself was doing this to own its domestic enemies, that was alarming,” DiResta notes. She distinguishes between obvious political propaganda and genuinely manipulative content that contributes to institutional distrust and information overload.
The most concerning outcome is what experts call “truth decay”—a condition where constant exposure to disinformation produces both cynicism and disengagement. “A high-volume, repetitive environment doesn’t need to persuade you of a specific lie,” explains Iarovyi. “It can persuade you that truth is inaccessible, so politics becomes vibes, identity, and tribe.”
Research from Harvard Kennedy School has shown that this information environment directly impacts civic participation, with distrustful citizens more likely to abstain from voting or support populist candidates. A 2014 University of Kent study demonstrated that exposure to conspiracy theories reduced participants’ intentions to engage in politics altogether.
The potential long-term consequences echo warnings from political theorists like Hannah Arendt about pathways to autocracy when citizens become sufficiently confused and disarmed by disinformation.
Countries with longer histories of battling state-backed disinformation campaigns, particularly Baltic nations, have developed more sophisticated resilience strategies. “They don’t treat disinformation as a temporary ‘media trend’ that will pass,” Iarovyi notes. “That mindset changes the work—it pushes you toward long-term capacity building and institutional routines, not just reactive debunks.”
As the United States grapples with increasingly realistic fake images—some promoted by government entities—the challenge extends beyond identifying individual falsehoods to maintaining a fundamental shared reality necessary for democratic functioning.