Social Media War Content Raises Concerns About AI Fakes and Mental Health Impact
Social media platforms have become flooded with images, videos, and stories about the war in Iran, raising dual concerns about content authenticity and psychological impact. Experts warn that distinguishing between genuine footage and artificial intelligence-generated content has become increasingly difficult for average users, creating a perfect storm of misinformation and anxiety.
At the Milwaukee School of Engineering, Professor Derek Riley, who directs the computer science program and specializes in artificial intelligence, regularly identifies telltale signs of fabricated content. While analyzing an AI-generated video purportedly showing a bombing in Iran, Riley pointed out subtle inconsistencies.
“I’m seeing the building move too quickly,” Riley noted, highlighting one of many indicators that trained professionals use to spot fakes. However, he acknowledged that most social media users lack this specialized knowledge.
“I think we basically can’t trust anything we see — videos, images — unless it’s real life,” Riley cautioned. “I don’t think we can trust that it actually happened.”
This pervasive uncertainty adds another dimension to the already significant psychological toll of consuming war-related content. Mental health professionals at Rogers Behavioral Health in West Allis are observing increased cases of what psychologists term “secondary trauma” – emotional distress resulting from exposure to others’ suffering, even from a distance.
Heather Jones, Chief Clinical Officer at Rogers, explained the psychological mechanism behind this phenomenon: “We feel kind of powerless and helpless in a lot of ways. Sometimes things are happening in our communities, sometimes they’re happening overseas, and there isn’t anything, or there’s maybe a feeling that there’s nothing we can do about it.”
According to Jones, research indicates that even brief exposure to threatening news content – regardless of its authenticity – can trigger measurable increases in anxiety and depression. Just 15 minutes of viewing disturbing content can elevate stress hormones and negatively impact mental wellbeing.
“What is different about today is that threats are available to us 24 hours a day, seven days a week,” Jones said, highlighting how the constant accessibility of social media has fundamentally changed our relationship with global events.
The combination of potentially fabricated content with real tragedy creates a uniquely modern problem. Users may experience genuine emotional reactions to events that never occurred, while simultaneously becoming desensitized to actual suffering. This blurring of reality threatens both our information ecosystem and collective mental health.
Mental health professionals recommend practical steps to mitigate these effects. Jones suggests implementing digital boundaries, such as “putting your phone down 30 minutes before bed, reading or journaling. Writing down five things that you’re grateful for.”
From a media literacy perspective, Riley emphasizes critical thinking as the best defense against misinformation. “I think you have to look at what you’re seeing and think for yourself: ‘Is this real? Do I trust it? What’s the source?’ You have to ask all those questions.”
Social media companies have implemented reporting systems for AI-generated content that violates their policies, though critics argue these measures remain insufficient given the volume and sophistication of today’s fake content. Riley encourages users to report harmful or deceptive AI content to platform moderators.
As AI-generated content becomes increasingly sophisticated, distinguishing truth from fiction will require both individual vigilance and institutional safeguards. Meanwhile, mental health experts continue to emphasize the importance of balanced media consumption during times of global conflict.
7 Comments
The warning from Professor Riley is a sobering reminder of the dangers posed by AI-generated misinformation. As these technologies become more advanced, the potential for harm only increases. Vigilance and critical thinking will be key to navigating this landscape.
This is a complex problem with no easy answers. While we should be concerned about the spread of AI fakes, we also need to consider the mental health impacts of this climate of uncertainty and mistrust. Addressing both sides of this issue will be crucial.
I’m curious to see what solutions and safeguards might be developed to address this challenge. Clearly, more needs to be done to empower users and combat the malicious use of AI in the spread of disinformation. This is an area ripe for innovation and collaboration.
I agree, the inability to trust what we see online is deeply troubling. This issue goes beyond the Iran conflict and speaks to broader challenges around the manipulation of digital media. We must find solutions to safeguard truth and transparency.
This is really concerning. The proliferation of AI-generated content about the Iran conflict could have serious consequences for mental health and the spread of misinformation. We need more robust ways to verify the authenticity of online content.
The proliferation of AI-generated content is a worrying trend that extends far beyond the current Iran conflict. This highlights the need for greater media literacy and critical thinking skills, both for the general public and for those working in the media industry.
It’s worrying that even experts have difficulty distinguishing real footage from AI fakes. This highlights the urgent need for better tools and education to help the public assess the credibility of online information, especially during times of crisis.