The fatal shooting by immigration agents of an intensive care nurse who worked with veterans has fueled a wave of AI-generated misinformation online, according to a Snopes investigation.
In the days following the January 24 death of 37-year-old Alex Pretti in Minneapolis, social media users began circulating an image purportedly showing the nurse assisting two disabled veterans with amputated legs during physical therapy. The image quickly spread across multiple platforms including Facebook, Instagram, Reddit, Threads, Bluesky and X.
Snopes has determined the image is fake, created using artificial intelligence technology. Multiple readers contacted the fact-checking organization questioning the authenticity of the photo, which showed Pretti helping two apparent double amputees using parallel bars during rehabilitation.
Digital forensics revealed several telltale signs of AI generation. Most notably, the American flag visible in the background displayed only 11 red and white stripes instead of the correct 13. Additionally, one of the men in the image appeared with only a single prosthetic leg, which could either be an intentional choice or a failure of the AI to properly render both prostheses.
Technical analysis using Google’s SynthID Detector tool confirmed the presence of a digital watermark indicating the image was created or manipulated using Google’s AI platforms. While some AI detection websites identified the image as artificially generated, others incorrectly classified it as authentic, highlighting the current limitations of such verification tools.
Pretti, who worked for the U.S. Department of Veterans Affairs (VA) as an intensive care nurse caring for critically ill veterans, was fatally shot by federal immigration agents on January 24, 2026. The circumstances surrounding his death have generated significant public attention.
Extensive searches across major search engines revealed no credible news coverage or information about Pretti assisting with physical therapy for amputee veterans as depicted in the fake image. If authentic, such imagery would likely have been covered by legitimate news organizations given the high-profile nature of the case.
While the parallel bars rehabilitation image has been conclusively debunked, some social media users have also shared a video from December 2024 showing Pretti reading a final salute for a veteran in a VA hospital. Though Snopes has not independently verified that video, multiple reputable news outlets have reported it as authentic footage of Pretti performing his professional duties.
The spread of this fabricated image reflects a growing trend of AI-generated content being used to shape narratives around tragic events and controversial incidents. Such misinformation can complicate public understanding of sensitive situations and potentially inflame tensions during periods of social unrest.
The Associated Press and Military.com have reported that Pretti worked as an ICU nurse for the Veterans Affairs system prior to his death. According to these sources, he was shot during a protest in Minneapolis, though the complete circumstances surrounding the incident remain under investigation.
This case highlights the increasing sophistication of AI-generated imagery and the challenges faced by the public, journalists, and fact-checkers in distinguishing authentic content from convincing forgeries, especially during fast-moving and emotionally charged events.