Schools across America are grappling with a disturbing trend: students using artificial intelligence to transform innocent photos of classmates into sexually explicit deepfakes, leaving victims traumatized and administrators scrambling for solutions.
The issue came to national attention this fall when AI-generated nude images circulated through a Louisiana middle school. Two boys were eventually charged, but only after one victim was expelled for starting a fight with a boy she accused of creating fake images of her and her friends.
“While the ability to alter images has been available for decades, the rise of AI has made it easier for anyone to alter or create such images with little to no training or experience,” Lafourche Parish Sheriff Craig Webre noted in a public statement, urging parents to discuss this growing concern with their children.
The Louisiana case is believed to be the first prosecution under the state’s new law targeting deepfakes, according to Republican state Senator Patrick Connick, who authored the legislation. It represents part of a nationwide legal response, with at least half of U.S. states enacting legislation in 2025 addressing generative AI’s creation of fabricated images and sounds. Many of these laws specifically target simulated child sexual abuse material.
Similar incidents have led to prosecutions in Florida and Pennsylvania, while schools in California have expelled students involved in creating such content. In a particularly troubling case, a fifth-grade teacher in Texas was charged with using AI to create child pornography featuring his students.
Deepfake technology has become dramatically more accessible in recent years. Sergio Alexander, a research associate at Texas Christian University who studies the issue, explained that what once required significant technical expertise has become alarmingly simple.
“Now, you can do it on an app, you can download it on social media, and you don’t have to have any technical expertise whatsoever,” Alexander said.
The statistics reflect this troubling shift. The National Center for Missing and Exploited Children reported that AI-generated child sexual abuse images submitted to its cyber tipline skyrocketed from 4,700 in 2023 to 440,000 in just the first six months of 2025—a nearly hundredfold increase.
Despite this rapid escalation, experts worry that educational institutions aren’t responding adequately. Sameer Hinduja, co-director of the Cyberbullying Research Center and professor at Florida Atlantic University’s School of Criminology and Criminal Justice, recommends that schools update their policies on AI-generated deepfakes and improve how they communicate these policies to students.
“Students think that the staff, the educators are completely oblivious, which might make them feel like they can act with impunity,” Hinduja explained, adding that many parents incorrectly assume schools are addressing the issue when they aren’t. “We hear about the ostrich syndrome, just kind of burying their heads in the sand, hoping that this isn’t happening amongst their youth.”
The psychological impact of AI deepfakes differs significantly from traditional bullying. Rather than facing a nasty text or rumor, victims confront seemingly real images or videos that often go viral and repeatedly resurface, creating an ongoing cycle of trauma. Many victims develop depression and anxiety as a result.
“They literally shut down because it makes it feel like, you know, there’s no way they can even prove that this is not real—because it does look 100% real,” Alexander noted.
Experts encourage parents to initiate conversations about AI-manipulated content by casually asking their children if they’ve seen fake videos online. Laura Tierney, founder and CEO of The Social Institute, which educates people on responsible social media use, emphasizes that children need to know they can discuss these issues with parents without fear of punishment or losing access to their devices.
Tierney recommends using the acronym SHIELD as a response guide: Stop and don’t forward; Huddle with a trusted adult; Inform social media platforms; collect Evidence but don’t download anything; Limit social media access; and Direct victims to help.
“The fact that that acronym is six steps I think shows that this issue is really complicated,” she said.
As AI technology continues to advance, schools, parents, and lawmakers face the challenge of keeping pace with appropriate responses and preventative measures. Without coordinated efforts across these fronts, the problem of AI-generated deepfakes in schools threatens to expand further, leaving more students vulnerable to its devastating effects.