Nearly 50 kids harmed

Explicit deepfake scandal shuts down Pennsylvania school

Parents test if school leaders can be prosecuted over failure to report AI nudes.

Ashley Belanger

An AI-generated nude photo scandal has shut down a Pennsylvania private school. On Monday, classes were canceled after parents forced school leaders to either resign or face a lawsuit accusing the school of skipping mandatory reporting of the harmful images and potentially seeking criminal penalties.

The outcry erupted after a single student created sexually explicit AI images of nearly 50 female classmates at Lancaster Country Day School, Lancaster Online reported.

Head of School Matt Micciche seemingly first learned of the problem in November 2023, when a student anonymously reported the explicit deepfakes through "Safe2Say Something," a school reporting portal run by the state attorney general's office. But Micciche allegedly did nothing, allowing more students to be targeted for months until police were tipped off in mid-2024.

Cops arrested the student accused of creating the harmful content in August. The student's phone was seized as cops investigated the origins of the AI-generated images. But the arrest did not satisfy parents, who were shocked by the school's failure to uphold mandatory reporting responsibilities that apply to any suspicion of child abuse. Last week, they filed a court summons threatening to sue unless the school leaders responsible for the mishandled response resigned within 48 hours.

This tactic successfully pushed Micciche and the school board's president, Angela Ang-Alhadeff, to "part ways" with the school, both resigning effective late Friday, Lancaster Online reported.

In a statement announcing that classes were canceled Monday, Lancaster Country Day School—which, according to Wikipedia, serves about 600 students in pre-kindergarten through high school—offered support during this "difficult time" for the community.

Parents do not seem ready to drop the suit, as the school leaders seemingly dragged their feet and resigned two days after the parents' deadline. The parents' lawyer, Matthew Faranda-Diedrich, told Lancaster Online Monday that "the lawsuit would still be pursued despite executive changes."

Classes are planned to resume on Tuesday, Lancaster Online reported. But students seem unlikely to let the incident go without further action to help girls feel safe at school. Last week, more than half the school walked out, MSN reported, forcing classes to be canceled as students and some faculty members called for resignations and additional changes from remaining leadership.

Before he resigned, Micciche said he supported the walkout and promised to "work together with our students to find a way forward that promotes healing and will be guided by what we hear from them."

"Our students rightfully exercised their voice today to express their concern and frustration with the school's response to the situation involving deepfake nudes," Micciche said. "Many feel strongly that we haven't been as open and communicative as we could, adding to their pain."

US must act now to counter AI deepfakes, researchers say

In the US, the feds are currently testing whether existing laws protecting kids against abuse are enough to shield them from AI harms. Some lawmakers are also pushing legislation to combat the proliferation of this specific kind of harmful content online.

Some proposed laws are seeking to criminalize not just the creation of explicit AI images but also the sharing of harmful content. Under one proposed law, anyone sharing deepfake pornography without an individual's consent risks damages that could go as high as $150,000 and imprisonment of up to 10 years if sharing the images facilitates violence or impacts the proceedings of a government agency.

But most of these proposed US laws have quickly stalled, while kids as young as 12 or 13 continue to risk being victimized, and penalties for distributing AI-generated nudes of children appear rare under current laws. In May, the feds arrested a software engineer accused of using AI-generated child sex abuse materials (CSAM) to groom a teen on Instagram. That case could put to the test the US Department of Justice's declaration that "CSAM generated by AI is still CSAM." (Some experts have suggested that cyberbullying laws, not CSAM laws, may apply in these sorts of cases.)

Unlike some other countries, the US does not appear to be acting urgently to counter the emerging threat. Perhaps the strongest response has been in South Korea, where a deepfake porn crisis has rattled schools and universities. Hundreds of people there have already been arrested in a sustained crackdown, planned to stretch over seven months, aimed at stopping harmful content from spreading on platforms like Telegram. As part of that crackdown, South Korea announced tougher penalties, including a maximum three-year prison sentence for watching nonconsensual explicit AI videos and up to five or seven years for production or distribution.

In South Korea, the government also more than doubled the number of officials monitoring social media platforms for the harmful content. The US could take similar steps. According to Marie-Helen Maras and Kenji Logie, researchers at John Jay College of Criminal Justice, the US could approach the "herculean task" of reducing the volume of nonconsensual sexually explicit deepfakes by introducing legal liability requiring platforms, search engines, and Internet service providers (ISPs) to remove the content, similar to how CSAM laws work.

In an October Crime Science journal article, the researchers concluded that "currently, US federal law does not provide targets of nonconsensual" explicit deepfakes with "effective remedies." A big part of the problem is that Internet intermediaries like platforms and ISPs handle the content inconsistently. While some try to remove it when it's flagged, even platforms with specific policies banning the content seem to enforce them inconsistently, the researchers said. Other intermediaries "have done nothing at all and/or encouraged the uploading," the researchers found.

Ultimately, even in a world where all Internet intermediaries and governments agreed that the content should be immediately removed, the researchers concluded that "outlawing all deepfakes is unrealistic and unfeasible," especially since the harmful AI-generated images already out there are likely to "remain online indefinitely." That is why proactive responses that make it harder to upload AI-generated nonconsensual explicit images are critical to reducing harms as this content proliferates online, the researchers urged, recommending a range of legal updates to protect US victims.

Ashley Belanger Senior Policy Reporter
Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.