On December 27, 2025, the Puerto Rican daily El Nuevo Día reported that AI‑generated nude images of a 13‑year‑old girl and her classmates had circulated at a Louisiana middle school, leading to severe bullying. After the girl confronted a boy on the school bus, she was expelled for more than 10 weeks, while two boys were only later charged with distributing the images.
This article aggregates reporting from two news sources. The TL;DR above is AI-generated from the original reporting; Race to AGI's analysis below provides editorial context on the implications for AGI development.
This case is a grim example of how cheap generative tools are colliding with institutions that are completely unprepared. A handful of middle‑schoolers were able to generate convincing fake nudes of classmates, weaponize them through ephemeral apps, and upend a girl’s life before adults could even confirm the images existed. The fact that the victim, not the perpetrators, initially faced the harshest school punishment shows how poorly our disciplinary systems map onto AI‑enabled harms.
For the AGI conversation, stories like this matter because they shift the focus from hypothetical existential risks to very real, very current social damage. We’re already living with systems that can, in a few clicks, create reputationally catastrophic content about any person with a social media profile. As models grow in fidelity and personalization, the baseline potential for abuse keeps rising, even if we never reach sci‑fi superintelligence.
The deeper lesson is that “AI safety” cannot be left to model labs alone. School districts, law enforcement, platform providers and regulators need playbooks and accountability frameworks for AI‑mediated harassment—what counts as evidence, who is protected, and how quickly adults must act. If we can’t handle deepfakes among eighth‑graders, we’re nowhere near ready for the information‑warfare capabilities of more advanced systems.



