
Japan’s National Police Agency reported on December 18, 2025 that in more than half of sexual deepfake cases involving victims under 18, the images were created by classmates or other students at the same school. The agency said it will begin distributing warning materials and using school lectures to discourage teenagers from making obscene AI‑generated images.
This article aggregates reporting from one news source. The TL;DR is AI-generated from the original reporting. Race to AGI's analysis provides editorial context on the implications for AGI development.
This report is a stark reminder that generative AI harm is no longer abstract or confined to fringe corners of the web. When more than half of reported sexual deepfakes of minors in Japan are being made by classmates, we’re looking at a normalization of AI‑assisted abuse at the peer level. That fundamentally changes the threat model: schools, not just platforms and law enforcement, are now on the front line of AI misuse.
For the race to AGI, data like this will drive regulatory and cultural backlash if the industry doesn’t get ahead on safety. As models get better at photorealism and video, the barrier to generating convincing non‑consensual content will keep dropping. That pushes demand for robust provenance systems, watermarking, and detection models that can be deployed at scale in messaging apps and school IT systems. It also strengthens the case for age‑appropriate design codes that explicitly address AI features.
The competitive implication is that foundation model providers and major consumer platforms will be judged not only on capability, but on how effectively they can constrain and monitor misuse, especially by minors. Vendors that can offer built‑in safeguards, auditing hooks, and effective detection APIs will be in a much stronger position as policymakers respond to this kind of evidence.



