In the early hours of May 7, EU negotiators agreed to amend the AI Act to outlaw systems that generate non‑consensual sexual deepfakes or child sexual abuse imagery. The political deal also postpones enforcement of some high‑risk AI obligations and watermarking requirements until between late 2026 and 2028.
This article aggregates reporting from 6 news sources. The TL;DR is AI-generated from original reporting. Race to AGI's analysis provides editorial context on implications for AGI development.
The EU’s move to explicitly ban sexual deepfakes and AI‑generated child abuse imagery shows how quickly lawmakers are tightening guardrails around some of the most toxic uses of generative models. This isn’t about frontier‑scale model capability so much as trust in the broader AI ecosystem: if mainstream users feel unprotected against reputational or privacy harms, political pressure for far more restrictive rules will build.
What matters for the race to AGI is the signal this sends about enforcement capacity and regulatory agility. The AI Act was only finalized a few years ago, and it is already being amended in response to real‑world harms from tools like Grok’s nudification feature. That suggests Europe is willing to iterate on its rules as model behavior evolves rather than let a static framework age badly. For global labs, it raises the bar on the safety tooling, content filters, and abuse monitoring they will need to ship by default, especially for open or API‑based systems. The more this compliance work becomes standardised, the easier it will be to reuse those controls for more powerful future models.