On January 8, 2026, Italy’s data protection authority issued a warning about AI tools, including Elon Musk’s Grok, that enable the creation of deepfake images of real people without their consent. The regulator said such services may constitute criminal offenses and serious privacy breaches under EU law, and it is coordinating with Ireland’s DPC on potential action.
This article aggregates reporting from three news sources; the TL;DR is AI-generated from the original reporting, and Race to AGI’s analysis adds editorial context on the implications for AGI development.
Italy’s warning to Grok over non-consensual deepfake imagery shows how quickly AI safety concerns are moving from think-tank papers to enforcement letters. After weeks of reports that Grok’s image-editing features were being used to digitally undress women and even minors, regulators are framing such tools as potentially criminal, not merely problematic. This shifts the debate from ‘content moderation’ into the realm of hard law and criminal liability.([dunyanews.tv](https://dunyanews.tv/amp/english/928492.php))
For the AI industry, the signal is blunt: if your product can be trivially abused to violate bodily autonomy at scale, national and EU-level regulators will step in. That will drive a wave of investment into safety-by-design, abuse prevention, and better default guardrails—especially for multimodal models. It also increases the regulatory risk premium for labs that chase speed and virality over disciplined deployment.
For AGI, the impact cuts both ways. On one hand, aggressive enforcement could slow the most reckless product rollouts and force companies to internalize some of the societal costs. On the other, clumsy enforcement might push powerful capabilities into less regulated jurisdictions or underground communities. Either way, Grok has become a high-profile case study against which future AGI-like systems will be judged.

