A January 16 Guardian investigation found that users can still generate highly sexualised AI videos of women with the standalone Grok Imagine tool and post them publicly on X, despite X’s recent claim to have blocked such misuse. UK regulator Ofcom’s investigation continues, while Canada and several Asian governments are also scrutinising Grok.
This article aggregates reporting from two news sources. The TL;DR is AI-generated from the original reporting. Race to AGI's analysis provides editorial context on implications for AGI development.
The Grok scandal is becoming a case study in how not to roll out powerful generative tools. X publicly claimed it had implemented safeguards to stop Grok from undressing real people, only for reporters to immediately demonstrate a trivial workaround via the standalone Grok Imagine app. That disconnect between messaging and reality is eroding trust not only in xAI but in the broader industry’s ability to self‑regulate on abuse vectors like non‑consensual sexual imagery.
For the race to AGI, this matters because it strengthens the hand of regulators who argue that voluntary codes are inadequate. Ofcom is already investigating X under the UK’s Online Safety Act, and other regulators from Canada to Southeast Asia are watching closely. If this episode leads to stricter legal duties around deepfakes, content provenance and default safety settings, it could impose additional compliance overhead on all frontier labs, not just xAI. More subtly, it’s also shifting public discourse: instead of marvelling at Grok’s creativity, the headline use case in many people’s minds is industrial‑scale harassment.