On January 11, 2026, The European Conservative reported that UK Prime Minister Keir Starmer has threatened to ban Elon Musk's X in Britain unless it removes the Grok generative-AI tools that can create non-consensual pornographic deepfakes. The article also notes the government's willingness to use Online Safety Act powers to mandate client-side scanning of encrypted messages, with Ofcom expected to outline next steps in the coming months.
This article aggregates reporting from one news source. The TL;DR is AI-generated from the original reporting. Race to AGI's analysis provides editorial context on the implications for AGI development.
The UK is emerging as one of the more assertive democracies in grappling with generative AI's role in sexual-abuse imagery and encrypted communications. Threatening a full ban on X over Grok's deepfake capabilities moves the debate from abstract harms to specific product levers: which AI features are socially tolerable, and at what point does their presence justify restricting an entire platform. This is not about AGI per se, but it telegraphs how governments might respond if future, more capable systems make it even easier to generate targeted, high-fidelity synthetic harm.
The second front, client-side scanning of encrypted messages, goes to the heart of trust in AI-infused infrastructure. Client-side scanning inspects a message on the sender's device, in plaintext, before it is encrypted, so end-to-end encryption no longer guarantees that content went unexamined. If AI systems are screening everyone's private messages "for safety" at that point, that is a qualitatively different surveillance substrate feeding into machine-learning models. In the longer run, these fights could shape what data is available to train and evaluate powerful models, and whether citizens are willing to use AI-enhanced services at all. They also risk fragmenting the internet if some platforms or features must be substantially altered or disabled country by country to comply.
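To make the architectural point concrete, here is a minimal sketch of the client-side scanning pattern, assuming a simple hash-matching scanner. Every name in it (scan_plaintext, send_message, BLOCKED_HASHES) is hypothetical rather than any real messenger's or regulator's API, and the XOR "cipher" is a placeholder for a real end-to-end scheme. What the sketch shows is structural: the content check runs on the plaintext, on the device, before encryption ever happens.

```python
# Hypothetical sketch of client-side scanning, not any real messenger's code.
# The key structural fact: the message is inspected in plaintext on the
# client *before* encryption, so E2E encryption no longer implies the
# content went unexamined.

import hashlib

# Hypothetical hash blocklist distributed to clients (placeholder digest).
BLOCKED_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}


def scan_plaintext(message: bytes) -> bool:
    """Return True if the plaintext message matches the on-device blocklist."""
    digest = hashlib.sha256(message).hexdigest()
    return digest in BLOCKED_HASHES


def encrypt(message: bytes, key: bytes) -> bytes:
    """Stand-in for a real end-to-end cipher; XOR keystream for illustration only."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(message))


def send_message(message: bytes, key: bytes) -> bytes:
    # The scan runs while the message is still plaintext on the client.
    if scan_plaintext(message):
        raise PermissionError("message flagged by client-side scanner")
    # Only after scanning does encryption occur: the transport sees
    # ciphertext, but the content was already examined before the
    # encryption boundary.
    return encrypt(message, key)


if __name__ == "__main__":
    ciphertext = send_message(b"hello", key=b"secret")
    print(ciphertext.hex())
```

The design choice under debate is exactly where scan_plaintext sits in this pipeline: moving inspection before the encryption call preserves the ciphertext on the wire while removing the guarantee that no third-party logic ever saw the plaintext.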