China’s Cyberspace Administration released draft “interim measures” on December 27, 2025, to regulate anthropomorphic AI services that mimic human personalities and emotions. Providers must clearly label AI interactions, guard against user over‑dependence, and conduct security assessments once a system reaches 1 million registered users or 100,000 monthly active users.
This article aggregates reporting from 6 news sources. The TL;DR is AI-generated from original reporting. Race to AGI's analysis provides editorial context on implications for AGI development.
China’s new draft rules are one of the clearest blueprints yet for how governments may treat human‑like, emotionally engaging AI as a distinct, higher‑risk category. Instead of focusing on training data or model size, Beijing is drawing a regulatory perimeter around systems that simulate personality and form long‑term parasocial relationships—companions, tutors, therapists, influencers. That reflects real anxiety about psychological manipulation and addiction, but it also effectively defines a product class that regulators everywhere can now copy or adapt.([english.news.cn](https://english.news.cn/20251227/b28f826397cf4a03aca8a6c8952e4b30/c.html?utm_source=openai))
Strategically, this is a shot across the bow for both Chinese giants and any foreign player hoping to operate intimate AI agents in the world’s second‑largest economy. Providers will need explicit AI labelling, periodic “are you still OK?” check‑ins, security assessments, alignment with “core socialist values,” and reporting obligations once they cross the 1 million user threshold. That raises compliance costs and makes it harder for smaller players or open‑source projects to compete with well‑capitalized incumbents who can afford governance teams.
For the broader race to AGI, the signal is that frontier capability and frontier intimacy will not be allowed to move in lockstep. China is still all‑in on AI as an industrial and strategic engine, but it’s carving human‑like agents into a tightly supervised lane. Expect other jurisdictions to borrow these ideas as they grapple with AI companions, synthetic influencers and agentic assistants.


