China’s Cyberspace Administration (CAC) released draft rules on December 27, 2025, to regulate AI services that simulate human personalities and engage in emotional interaction. On December 28, outlets including Singapore’s Zaobao and Chinese state-linked sites published detailed explainers describing requirements such as addiction warnings, emotional-risk monitoring, and stricter data-use limits for training AI companions.
This article aggregates reporting from five news sources. The TL;DR is AI-generated from the original reporting. Race to AGI's analysis provides editorial context on the implications for AGI development.
Beijing’s new draft rules for “anthropomorphic interactive AI services” are among the most detailed attempts yet to regulate AI companions and emotionally rich agents. The CAC text and accompanying expert commentaries frame these systems as high‑risk: they explicitly call out emotional dependence, cognitive manipulation, and mental health harms, and they propose lifecycle obligations for providers, from design to shutdown. This is not generic AI policy; it targets chatbots, virtual idols, and AI partners that mimic human personalities and sustain long‑term user relationships. ([cac.gov.cn](https://www.cac.gov.cn/2025-12/28/c_1768662848000498.htm?utm_source=openai))
Strategically, this moves China toward a differentiated governance model for “AI with feelings.” Providers must warn users against overuse, detect distress, escalate suicidal content to humans, and, crucially, obtain explicit consent before using user interaction logs and sensitive personal data for model training. That last requirement cuts against the default data‑hungry posture of frontier labs and, if rigorously enforced, could slow the unconstrained scaling of Chinese “AI girlfriend” and therapy bots. ([finance.people.com.cn](https://finance.people.com.cn/n1/2025/1228/c1004-40633722.html?utm_source=openai))
For the global race to AGI, this is a bellwether: as AI systems become more agentic and emotionally persuasive, governments will regulate not only model capability but also the *relationship layer*. China is effectively saying that emotionally immersive AI is closer to healthcare or gambling than to generic software, and should be treated with comparable safeguards.


