China’s cyberspace regulator released draft rules on December 27, 2025, to govern AI services that simulate human personalities and emotional interaction. The measures would require providers to clearly label AI, monitor users for signs of addiction, and block content that threatens national security or violates core socialist values.
This article aggregates reporting from six news sources. The TL;DR is AI-generated from original reporting. Race to AGI's analysis provides editorial context on implications for AGI development.
Beijing’s new draft rules go after a very specific frontier: AI systems that look, talk, and feel like people. The Cyberspace Administration of China wants providers of anthropomorphic, emotionally interactive AI to label bots clearly, track users’ emotional states and dependency, and intervene when overuse or addiction is detected — all while banning politically and morally disfavored content.([finance.sina.com.cn](https://finance.sina.com.cn/jjxw/2025-12-28/doc-inhehmtz9615866.shtml)) This is less about generic chatbots and more about AI companions, therapists, influencers, and virtual idols.
Strategically, China is signaling that the next regulatory frontier is affective AI: systems that can build parasocial relationships and subtly steer behavior. Requirements to monitor emotions, throttle engagement, and align outputs with “core socialist values” will push Chinese labs to bake safety, content control, and usage analytics deep into model-serving stacks.([finance.sina.com.cn](https://finance.sina.com.cn/jjxw/2025-12-28/doc-inhehmtz9615866.shtml)) At the same time, mandatory lifecycle safety management and algorithm review increase fixed compliance costs, favoring large incumbents over smaller startups.
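To make that claim concrete, here is a minimal, hypothetical sketch of where such duties could sit in a serving layer: a gateway that labels every reply as AI-generated, tracks per-user turns and session time, and substitutes a break prompt once usage crosses a threshold. Everything here, including the class names, the numeric thresholds, and the echo model stub, is illustrative; the draft rules reportedly mandate disclosure and overuse intervention but do not prescribe an implementation.

```python
# Illustrative only: a toy serving-layer gateway showing where the draft's
# duties (AI labeling, usage tracking, overuse intervention) could live.
# All thresholds and names are hypothetical, not taken from the rules.
import time
from dataclasses import dataclass, field


@dataclass
class Session:
    """Tracks per-user interaction volume for dependency monitoring."""
    started_at: float = field(default_factory=time.time)
    turns: int = 0


class CompanionGateway:
    # Hypothetical limits; the draft requires intervention on overuse
    # but (publicly, at least) fixes no numeric thresholds.
    MAX_TURNS_PER_SESSION = 50
    MAX_SESSION_SECONDS = 2 * 60 * 60

    def __init__(self, model):
        self.model = model  # any callable: prompt -> reply text
        self.sessions: dict[str, Session] = {}

    def chat(self, user_id: str, prompt: str) -> str:
        session = self.sessions.setdefault(user_id, Session())
        session.turns += 1

        # Overuse intervention: break the interaction loop instead of replying.
        elapsed = time.time() - session.started_at
        if session.turns > self.MAX_TURNS_PER_SESSION or elapsed > self.MAX_SESSION_SECONDS:
            return "[AI companion] You've been chatting for a while. Consider taking a break."

        reply = self.model(prompt)
        # Mandatory disclosure: every reply is visibly labeled as AI-generated.
        return f"[AI-generated] {reply}"


if __name__ == "__main__":
    gateway = CompanionGateway(model=lambda p: f"echo: {p}")
    print(gateway.chat("user-1", "hello"))
```

The point of the sketch is structural: once disclosure, usage analytics, and intervention are compliance obligations, they stop being product features and become infrastructure that every request passes through, which is exactly the kind of fixed cost that favors incumbents.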
For the global race to AGI, this is another data point suggesting that high-capability, high-intimacy models will face bespoke regulation, not just generic AI rules. Western policymakers are already watching Chinese experiments closely; if these rules reduce harms without killing innovation, expect similar disclosure and addiction-mitigation duties, plus limits on emotional manipulation, to show up in EU and US rulebooks.