China’s Cyberspace Administration (CAC) published draft rules on December 27, 2025, to regulate AI services that simulate human personalities and engage in emotional interaction. On December 28, outlets in China and abroad detailed the proposed requirements, including content red lines, addiction safeguards, and mandatory “core socialist values” alignment for anthropomorphic AI.
China’s new draft rules on anthropomorphic AI are one of the clearest attempts yet to regulate “AI companions” as a distinct category, separate from generic chatbots or productivity tools. By explicitly targeting systems that simulate personalities and engage users emotionally, Beijing is acknowledging that agent‑like behavior is not just a UX flourish but a meaningful socio‑technical risk surface.
For the race to AGI, this is important on two fronts. First, it signals that as models become more agentic and embedded in people’s daily lives, states will regulate not only what models can compute but how they bond with users. Requirements to avoid emotional manipulation, detect extreme user distress, and hand off to humans create an additional layer of safety infrastructure that other jurisdictions will likely study or emulate. Second, tying these systems to “core socialist values” underscores that alignment is being defined geopolitically as well as technically: Chinese providers will be pushed to embed explicit ideological and behavioral constraints into their most human‑like agents.
Competitive implications are nuanced. Chinese labs may face higher compliance costs in consumer‑facing agents, but clear rules can also de‑risk large‑scale deployments, especially for incumbents like Baidu, Alibaba, Tencent, and ByteDance. Meanwhile, Western labs now have a concrete example of what a vertically integrated, state‑driven alignment regime for emotional AI looks like: useful context as regulators in the US and EU debate their own guardrails for agentic systems.