Regulation
Sunday, December 28, 2025

China drafts strict rules for anthropomorphic AI emotional companions

Source: Cyberspace Administration of China (CAC) / China Net

TL;DR

AI-summarized from 5 sources

China’s Cyberspace Administration released draft rules on December 27, 2025, to regulate AI services that simulate human personalities and engage in emotional interaction with users. On December 28, outlets including Singapore’s Zaobao and Chinese state-linked sites published detailed explainers describing requirements such as addiction warnings, emotional risk monitoring, and stricter limits on using user data to train AI companions.

About this summary

This article aggregates reporting from 5 news sources. The TL;DR is AI-generated from original reporting. Race to AGI's analysis provides editorial context on implications for AGI development.


Race to AGI Analysis

Beijing’s new draft rules for “anthropomorphic interactive AI services” are some of the most detailed attempts yet to regulate AI companions and emotionally rich agents. The CAC text and accompanying expert commentaries frame these systems as high‑risk: they explicitly call out emotional dependence, cognitive manipulation, and mental health harms, and propose lifecycle obligations for providers—from design to shutdown. This isn’t generic AI policy; it is targeted at chatbots, virtual idols, and AI partners that mimic human personalities and sustain long‑term user relationships. ([cac.gov.cn](https://www.cac.gov.cn/2025-12/28/c_1768662848000498.htm?utm_source=openai))

Strategically, this moves China toward a differentiated governance model for “AI with feelings.” Providers must warn against overuse, detect distress, escalate suicidal content to humans, and—crucially—are barred from using user interaction logs and sensitive personal data for model training without explicit consent. That last point cuts against the default data‑hungry posture of frontier labs, and if rigorously enforced, could slow unconstrained scaling of Chinese “AI girlfriend” and therapy bots. ([finance.people.com.cn](https://finance.people.com.cn/n1/2025/1228/c1004-40633722.html?utm_source=openai))

For the global race to AGI, this is a bellwether: as AI systems become more agentic and emotionally persuasive, governments will not only regulate model capability but also the *relationship layer*. China is effectively saying that emotionally immersive AI is closer to healthcare or gambling than to generic software—and should be treated with comparable safeguards.

May delay AGI timeline

Who Should Care

Investors · Researchers · Engineers · Policymakers

Coverage Sources

- Cyberspace Administration of China (CAC) / China Net (Chinese)
- Cyberspace Administration of China (CAC) / China Net (Chinese)
- Xinhua (via People’s Daily Finance) (Chinese)
- Zaobao (联合早报) (Chinese)
- Reuters