China is tightening its grip on anthropomorphic AI services. New draft regulations mandate user transparency and addiction safeguards, reflecting growing concerns over emotional dependency. This move signals a broader trend of governments regulating AI to protect users, especially vulnerable populations like teens and the elderly.
China's regulatory landscape for AI is evolving rapidly, particularly around human-like services. The Cyberspace Administration of China (CAC) drafted the new rules in response to rising concern about the emotional risks of AI companions. The regulations require clear labeling of AI interactions, addiction monitoring, and strict limits on how user data may be used, with the emphasis on protecting minors and other vulnerable users.
The CAC unveiled the draft rules on December 27, 2025; they include provisions for crisis-response handoffs to human operators and usage-time limits, and they follow a series of discussions and public consultations on the psychological effects of AI companions. The proposal also requires that these services align with 'core socialist values', a distinctly Chinese intersection of technology and ideology.
The implications are significant. Companies that develop AI companions will face increased scrutiny and new operational burdens: compliance will require investment in monitoring systems and user education, and the rules could slow the pace of innovation as firms balance government standards against engaging user experiences.
As the draft regulations undergo public comment until January 25, 2026, the industry will be watching closely. The final rules could set a precedent for how other countries approach the regulation of emotional AI, especially as concerns about mental health and user safety continue to rise globally.
- Expect increased compliance costs for AI companies operating in China.
- Regulatory hurdles may stifle innovation in emotional AI research.
- Engineers will need to focus on building compliant systems that prioritize user safety (a sketch of what that could look like follows this list).
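
To make that concrete, here is a minimal sketch of two of the proposed obligations, labeling AI-generated replies and handing crisis conversations off to a human. The class, the keyword screen, and the `escalate_to_human` hook are illustrative assumptions, not anything the draft itself specifies.

```python
from dataclasses import dataclass, field

# Hypothetical keyword screen; a production system would use a trained
# risk classifier rather than substring matching.
CRISIS_TERMS = {"self-harm", "suicide", "hurt myself"}

@dataclass
class CompanionSession:
    user_id: str
    messages: list = field(default_factory=list)

    def respond(self, user_text: str, model_reply: str) -> str:
        # Crisis-response handoff: route high-risk turns to a human operator.
        if any(term in user_text.lower() for term in CRISIS_TERMS):
            return self.escalate_to_human(user_text)
        # Identity transparency: label every AI-generated reply as such.
        labeled = f"[AI-generated reply] {model_reply}"
        self.messages.append((user_text, labeled))
        return labeled

    def escalate_to_human(self, user_text: str) -> str:
        # Placeholder hook: in practice this would enqueue the session
        # for a trained human operator.
        return "[Human operator] A staff member will continue this conversation."
```

A real deployment would wire the handoff into an actual operator queue and log the escalation for the monitoring and auditing obligations described above.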


On January 1, 2026 at 08:21 CST, China’s Workers’ Daily, as carried by The Paper, highlighted a new draft regulation from the Cyberspace Administration of China governing “anthropomorphized” AI interaction services such as companion chatbots. The draft rules propose identity transparency, usage‑time limits, crisis‑response handoff to humans, and stronger privacy protections for users, with public comments open until January 25, 2026.

China’s Cyberspace Administration released the draft rules on December 27, 2025 to regulate AI services that simulate human personalities and engage in emotional interaction. The measures require providers to label AI clearly, warn against overuse and intervene in cases of addiction, implement lifecycle safety, data security, and content controls, and block content that threatens national security or violates socialist values. On December 28, outlets including Singapore’s Zaobao and Chinese state-linked sites published detailed explainers covering the addiction warnings, emotional risk monitoring, and stricter limits on the data used to train AI companions. ([reuters.com](https://www.reuters.com/world/asia-pacific/china-issues-drafts-rules-regulate-ai-with-human-like-interaction-2025-12-27/))

China’s Cyberspace Administration released draft “interim measures” on December 27, 2025 to regulate anthropomorphic AI services that mimic human personalities and emotions. Providers must clearly label AI interactions, prevent over‑dependence, and conduct security assessments for systems that reach 1 million registered or 100,000 monthly active users. ([english.news.cn](https://english.news.cn/20251227/b28f826397cf4a03aca8a6c8952e4b30/c.html?utm_source=openai))
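
The user-count trigger is easy to operationalize. Below is a sketch using the thresholds from the Xinhua report; reading “reach” as greater-than-or-equal is an assumption.

```python
REGISTERED_THRESHOLD = 1_000_000    # registered users, per the draft as reported
MONTHLY_ACTIVE_THRESHOLD = 100_000  # monthly active users

def security_assessment_required(registered: int, monthly_active: int) -> bool:
    """True once either user-count threshold is reached."""
    return registered >= REGISTERED_THRESHOLD or monthly_active >= MONTHLY_ACTIVE_THRESHOLD

# Example: a service with 1.2M sign-ups must undergo assessment,
# while a smaller one below both thresholds does not.
assert security_assessment_required(1_200_000, 50_000)
assert not security_assessment_required(400_000, 60_000)
```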

On December 27, 2025, China’s Cyberspace Administration published draft rules for anthropomorphic AI interaction services, including emotional companion chatbots. Providers must clearly disclose when users are interacting with AI and issue reminders at least every two hours, with extra protections for minors and elderly users.
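
A minimal sketch of that reminder cadence follows, assuming the two-hour floor survives into the final rule; the tighter interval for minors and elderly users is an illustrative guess at what “extra protections” could mean, not something the draft specifies.

```python
import time

DEFAULT_INTERVAL_S = 2 * 60 * 60   # two-hour floor from the draft, as reported
PROTECTED_INTERVAL_S = 30 * 60     # hypothetical stricter cadence for minors/elderly

class UsageReminder:
    def __init__(self, is_protected_user: bool):
        self.interval = PROTECTED_INTERVAL_S if is_protected_user else DEFAULT_INTERVAL_S
        self.session_start = time.monotonic()
        self.last_reminder = self.session_start

    def maybe_remind(self) -> str | None:
        """Call on every turn; returns a reminder string when one is due."""
        now = time.monotonic()
        if now - self.last_reminder >= self.interval:
            self.last_reminder = now
            minutes = int((now - self.session_start) / 60)
            return f"Reminder: you have been chatting with an AI for {minutes} minutes."
        return None
```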

Explainers published on December 28 in China and abroad also flagged content red lines and the mandatory 'core socialist values' alignment required of anthropomorphic AI; this regulatory trend may slow progress toward AGI.
The through-line across this coverage is consistent: the CAC's draft measures impose significant obligations on providers of emotional AI chatbots, from clear disclosure of AI interactions and addiction safeguards to lifecycle security controls, and they will reshape how major companies in the sector operate.