China's Regulatory Clampdown on Human-Like AI Services

Impact: Growing · Regulation · Delays AGI Timeline

Main Take

China is tightening its grip on anthropomorphic AI services. New draft regulations mandate user transparency and addiction safeguards, reflecting growing concerns over emotional dependency. This move signals a broader trend of governments regulating AI to protect users, especially vulnerable populations like teens and the elderly.

The Story So Far

China's regulatory landscape for AI is evolving rapidly, particularly concerning human-like services. The Cyberspace Administration's recent draft rules are a response to rising concerns about emotional risks associated with AI companions. These regulations require clear labeling of AI interactions, addiction monitoring, and strict data usage limits. The focus is on protecting minors and other vulnerable users, reflecting a shift in how governments view the impact of AI on society.

On December 27, 2025, the Cyberspace Administration of China unveiled these draft rules, which include provisions for crisis-response handoffs to human operators and usage-time limits. This follows a series of discussions and public consultations aimed at addressing the psychological effects of AI companions. The proposed regulations also require aligning AI services with 'core socialist values,' indicating a distinctive intersection of technology and ideology in China.

The implications are significant. Companies that develop AI companions will face increased scrutiny and operational challenges. Compliance will require significant investment in monitoring systems and user education. This regulatory environment could slow the pace of innovation in the sector, as firms navigate the complexities of meeting government standards while trying to deliver engaging user experiences.

As the draft regulations undergo public comment until January 25, 2026, the industry will be watching closely. The final rules could set a precedent for how other countries approach the regulation of emotional AI, especially as concerns about mental health and user safety continue to rise globally.

Who Should Care

Investors

Expect increased compliance costs for AI companies operating in China.

Researchers

Regulatory hurdles may stifle innovation in emotional AI research.

Engineers

Building compliant systems that prioritize user safety will become a baseline requirement, not an optional feature.

Tags: Regulatory compliance · User safety · Ethical AI · Teen mental health · Addiction prevention

Related Articles (7)

Media watch: companion AI becomes an "emotional buddy"; where is the new boundary of human-machine relationships?

China companion AI rules target emotional risk and teen safety

On January 1, 2026 at 08:21 CST, China’s Workers’ Daily via The Paper highlighted a new draft regulation from the Cyberspace Administration of China governing “anthropomorphized” AI interaction services such as companion chatbots. The draft rules propose identity transparency, usage‑time limits, crisis‑response handoff to humans, and stronger privacy protections for users, with public comments open until January 25, 2026.

The Paper (澎湃新闻) · Jan 1, 2026 · 3 outlets
Expert interpretation: a new exploration in regulating the healthy development of frontier AI; a reading of the Interim Measures for the Administration of Anthropomorphic AI Interaction Services (Office of the Central Cyberspace Affairs Commission)

China drafts strict rules for anthropomorphic AI emotional companions

China’s Cyberspace Administration released draft rules on December 27, 2025 to regulate AI services that simulate human personalities and engage in emotional interaction. On December 28, outlets including Singapore’s Zaobao and Chinese state-linked sites published detailed explainers describing requirements such as addiction warnings, emotional risk monitoring, and stricter data use limits for training AI companions.

Cyberspace Administration of China (CAC) / China Net · Dec 28, 2025 · 5 outlets
China proposes stricter safeguard for AI tools; issues draft rules - The Times of India

China drafts strict rules for emotional, human-like AI services

China’s cyberspace regulator released draft rules on December 27, 2025 to govern AI services that simulate human personalities and emotional interaction. The measures require providers to label AI clearly, monitor users for signs of addiction, and block content that threatens national security or violates socialist values.

Times of India · Dec 27, 2025 · 6 outlets

China drafts strict rules for human-like AI companion apps

China’s Cyberspace Administration released draft rules on December 27 to tightly regulate AI services that simulate human personalities and offer emotional interaction. The proposals require providers to warn against overuse, intervene in cases of addiction, and implement lifecycle safety, data security, and content controls. ([reuters.com](https://www.reuters.com/world/asia-pacific/china-issues-drafts-rules-regulate-ai-with-human-like-interaction-2025-12-27/))

Reuters · Dec 27, 2025
China seeks public feedback on draft rules for anthropomorphic AI services

China draft AI rules tighten controls on human-like chatbots

China’s Cyberspace Administration released draft “interim measures” on December 27, 2025 to regulate anthropomorphic AI services that mimic human personalities and emotions. Providers must clearly label AI interactions, prevent over‑dependence, and conduct security assessments for systems that reach 1 million registered or 100,000 monthly active users. ([english.news.cn](https://english.news.cn/20251227/b28f826397cf4a03aca8a6c8952e4b30/c.html?utm_source=openai))

Xinhua (english.news.cn) · Dec 27, 2025 · 6 outlets
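The Xinhua summary mentions a security-assessment trigger at 1 million registered or 100,000 monthly active users; the gating logic such a rule implies can be sketched as below. The thresholds come from the reporting, but the function and parameter names are purely illustrative:

```python
# Hypothetical sketch: the 1,000,000 registered-user and 100,000
# monthly-active-user thresholds are as reported by Xinhua; the API shape
# is an assumption for illustration only.
REGISTERED_THRESHOLD = 1_000_000
MAU_THRESHOLD = 100_000

def security_assessment_required(registered_users: int,
                                 monthly_active_users: int) -> bool:
    """Return True when either reported user-count threshold is reached."""
    return (registered_users >= REGISTERED_THRESHOLD
            or monthly_active_users >= MAU_THRESHOLD)
```

Note the disjunction: under this reading, crossing either threshold alone would trigger the assessment obligation.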
AI companion-chat apps open for public comment: users to be reminded to exit every two hours

China draft rules clamp down on human‑like AI companion services

On December 27, 2025, China’s Cyberspace Administration published draft rules for anthropomorphic AI interaction services, including emotional companion chatbots. Providers must clearly disclose when users are interacting with AI and issue reminders at least every two hours, with extra protections for minors and elderly users.

Caixin · Dec 27, 2025 · 4 outlets
Notice of the Cyberspace Administration of China on soliciting public comments on the Interim Measures for the Administration of Anthropomorphic AI Interaction Services (Draft for Comment) (Office of the Central Cyberspace Affairs Commission)

China draft AI personality rules tighten control on emotional chatbots

China’s Cyberspace Administration published draft rules on December 27, 2025 to regulate AI services that simulate human personalities and engage in emotional interaction. On December 28, outlets in China and abroad detailed the proposed requirements, including content red lines, addiction safeguards, and mandatory ‘core socialist values’ alignment for anthropomorphic AI.

CAC (Cyberspace Administration of China) · Dec 27, 2025 · 3 outlets


Delays AGI Timeline

This trend may slow progress toward AGI



Timeline

6 events · First article Dec 27, 2025 · Latest Jan 1, 2026
Jan 1, 2026 · ⚖️ Regulatory

China's Workers’ Daily highlights new draft regulation for companion AI

Workers' Daily coverage of the draft regulation's emotional-risk and teen-safety provisions signals sustained official attention to companion AI.

Impact: 7
Dec 27, 2025 · ⚖️ Regulatory

China drafts strict rules for emotional, human-like AI services

The draft rules by China's Cyberspace Administration impose significant regulations on AI services that simulate human emotions, impacting major companies in the sector.

Impact: 8
Dec 27, 2025 · ⚖️ Regulatory

China drafts rules for human-like AI companion services

The proposed regulations require clear disclosure of AI interactions and impose safeguards against addiction, affecting multiple AI providers.

Impact: 8
Dec 27, 2025 · ⚖️ Regulatory

China releases draft AI rules on anthropomorphic services

The interim measures released by the Cyberspace Administration will regulate AI services that mimic human personalities, significantly impacting the industry.

Impact: 8
Dec 27, 2025 · ⚖️ Regulatory

China's draft rules tighten controls on human-like chatbots

The regulations aim to control emotional AI chatbots and will substantially reshape how companies operate in this space.

Impact: 8