On December 27, 2025, China’s Cyberspace Administration published draft rules for anthropomorphic AI interaction services, including emotional companion chatbots. Providers must clearly disclose when users are interacting with AI and issue reminders at least every two hours, with extra protections for minors and elderly users.
This article aggregates reporting from four news sources. The TL;DR is AI-generated from the original reporting; Race to AGI's analysis provides editorial context on the implications for AGI development.
China’s new draft rules on anthropomorphic AI are an unusually specific intervention: Beijing is zeroing in on AI systems that mimic human personalities and provide emotional companionship. By forcing providers to disclose that users are talking to a machine and requiring regular reminders and logout nudges, regulators are signaling that parasocial relationships with bots are now a matter of national concern, not just UX design. The heavy emphasis on minors and the elderly shows that risk framing is shifting from “content harms” to “attachment and dependency harms.”
Strategically, this is a shot across the bow for every Chinese lab building character AIs, from big platforms like Baidu and ByteDance to newer startups focused on AI girlfriends, therapists, and tutors. Compliance now becomes a product requirement: companies will need instrumentation to detect overuse, policies for crisis escalation, and guardrails to keep personalities aligned with “core socialist values.” That adds friction but also creates a moat for players who can operationalize safety at scale.
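To make the compliance burden concrete, here is a minimal sketch of the kind of instrumentation the draft rules imply: a session tracker that surfaces an "you are talking to an AI" reminder at least every two hours and nudges heavy users toward a break. Everything here is hypothetical; the class and variable names, the overuse threshold, and the stricter handling of minors are assumptions for illustration, not requirements quoted from the draft.

```python
from datetime import datetime, timedelta

# Hypothetical illustration only: these names and thresholds are not drawn from
# the draft rules or any real product; they sketch the kind of instrumentation
# providers would need to build.

REMINDER_INTERVAL = timedelta(hours=2)   # "at least every two hours" per the draft
OVERUSE_THRESHOLD = timedelta(hours=4)   # assumed value for a logout/break nudge

class CompanionSession:
    def __init__(self, user_id: str, is_minor: bool = False):
        self.user_id = user_id
        self.is_minor = is_minor
        self.started_at = datetime.now()
        self.last_reminder_at = self.started_at

    def on_message(self) -> list[str]:
        """Return any compliance notices owed before the next AI reply."""
        notices = []
        now = datetime.now()

        # Periodic disclosure that the counterpart is a machine.
        if now - self.last_reminder_at >= REMINDER_INTERVAL:
            notices.append("Reminder: you are chatting with an AI, not a person.")
            self.last_reminder_at = now

        # Overuse nudge; a stricter limit for minors is assumed, not specified.
        limit = OVERUSE_THRESHOLD / 2 if self.is_minor else OVERUSE_THRESHOLD
        if now - self.started_at >= limit:
            notices.append("You've been chatting for a while. Consider taking a break.")

        return notices
```

Even a toy version like this shows why the rules favor larger players: the reminder timer is trivial, but crisis escalation, minor verification, and value-alignment guardrails all demand the same kind of always-on session instrumentation, operated reliably at scale.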
Globally, the move accelerates a broader trend: the more AI feels “like a person,” the more governments want to regulate it like a quasi‑social actor. Expect other jurisdictions to watch China’s experiment closely, particularly around mandatory AI disclosure intervals and duty-of-care expectations for emotionally immersive agents.