On December 31, 2025, ABC News reported that a new California law taking effect on January 1, 2026, imposes safety guardrails on AI-powered "companion chatbots," especially for minors. The law requires operators to prevent chatbots from providing self-harm content and to clearly disclose to users under 18 that they are interacting with artificial intelligence.
California’s new companion chatbot law is one of the clearest examples yet of states moving faster than national governments to shape how emotionally engaging AI systems are built. Rather than focusing on model size or compute thresholds, the law zeroes in on the user experience: disclosure that the other party is an AI, safeguards around self‑harm content, and special protections for minors. That’s a strong signal that regulators are most worried about AI where people form real relationships with the system, not just tools that answer questions.
For the broader race to AGI, this is an early test of what guardrails around highly anthropomorphic agents will look like in practice. Companies experimenting with "AI friends" or companions now have to design for safety, explainability, and age-aware behavior if they want to operate in the world's largest consumer market. Larger frontier-model developers will feel this indirectly as platforms and app developers demand moderation hooks, safety APIs, and policy-compliant behaviors from base models. Over time, we should expect similar rules to appear in other jurisdictions, slowly converging toward a de facto global baseline for emotionally engaging AI systems.