New York Governor Kathy Hochul signed the Responsible AI Safety and Education Act (RAISE Act) into law on December 26, 2025. The law imposes stringent safety, reporting and oversight obligations on developers of high‑risk frontier AI models operating in the state.
The RAISE Act is one of the first state laws that explicitly targets “frontier models” and catastrophic AI risk rather than just bias or consumer protection. That’s a big step toward treating advanced AI systems as critical infrastructure with their own safety regime, not just another software category. Requirements for written safety protocols, structured risk assessments and rapid (72‑hour) incident reporting, combined with ongoing supervision by a new AI office inside New York’s Department of Financial Services, will force the largest model developers to operationalize safety as a regulated function, much like cybersecurity or financial compliance.
For the broader race to AGI, New York is signaling what the landscape looks like in the absence of federal action: powerful states building their own guardrails for very large systems capable of cyber, bio or infrastructure harm. Even though the law applies only to companies above a $500 million revenue threshold, global labs cannot ignore it, especially if they host or deploy models in New York. The law’s close alignment with California’s SB 53 also hints at an emerging de facto coastal standard, one that will shape how future frontier models are evaluated, documented and monitored long before a comprehensive U.S. federal framework arrives.