On Feb. 6, 2026, Rep. Jay Obernolte (R‑Calif.) said a proposed decade‑long moratorium on state AI laws was intended as a messaging tool, not a permanent fix. He argued Congress should instead pass a nationwide AI framework that sets preemptive guardrails while still leaving room for state innovation.
Obernolte’s comments are a notable signal that serious US policymakers see blanket AI moratoria as a dead end and are pivoting back toward comprehensive, risk‑based regulation. Rather than freezing state experimentation for a decade, he is arguing for a federal framework that clearly delineates which AI risks belong at the national level and which can be left to states ([route-fifty.com](https://www.route-fifty.com/artificial-intelligence/2026/02/ai-moratorium-was-never-long-term-solution-lawmaker-says/411247/)). That matters because the US has been drifting toward a patchwork of state rules on biometrics, hiring algorithms, and model transparency, a trajectory that could fragment compliance burdens for both frontier labs and downstream adopters.
For the race to AGI, a coherent federal regime cuts both ways. On one hand, uniform rules reduce regulatory uncertainty and lower the cost of scaling advanced systems across all 50 states. On the other, a strong federal standard could impose binding safety, documentation, and auditing requirements that slow the most aggressive deployment plans, particularly for high‑risk applications like hiring, lending, and critical infrastructure. The deeper takeaway is that the political system is moving past symbolic gestures and into the hard work of designing durable governance for increasingly autonomous systems, a process that will shape how quickly, and under what constraints, AGI‑class capabilities can be rolled out in the US.

