On May 5, 2026, reports from US and Indian outlets said the Trump administration is drafting an AI safety law that would require powerful models to undergo government vetting before public release. The discussions were reportedly accelerated by Anthropic’s Mythos system, which internal tests showed could autonomously discover large numbers of software vulnerabilities.
This article aggregates reporting from three news sources. The TL;DR is AI-generated from the original reporting. Race to AGI's analysis provides editorial context on the implications for AGI development.
This story is about the political system trying to catch up with the technical reality Anthropic just demonstrated with Mythos. A model that can autonomously hunt thousands of zero‑days flips the risk calculus for governments; even if it never ships publicly, it shows what’s now feasible in closed labs. The reported Trump‑era pivot—from tearing up Biden’s AI executive order to quietly rebuilding a stricter pre‑release review regime—reflects that shock.
For the race to AGI, this is an early sketch of how ‘frontier licensing’ might emerge in practice. Requiring companies like Google, OpenAI and Anthropic to submit models for security vetting before launch doesn’t stop them from training larger systems, but it can slow and shape deployment, especially for offensive‑capable agents. The process details—who evaluates, what thresholds trigger mitigation, whether open‑source models are covered—will matter more than the headlines.
Competitively, an approval regime tends to favor big, well‑lawyered labs with deep compliance budgets. If implemented poorly, it could entrench today’s leaders and marginalize smaller or open players. If done well, it could standardize red‑teaming, incident reporting and kill‑switch expectations across the stack. Either way, it’s a sign that frontier AI is now squarely in the national‑security policy domain, not just a product question.