On May 6, 2026, Bloomberg reporting, carried by Hindustan Times, said the Trump administration plans an executive order creating a federal AI working group to vet new models for national security risks. Google, Microsoft, and Elon Musk’s xAI have agreed to give US officials early access to frontier systems, amid rising concern over Anthropic’s Mythos model.
This article aggregates reporting from two news sources. The TL;DR is AI-generated from the original reporting. Race to AGI's analysis adds editorial context on the implications for AGI development.
The US is moving from voluntary safety pacts to something closer to a gating mechanism for frontier models, at least where national security is implicated. Giving government labs early access to test new models from Google, Microsoft and xAI effectively creates a pre‑deployment review loop, one made especially salient after Anthropic’s Mythos showed it could discover thousands of previously unknown vulnerabilities. This is the first serious attempt to institutionalise a "red team before release" norm at the federal level.
Strategically, this cements Washington's role as both AI customer and AI safety arbiter. For the big US players, cooperating buys political capital and some protection against harsher legislation, while potentially limiting the room for more aggressive rivals to ship risky capabilities. For non‑US firms, it signals that access to the US market may increasingly come with security‑testing obligations.
In terms of AGI timelines, formal security vetting could slow the public rollout of the most capable systems, even if training continues at full speed. But it may also de‑risk the broader environment enough that governments remain open to continued scale‑up rather than slamming on the brakes after a major incident.