On May 1, 2026, the U.S. Department of Defense announced classified AI agreements with seven firms — SpaceX, OpenAI, Google, Nvidia, Reflection, Microsoft and Amazon Web Services — to deploy their models on top‑secret military networks. Anthropic was excluded after being labeled a supply‑chain risk earlier in 2026 over limits it placed on military use of its Claude and Mythos models.
This is one of the clearest signals yet that frontier AI has moved from experimental pilots to core infrastructure for U.S. national security. By wiring models from OpenAI, Google, Microsoft, Nvidia, AWS, SpaceX and Reflection directly into Impact Level 6 and 7 networks, the Pentagon is effectively standardizing on commercial frontier systems for mission planning, targeting and classified analysis. Anthropic's exclusion, triggered by its refusal to permit unconstrained "any lawful use" of its models, is just as important: it formalizes a fork between safety-first labs and those willing to align tightly with military doctrine. ([thedefensepost.com](https://thedefensepost.com/2026/05/04/pentagon-snubs-anthropic-ai/amp/))
For the race to AGI, this locks in large, durable demand for compute, models and agent platforms that can operate inside sensitive government environments. Firms that win this business gain not just revenue but privileged access to real, complex data and edge cases that will sharpen their systems faster than civilian workloads alone. It also creates competitive pressure on Anthropic and other safety-constrained players: maintain strict use policies and risk being sidelined by the largest single AI buyer on Earth, or loosen guardrails to stay in the game.


