The US Department of Defense has confirmed agreements with seven to eight leading tech companies, including OpenAI, Google, Microsoft, Amazon Web Services, NVIDIA, and SpaceX, to deploy their AI tools and hardware on classified Impact Level 6 (IL6) and Impact Level 7 (IL7) networks. While the underlying deals were announced on May 1, 2026, fresh analyses and international coverage published on May 4, 2026, highlight the scope of the programme and Anthropic’s exclusion after a high-profile dispute over limits on military use.
This article aggregates reporting from five news sources. The TL;DR is AI-generated from original reporting. Race to AGI's analysis provides editorial context on the implications for AGI development.
Bringing frontier AI directly into IL6/IL7 classified environments is a qualitative shift in how the US military plans to use these systems. Instead of one-off pilots, DoD is effectively wiring OpenAI, Google, Microsoft, AWS, NVIDIA, SpaceX and others into its most sensitive planning, targeting and logistics networks. That cements hyperscalers as critical defense contractors and creates a powerful incentive for those firms to optimize their models for large-scale, high-stakes decision support. Anthropic’s exclusion after refusing to relax its red lines on autonomous weapons and domestic surveillance underscores that ethical constraints now carry real commercial consequences. ([ksl.com](https://www.ksl.com/article/51491733/pentagon-reaches-agreements-with-top-ai-companies-but-not-anthropic))
For the race to AGI, this militarization of frontier AI accelerates investment in robustness, multi-modal situational awareness and agentic autonomy under extreme constraints, all capabilities that bleed back into civilian systems. It also deepens the alignment between national security establishments and specific corporate labs, which may tilt resource flows (compute allocations, subsidies, export carve-outs) toward those willing to accept broader military use. At the same time, Anthropic’s stance shows that safety-oriented labs can draw a line and still remain commercially viable, especially if, as with Mythos, they are seen as essential for cyber defence. The net effect is a faster march toward highly capable, tightly integrated AI agents operating in opaque environments, with governance frameworks struggling to keep up.


