OpenAI’s head of robotics and consumer hardware, Caitlin Kalinowski, resigned on March 8, 2026, over the company’s recent agreement to deploy its AI models on Pentagon classified networks. She said the deal moved too quickly and failed to adequately address risks around mass surveillance and lethal autonomous weapons; OpenAI insists the contract includes strict safeguards.
This article aggregates reporting from five news sources. The TL;DR is AI-generated from original reporting. Race to AGI's analysis provides editorial context on implications for AGI development.
This resignation is the clearest sign yet that frontline technical leaders are growing uneasy with how quickly frontier AI is moving into hard national‑security use cases. OpenAI’s classified-cloud deal with the Pentagon effectively normalizes deploying state‑of‑the‑art models in live targeting, intelligence, and logistics workflows. That tightens the feedback loop between military budgets and general‑purpose AI capabilities, a dynamic that historically accelerates progress by guaranteeing demand for ever-larger models and specialized infrastructure.
At the same time, Kalinowski’s exit gives organizational form to an emerging internal opposition bloc across top labs. Anthropic reportedly walked away from its own Pentagon contract, and OpenAI staff had already been agitating for stronger safety and governance structures. Those frictions won’t stop the militarization of AI, but they can reshape its terms: pushing for explicit red lines on surveillance and lethal autonomy, and for more transparent oversight of classified deployments. For the race to AGI, this episode underscores a core tension: the same capabilities that make models militarily valuable also make governance mistakes far more consequential.