OpenAI’s robotics lead Caitlin Kalinowski resigned on March 7, 2026, after the company signed a controversial AI agreement with the U.S. Department of Defense. In a LinkedIn post cited by TechCrunch, she said the deal lacked sufficient guardrails against domestic surveillance and lethal autonomous weapons; OpenAI confirmed her departure.
This article aggregates reporting from one news source. The TL;DR is AI-generated from the original reporting. Race to AGI's analysis provides editorial context on the implications for AGI development.
A senior technical leader walking away from OpenAI over its military contract is a reminder that the race to AGI is no longer just about parameter counts and GPUs; it is also about which values govern deployment. Kalinowski’s resignation makes the Pentagon deal concrete inside the lab: people whose careers are built on advanced robotics are willing to forgo influence and compensation over how those systems might be used. That kind of talent risk becomes part of the cost of pushing aggressively into defense work.
Strategically, this deepens the emerging split between Anthropic, which has drawn a hard line against autonomous weapons, and OpenAI, which is taking a “work with the Pentagon, enforce red lines via tech and contracts” posture. ([techcrunch.com](https://techcrunch.com/2026/03/07/openai-robotics-lead-caitlin-kalinowski-quits-in-response-to-pentagon-deal/)) In the short term, OpenAI gains access to privileged use cases and potential government revenue; in the long term, it may pay a reputational and recruiting tax among safety‑minded researchers and engineers. The episode also signals to governments that internal dissent is real even at the most powerful labs, which could strengthen calls for external oversight if companies are seen as moving faster than their own staff are comfortable with.