OpenAI is recruiting a new Head of Preparedness to lead its framework for severe AI risks, offering an annual salary of $555,000 plus equity. The role will oversee threat models and mitigations spanning cybersecurity, biosecurity and mental health, and was publicly highlighted by CEO Sam Altman, who called it a ‘stressful’ but critical job.
This article aggregates reporting from 2 news sources. The TL;DR is AI-generated from original reporting. Race to AGI's analysis provides editorial context on implications for AGI development.
OpenAI’s Head of Preparedness role is a window into how frontier labs are trying to institutionalize safety even as they race forward on capabilities. The mandate (own the threat models, evaluations and mitigations for catastrophic risks across cyber, bio and societal harms) is essentially to be the internal counterweight to the product teams that want to ship ever more powerful models and agents. The compensation package, north of half a million dollars plus equity, signals that OpenAI now views safety leadership as a top-tier executive function, not a compliance afterthought. ([openai.com](https://openai.com/careers/head-of-preparedness-san-francisco/?utm_source=openai))
From an AGI race perspective, this does two things. First, it acknowledges that the systems OpenAI is building are powerful enough to plausibly create systemic risks if misused or misaligned. Second, it hints at a governance model in which safety decisions are increasingly formalized through frameworks, metrics and go/no-go gates rather than left to informal judgment calls. Whether that framework genuinely constrains launches or is ultimately overruled by commercial imperatives will be one of the defining questions of the next few years.
If OpenAI succeeds in giving preparedness real teeth, it could become a template other labs feel pressured to copy — especially as regulators look for evidence that companies have serious internal risk management capability, not just glossy model cards.