Tuesday, February 10, 2026

Anthropic safety chief quits, warning ‘world is in peril’ from AI risks

Source: The Federal

TL;DR

AI-summarized from 6 sources

Anthropic’s head of AI safety, Mrinank Sharma, resigned effective February 9, 2026, publishing a long resignation letter warning that the “world is in peril” from interconnected crises including AI. Multiple Indian and global outlets reported his departure on February 10, citing his concerns about values being sidelined inside fast‑moving AI organizations.

About this summary

This article aggregates reporting from 6 news sources. The TL;DR is AI-generated from original reporting. Race to AGI's analysis provides editorial context on implications for AGI development.

6 sources covering this story | 1 company mentioned

Race to AGI Analysis

Sharma’s resignation is a rare public flashpoint inside a top frontier lab, and it lands at a moment when Anthropic is raising tens of billions of dollars and positioning itself as a safety‑first alternative to OpenAI. His letter doesn’t allege specific misconduct, but it does suggest that even organizations built around AI safety struggle to align day‑to‑day decisions with their stated values once capital, competition and timelines tighten. For Race to AGI readers, this is a live case study in how governance and culture hold up under extreme scaling pressure.

The move also heightens scrutiny of Anthropic’s safeguards work just as it leans hard into agentic systems like Claude Opus 4.6 and Claude Code, tools explicitly designed to automate more complex workflows. If the head of safeguards feels the world is in peril and chooses to step away rather than keep shaping those systems from the inside, that will amplify calls for external oversight and more transparent safety cases around powerful models. Competitively, this doesn’t immediately change Anthropic’s technical trajectory, but it may influence how regulators, partners and talent perceive its ability to self‑regulate while racing toward AGI‑class capabilities.

Impact unclear

Who Should Care

Investors | Researchers | Engineers | Policymakers

Companies Mentioned

Anthropic
AI Lab | United States
Valuation: $183.0B

Coverage Sources

The Federal
India Today
Business Today
Hindustan Times
Gadgets360
Indiablooms