Anthropic economists have introduced an “AI Exposure Index” and early‑warning framework to monitor how large language models like Claude could affect white‑collar employment over time. A new paper and companion analyses published March 5–8, 2026, conclude that while layoffs are limited so far, highly exposed occupations—such as programmers and customer service representatives—are already showing early signs of a hiring slowdown.
This article aggregates reporting from two news sources. The TL;DR is AI‑generated from original reporting. Race to AGI's analysis provides editorial context on implications for AGI development.
Anthropic is doing something most labs only talk about in ethics slide decks: building a live, quantitative system to track who is actually being hit by AI, job by job. The AI Exposure Index blends theoretical capability assessments with real Claude usage data and labor statistics, giving a more grounded view than the usual “300 million jobs at risk” headlines. Early signals—like a noticeable slowdown in hiring for highly exposed roles among 22–25‑year‑olds—suggest disruption is starting not with mass layoffs, but with doors quietly closing on new entrants.
For the AGI race, this is a form of institutional self‑regulation. If one of the leading labs is effectively publishing its own “AI labor radar,” it raises the bar for what responsible scaling should look like. It also arms regulators and central banks with a concrete metric for when AI‑driven shocks are moving from theory into the real economy. That, in turn, will shape how aggressively governments tolerate—or push back on—ever‑more capable systems.


