On March 7, 2026, AI Insider reported that Anthropic’s Claude app has overtaken ChatGPT in U.S. daily mobile downloads, reaching 11.3 million daily active users with more than 1 million sign‑ups per day since late February. The growth follows the Pentagon’s decision to label Anthropic a supply‑chain risk. Even so, Microsoft, Google, and AWS reaffirmed that they will keep offering Claude for non‑defense workloads, and a separate Mozilla partnership saw Claude Opus 4.6 uncover 22 Firefox security vulnerabilities in two weeks.
This article aggregates reporting from 7 news sources. The TL;DR is AI-generated from original reporting. Race to AGI's analysis provides editorial context on implications for AGI development.
Claude’s surge shows that ethics can be a growth strategy, not just a compliance cost. By refusing to loosen guardrails for the Pentagon and getting blacklisted as a supply‑chain risk, Anthropic turned a Washington fight into a consumer narrative: the ‘principled alternative’ to ChatGPT. Download and usage data from Appfigures and Similarweb suggest that, at the margin, users are willing to switch on values, not just features. ([techcrunch.com](https://techcrunch.com/2026/03/06/claudes-consumer-growth-surge-continues-after-pentagon-deal-debacle/?utm_source=openai))
At the same time, the Mozilla partnership highlights Claude’s strengths as a code‑analysis and security tool: it uncovered 22 Firefox vulnerabilities in two weeks, 14 of them high‑severity. ([techcrunch.com](https://techcrunch.com/2026/03/06/anthropics-claude-found-22-vulnerabilities-in-firefox-over-two-weeks/)) That combination of moral branding and concrete technical wins strengthens Anthropic’s hand with enterprises and cloud partners even as it loses direct Pentagon revenue. For the broader race to AGI, it underscores that influence won’t be measured only in benchmark scores: distribution, public trust, and perceived safety posture are becoming equally strategic levers.



