Alphabet told investors on February 4–5, 2026, that it expects 2026 capital expenditures to reach $175–185 billion, nearly double 2025 levels. Management said most of the spending will go toward AI infrastructure and Gemini-driven cloud growth, even as analysts warn about pressure on margins and free cash flow.
This article aggregates reporting from four news sources. The TL;DR is AI-generated from the original reporting; Race to AGI's analysis adds editorial context on the implications for AGI development.
Alphabet’s decision to pour up to $185 billion into capital spending in 2026 is one of the clearest signals yet that AI is entering a capital‑intensive, infrastructure‑driven phase. The bulk of this money will go into data centers, custom TPUs, and the compute backbone required to train and serve frontier Gemini and DeepMind models at scale. That effectively positions Alphabet as one of a tiny handful of firms able to run multi‑hundred‑billion‑parameter models on dedicated silicon, with a global cloud footprint to monetize them.
Strategically, this is a bet that AI demand will outstrip current capacity for years, and that owning the stack—from chips to models to apps—will justify enormous near‑term margin compression. It also intensifies the arms‑race dynamic: Microsoft, Meta, Amazon, and others are already guiding to record AI capex, but Alphabet’s figure alone rivals the GDP of a mid‑sized country. For everyone else in the ecosystem, from startups to national clouds, this raises the bar on what “serious” AI investment now looks like.
The competitive implication is that the frontier is consolidating around a few hyperscalers. Their decisions about safety, openness, and API pricing will shape how quickly downstream players can experiment and how broad the innovation base around AGI becomes.