On February 5, 2026, Alphabet executives told investors that 2026 capital expenditure could reach $175–185 billion, roughly double 2025 levels, with most of the spending going to AI infrastructure such as servers, data centers and networking gear. The company said it would keep hiring in key areas like AI and cloud, while its Google Cloud unit posted faster growth than Microsoft Azure in the latest quarter.
This article aggregates reporting from 4 news sources. The TL;DR is AI-generated from original reporting. Race to AGI's analysis provides editorial context on implications for AGI development.
Alphabet’s guidance that it may spend US$175–185 billion in capex this year, with AI infrastructure as the dominant line item, is a radical escalation of the compute arms race. That’s almost twice last year’s US$91.45 billion and materially above what many analysts had pencilled in. ([businesstimes.com.sg](https://www.businesstimes.com.sg/companies-markets/telcos-media-tech/google-parent-alphabet-says-it-could-double-capital-spending-2026)) The company is effectively telling markets that AI demand across Search, Cloud and DeepMind is constrained more by the pace at which it can deploy servers and data centers than by appetite for its products.
This has two implications for the AGI race. First, it reinforces that frontier progress is now tightly coupled to industrial‑scale infrastructure. Alphabet’s full‑stack strategy of designing its own TPUs, models and applications means these capex dollars translate fairly directly into more training runs, larger context windows and richer agent platforms. Second, spending at this scale signals to rivals (notably Microsoft, Meta and Amazon) that any retreat on AI investment risks ceding long‑term advantage in both model quality and distribution, even if near‑term margins suffer. We’re watching a classic prisoner’s‑dilemma dynamic: each hyperscaler feels compelled to overspend on AI to avoid falling irretrievably behind.
For smaller labs and startups, Alphabet’s capex posture is a reminder that competing at the raw‑compute frontier will be nearly impossible; their best shot is to specialise in alignment, evaluation, vertical agents or lightweight models that piggyback on hyperscaler infrastructure.