On February 8, 2026, Chinese outlet Guancha reported that Amazon, Google (Alphabet), Microsoft, Meta, and Oracle plan to spend around $700 billion on AI‑related capital projects in 2026. Analysts warn that this unprecedented AI capex is driving up chip prices, diverting construction labor, and squeezing other sectors' access to capital and resources.
This article aggregates reporting from 4 news sources. The TL;DR is AI-generated from original reporting. Race to AGI's analysis provides editorial context on implications for AGI development.
The numbers being thrown around for AI capex in 2026 are now at the scale of sovereign defense budgets, not normal IT refresh cycles. If Amazon, Alphabet, Microsoft, Meta and Oracle do collectively push toward $700 billion in AI‑driven capital spending this year, they are effectively locking in a hardware and data-center buildout that will define the computational landscape for the rest of the decade. That accelerates the frontier of what’s technically possible—bigger models, longer context, richer multimodality—but it also creates real frictions in chips, power and skilled labor.
Guancha's synthesis of Western reporting captures a growing discomfort: AI is absorbing so many electricians, construction crews, and so much memory chip capacity that smartphones and non‑AI construction projects are being delayed or repriced. In macro terms, this looks like a classic "crowding out" dynamic, in which one sector's gold rush starves others of labor, components, and capital.

For the race to AGI, the implication is paradoxical. On one hand, this wall of capital almost certainly brings more compute to bear on frontier research faster than most timelines assumed. On the other, it raises the risk of political and social blowback if voters perceive AI as making life more expensive while the payoff remains abstract.
We are now well past the point where AI progress is limited by model ideas alone; it is constrained by grid capacity, supply chains, and public patience for the externalities of hyperscale infrastructure.