Anthropic said on May 6 it signed a deal with Elon Musk’s SpaceX to use the full compute capacity of the Colossus 1 AI supercomputer to scale its Claude assistant. Coverage on May 7 reports Anthropic will get more than 220,000 Nvidia GPUs and over 300 MW of power, with both sides also exploring multi‑gigawatt orbital AI data centers.
This article aggregates reporting from 8 news sources. The TL;DR is AI-generated from original reporting. Race to AGI's analysis provides editorial context on implications for AGI development.
This compute deal cements Anthropic as one of the few labs with access to truly frontier‑scale infrastructure. Leasing the entire Colossus 1 supercomputer—over 220,000 Nvidia GPUs and roughly 300 MW of power—gives Anthropic room to train and serve much larger Claude models without immediately bumping into capacity ceilings. In the current landscape, raw compute has become as strategically important as model architecture, and this move significantly narrows the perceived gap between Anthropic and the biggest spenders in AI.
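For a sense of scale, here is a rough back-of-envelope check (assuming, as the reporting implies, that the ~300 MW figure covers the whole facility, including hosts, networking, and cooling): it works out to well over a kilowatt per GPU, consistent with modern H100-class deployments.

```python
# Back-of-envelope sketch using the figures reported in the article.
# Assumes the ~300 MW covers the full facility (GPUs, hosts, networking, cooling).
facility_power_w = 300e6   # ~300 MW, as reported
gpu_count = 220_000        # ~220,000 Nvidia GPUs, as reported

watts_per_gpu = facility_power_w / gpu_count
print(f"All-in power budget: ~{watts_per_gpu:,.0f} W per GPU")  # roughly 1,360 W
```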
The orbital compute angle is almost as interesting as the immediate capacity boost. If SpaceX and Anthropic seriously pursue multi‑gigawatt, solar-powered data centers in orbit, it signals a coming shift from land- and grid-constrained AI toward effectively unlimited solar power, with waste heat rejected radiatively rather than through grid-fed cooling plants. That could remove one of the last major bottlenecks on the scale of training runs. Taken together, the Colossus lease and the orbital plans underscore how tightly the race to AGI is now intertwined with energy, networking, and industrial-scale infrastructure, not just clever algorithms.



