Your coding agent keeps hitting rate limits. Anthropic just doubled them overnight by securing access to 220,000 GPUs in Elon Musk's supercomputer.


The Deal

Anthropic signed an agreement with SpaceX to use all compute capacity at Colossus 1 — the Memphis-based AI supercomputer originally built by xAI. The numbers:

Metric     Value
GPUs       220,000+ NVIDIA GPUs
Power      300+ megawatts
Location   Memphis, Tennessee
Timeline   Available within the month

Colossus 1 was built in 122 days. xAI then doubled it to 200,000 GPUs in 92 additional days. Current composition: 150,000 H100s, 50,000 H200s, 30,000 GB200s.

Anthropic also runs on AWS Trainium, Google TPUs, and other NVIDIA clusters; the Colossus 1 deal adds another 300 MW to that infrastructure stack.
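Do those headline figures hang together? A back-of-envelope sketch (the ~700 W H100 TDP is public spec; the idea that the remainder covers CPUs, networking, and cooling overhead is my assumption, not from the deal):

```python
# Sanity check: does 300 MW plausibly power 220,000 GPUs?
total_power_w = 300e6   # 300+ MW site power (from the deal)
gpu_count = 220_000     # 220,000+ GPUs (from the deal)

per_gpu_budget_w = total_power_w / gpu_count
print(f"Power budget per GPU: {per_gpu_budget_w:.0f} W")  # ~1364 W

# An H100 SXM draws up to ~700 W; the headroom above that would cover
# host CPUs, networking, and cooling overhead (PUE) -- assumption.
h100_tdp_w = 700
print(f"Implied overhead factor: {per_gpu_budget_w / h100_tdp_w:.2f}x")  # ~1.95x
```

A ~2x all-in overhead on GPU TDP is in the normal range for dense AI data centers, so the 300 MW and 220,000-GPU figures are at least mutually consistent.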

Immediate User Benefits

Three changes effective May 6, 2026:

  1. Claude Code rate limits doubled — Pro, Max, Team, and Enterprise plans get 2x their 5-hour limits
  2. Peak hour throttling removed — No more reduced capacity during high-demand windows for Pro/Max
  3. API limits raised for Opus models — Significantly higher request ceilings for Claude Opus 4.x

Reddit users report burning through 23% of their weekly Max-plan allowance with just two maxed-out 5-hour sessions. The doubling targets exactly this pain point.
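The Reddit figure implies a concrete weekly budget. A quick sketch, assuming usage scales linearly and that weekly allowances double alongside the 5-hour limits (the announcement only confirms the 5-hour doubling, so the second step is my assumption):

```python
# From the Reddit report: two maxed 5-hour sessions = 23% of weekly Max usage.
weekly_fraction = 0.23
sessions_used = 2

sessions_per_week = sessions_used / weekly_fraction
print(f"Implied weekly allowance: ~{sessions_per_week:.1f} maxed sessions")  # ~8.7

# If allowances double across the board (assumption, not confirmed):
print(f"After doubling: ~{2 * sessions_per_week:.1f} maxed sessions")  # ~17.4
```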

The Orbital Twist

The agreement includes something weirder: Anthropic "expressed interest" in partnering with SpaceX to develop multiple gigawatts of orbital AI compute capacity.

Why space? Two physics advantages:

  • Continuous solar power — No night cycles, weather, or grid dependency. Orbital arrays can generate uninterrupted electricity.
  • Radiative heat rejection — Vacuum rules out convective cooling, but radiators can dump waste heat directly to deep space, sidestepping the water-intensive cooling that terrestrial data centers require.
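The cooling advantage has a catch worth quantifying: in vacuum, heat leaves only by radiation, so radiator area scales directly with power. A Stefan-Boltzmann sketch for a 1 GW station (the emissivity and radiator temperature are illustrative assumptions, and incident solar load on the radiators is ignored):

```python
# Radiator area needed to reject 1 GW purely by radiation (Stefan-Boltzmann law).
SIGMA = 5.670e-8       # Stefan-Boltzmann constant, W / (m^2 K^4)
power_w = 1e9          # 1 GW of waste heat to reject
emissivity = 0.9       # typical radiator coating (assumption)
radiator_temp_k = 300  # ~27 C radiator surface (assumption)

flux = emissivity * SIGMA * radiator_temp_k**4  # W radiated per m^2
area_m2 = power_w / flux

print(f"Radiated flux: {flux:.0f} W/m^2")                 # ~413 W/m^2
print(f"Radiator area for 1 GW: {area_m2 / 1e6:.1f} km^2")  # ~2.4 km^2
```

Roughly 2.4 square kilometers of radiator per gigawatt is a major mass and launch-cost driver, which feeds straight into the economics below.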

The economics are brutal. Andrew McCalip's calculator estimates a 1 GW orbital data center costs $42.4 billion — nearly 3x a ground equivalent. SpaceX's own regulatory filing questioned whether AI workloads in orbit can ever turn a profit.
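McCalip's numbers imply a ground-based cost worth making explicit; the $42.4 billion figure is from the source, and the implied ground cost simply follows from the "nearly 3x" ratio:

```python
# Cost comparison implied by the McCalip estimate.
orbital_cost_b = 42.4   # $B per 1 GW orbital data center (source figure)
ratio = 3.0             # "nearly 3x a ground equivalent" (source figure)

ground_cost_b = orbital_cost_b / ratio
premium_b = orbital_cost_b - ground_cost_b
print(f"Implied ground cost: ~${ground_cost_b:.1f}B")    # ~$14.1B
print(f"Orbital premium:    ~${premium_b:.1f}B per GW")  # ~$28.3B
```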

But SpaceX acquired xAI specifically to make orbital data centers a reality. Musk's roadmap targets 1 TW/year of space-based compute capacity. Anthropic is now the first major AI lab to publicly express interest.

Context: The Compute Arms Race

This isn't random infrastructure expansion. It's competitive positioning:

  • Cursor reportedly in talks for $2B+ at $50B+ valuation with $6B run-rate enterprise revenue
  • Cognition reportedly seeking funding at $25B valuation (nearly 2.5x their $10.2B mark from September 2025)
  • Google committed 1 million TPU chips to Anthropic partnership
  • Amazon expanding Anthropic inference capacity in Asia and Europe for compliance/residency requirements

Anthropic explicitly states they partner with "democratic countries whose legal and regulatory frameworks support investments of this scale." The international expansion targets regulated industries — financial services, healthcare, government — that require in-region infrastructure.


What Surprised Me

The orbital compute bit. Not the Colossus 1 deal — that's straightforward capacity rental. But Anthropic publicly expressing interest in gigawatt-scale orbital infrastructure signals something deeper.

Either they genuinely believe terrestrial power and cooling constraints will bottleneck AI growth within 5 years, or they're hedging against regulatory friction on ground-based expansion. Both are plausible.

The immediate doubling of rate limits is the practical takeaway. If you've been hitting Claude Code ceilings, May 6 was a good day. But the orbital interest is the headline that'll age weirdly over the next decade.

Sources