2026-05-09

the alliance map is dead. compute scarcity killed it

anthropic just signed a third compute lane with spacex while remaining publicly skeptical of musk. that's not a partnership story. that's a capacity confession from every hyperscaler in the room.

the deal that shouldn't have happened

on thursday, anthropic locked in a compute pact with spacex, with a $119b chip fab proposal sitting alongside it. read that sentence twice. the lab whose ceo has spent two years positioning himself as the adult in the room on agi safety just signed a capacity deal with the founder of the lab anthropic's policy team treats as the cautionary tale. and they did it on top of an existing aws trainium commitment and a gcp tpu lane that was supposed to be the answer.

that is not a procurement decision. that is a capacity confession. when a lab adds a third compute partner, and that third partner is the one its policy posture is least compatible with, the only honest read is that neither of the first two could commit the 2027-2028 capacity the roadmap demands. the alliance map we all internalized in 2025, anthropic-aws-google, openai-microsoft, xai-solo, is being torn up in real time, and the tearing is happening on the supply side, not the model side.

the frame: multi-lane scramble

the consensus read on hyperscaler-lab pairings has been that they are stable, exclusive, and strategic. that read is roughly six months out of date. the actual dynamic is a multi-lane scramble in which every frontier lab is quietly negotiating capacity wherever it can find it, because no single hyperscaler can clear the 2027 ask. the pendulum has swung from hyperscaler exclusivity to a procurement free-for-all dressed up as strategic optionality.

the reason is arithmetic. morgan stanley now has the five hyperscalers spending $1.1t on capex in 2027, which is roughly all non-tech s&p 500 capex combined, in one year, from five companies. the implied 2030 ai revenue these hyperscalers need to clear that bar is around $1.5t. even if you believe that number, the more interesting question is what happens to the labs in the meantime, because hyperscaler capex on this scale doesn't translate to lab-allocated capacity. it translates to hyperscaler-allocated capacity, with the lab as one customer among many.
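for a sense of why capex on that scale implies a revenue bar in the low trillions, here is a back-of-envelope sketch. the capex figure is the morgan stanley number above; the depreciation schedule and margin are illustrative assumptions, not figures from the note:

```python
# toy model: how much annual revenue does $1.1t of capex demand?
# capex_2027 is from the morgan stanley figure cited above;
# dep_years and margin are illustrative assumptions, not from the note.
capex_2027 = 1.1e12   # five-hyperscaler capex, 2027
dep_years = 4         # assumed useful life of the hardware
margin = 0.20         # assumed operating margin the spend has to clear

annual_depreciation = capex_2027 / dep_years
# revenue needed so that the margin alone covers the depreciation charge
required_annual_revenue = annual_depreciation / margin
print(f"~${required_annual_revenue / 1e12:.2f}t per year")
```

tweak dep_years and margin and the bar moves, but under any plausible pair it lands in the same low-trillions range as the note's $1.5t, which is the point: the spend only pencils if ai revenue roughly the size of today's entire cloud market shows up by 2030.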

the receipts

start with tsmc. 2nm is fully allocated through 2028. broadcom and amd have both been telling investors that compute demand is materially outstripping commitments. when broadcom's custom silicon book is sold out and amd's mi-series is on allocation, that is not a 'we are growing fast' message; that is a supply-side signal that the people writing the checks know there isn't enough silicon to go around at any reasonable price.

then the labs themselves. xai is running colossus expansion while spacex prices a $119b fab. that is one founder building a frontier lab and pricing his own foundry because he has done the math on what waiting in the tsmc queue costs him. anthropic, meanwhile, is the lab with the weakest distribution-surface story of the big three; without claude on a default-surface browser or os, every dollar of capex has to be justified by api revenue and claude.ai growth alone. and they still added a third compute lane, which means either the api and claude.ai numbers are big enough to require it, or the roadmap requires it on faith. either way, the existing aws and gcp commitments did not cover it.

openai's situation rhymes. the microsoft exclusivity story quietly became a non-exclusivity story over the past year, with oracle, coreweave, and softbank all booked into the capacity stack. you don't sign with three additional infra providers if your lead patron is delivering on the curve.

the power side compounds it. the gpu shortage narrative is dead; the binding constraint is now power-interconnect queues and grid siting. meta, microsoft, and amazon have all signed nuclear ppas, and the orbital-compute framing in the spacex-anthropic deal is, more than anything, permitting arbitrage. you can't site a gigawatt campus in virginia anymore without a five-year interconnect fight. you can, in principle, put it in orbit. the technical viability of orbital compute on any 2027-2028 timeline is, charitably, optimistic. but the framing tells you which binding constraint these labs think they're solving for, and it is not silicon. it is watts and permits.

what could make this read wrong

the steelman is that this is overreading a procurement story. labs sign capacity contracts all the time, and a third lane on top of two existing ones could just be hedging, not capitulation. it is also possible that the 2027 hyperscaler capex actually does clear in time, and the panic-buying we're seeing is labs front-running their own roadmaps rather than responding to a real shortfall. the orbital-compute framing in particular could be a press-release flourish attached to a much more conventional terrestrial deal that uses spacex as a launch logistics vendor for ground-based fabs. you can construct that story.

the other counter is that compute demand cools. if the agent layer doesn't translate to the token volumes the 2027 capex is underwriting, the scramble unwinds and we end up in a glut by 2028. that's the bear case morgan stanley acknowledged in the same note. it would also be the first time in the modern hyperscaler era that capex front-ran demand and was wrong about the direction.

our read is that even granting both counters, the alliance map still doesn't survive. hedging on this scale, with this many lanes, is itself the structural change. the labs no longer trust their hyperscaler patrons to deliver, and the hyperscalers no longer treat their lab partners as exclusive. the relationship has become transactional in both directions, and transactional relationships do not produce the kind of capital alignment the 2025 alliance map implied.

what to do with this

three things follow.

first, stop pricing labs on their hyperscaler patron. the 2025 mental model where anthropic was an aws subsidiary in waiting and openai was a microsoft subsidiary in waiting is no longer load-bearing. the labs are going to spend 2026 and 2027 stitching together capacity from whatever combination of hyperscalers, neoclouds, and bespoke fabs clears the watts, and the resulting capital structure is going to look much messier than the clean exclusive pairings the consensus is still using.

second, watch the power side, not the silicon side. the most important number for any frontier lab in the next 18 months is not its parameter count or its benchmark line; it is its committed gigawatts and the year those gigawatts come online. labs that lock in power early get to ship; labs that don't, won't, regardless of how good their interpretability story is. the hidden ai supply chain is where the alpha is.

third, the policy game changes. if labs are no longer captive to a single hyperscaler, the regulatory-capture pitch ('only the largest labs can supervise frontier risk') gets harder to make, because the largest labs are visibly scrambling for capacity in ways that look nothing like calm supervision. the pre-release review push is going to keep getting louder precisely because the supply side is getting noisier. expect the safety framing to intensify in direct proportion to how chaotic the capacity scramble looks.

the 2025 alliance map was a useful fiction. the 2026 map is a procurement spreadsheet with too many open columns. anyone still drawing arrows between labs and hyperscalers as if the relationship is exclusive is reading a document that no longer describes the world.
