One Million Orbital Data Centers: AI's Frontier
By Alexander Cole

One million data centers in orbit—AI's next frontier, but the price tag is astronomical.
MIT Technology Review’s explainer captures a bold bet: in January, SpaceX filed with the FCC to loft up to one million data centers into Earth’s orbit, with the aim of unleashing AI without worsening Earth’s environmental footprint. The idea isn’t simply “more compute up there” but “compute where the grid and water resources are strained the most.” The piece also flags parallel ambitions from other giants. Jeff Bezos has floated the idea that the tech industry will migrate large-scale computing to space. Google has talked about lofting data-crunching satellites and aims to test a constellation of about 80 units as early as next year. And Starcloud, a Washington State startup, has already flown a satellite fitted with an Nvidia H100 GPU—the first orbital test of an advanced AI chip—and envisions orbital data centers growing to Earth-scale sizes by 2030.
Proponents argue that moving data centers off-planet could address two core pain points on Earth: water use for cooling and the burden on energy grids. In space there are no evaporative cooling towers drawing down local water supplies; you leverage vacuum, solar power, and a closed thermal loop. On the surface, the logic reads clean: if you can radiate waste heat into space and power the hardware with sun-facing arrays, you shrink Earth's operational footprint while still delivering AI throughput. The catch is that the gap between a demonstration and that scale is enormous, and the physics and economics are messy.
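To see why the physics gets messy, a back-of-envelope sketch helps. In vacuum the only way to shed waste heat is thermal radiation, governed by the Stefan-Boltzmann law, so radiator area scales directly with compute power. The numbers below (radiator temperature, emissivity, a 1 MW module) are illustrative assumptions for this sketch, not figures from any of the projects named above.

```python
# Rough radiator sizing for an orbital compute module.
# Heat rejection in vacuum follows the Stefan-Boltzmann law:
#   P = emissivity * sigma * A * T^4  (per radiating side).
# All parameter values are illustrative assumptions.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(heat_w: float, temp_k: float = 300.0,
                     emissivity: float = 0.9, sides: int = 2) -> float:
    """Radiator area needed to reject `heat_w` watts at `temp_k`.

    Ignores absorbed sunlight and Earth infrared, which in practice
    make the real requirement larger, not smaller.
    """
    flux = emissivity * SIGMA * temp_k ** 4  # W per m^2 per side
    return heat_w / (flux * sides)

if __name__ == "__main__":
    # A modest 1 MW module -- a small fraction of a terrestrial AI campus.
    area = radiator_area_m2(1_000_000)
    print(f"~{area:,.0f} m^2 of double-sided radiator per MW")
```

Even under these generous assumptions, every megawatt of compute demands on the order of a thousand square meters of radiator, which is mass that must be built, launched, and kept pointed away from the sun.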
The MIT piece catalogs four prerequisites to make orbital data centers feasible—ranging from robust power and cooling systems to launch logistics, radiation-hard hardware, and debris-management risk. A practical read is that this is less a single invention and more a supply-chain puzzle: building modular, repairable, radiation-tolerant hardware that can survive months in orbit, while guaranteeing updates, uptime, and safe disposal in a congested near-Earth environment. The excitement about a potential 2030 horizon sits against a long list of real-world constraints: launch costs, on-orbit maintenance, reliability over multi-year lifetimes, and the regulatory dance around orbital slots and debris mitigation.
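The launch-logistics constraint can be made concrete with equally rough arithmetic. Both inputs below, hardware mass per kilowatt and launch price per kilogram, are hypothetical placeholders chosen for illustration; public estimates for each vary by an order of magnitude.

```python
# Illustrative launch-cost arithmetic for orbital compute.
# Both default parameters are loudly hypothetical assumptions:
# real masses and launch prices vary by an order of magnitude.

def launch_cost_usd(power_kw: float,
                    kg_per_kw: float = 10.0,     # assumed mass of servers,
                                                 # solar arrays, and radiators
                    usd_per_kg: float = 1_500.0  # assumed heavy-lift price to LEO
                    ) -> float:
    """Cost just to launch the mass of a facility drawing `power_kw`."""
    return power_kw * kg_per_kw * usd_per_kg

if __name__ == "__main__":
    # A 1 GW facility, comparable to a large terrestrial AI campus.
    cost = launch_cost_usd(1_000_000)
    print(f"~${cost / 1e9:.0f}B in launch costs alone")
```

Shift either assumption by a factor of ten and the answer swings from plausible to prohibitive, which is why launch pricing sits at the center of every 2030 projection.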
A handful of concrete signals from the landscape color the bets. Starcloud's test flight with an H100 GPU proves that high-performance AI silicon can function in orbit, a necessary but far from sufficient milestone. Google's ambition to test a satellite-based data-crunching network, and Bezos's broader framing of space-centered compute, underline a systemic push: if terrestrial cooling and energy constraints keep tightening, a space-based path could become politically and technically attractive, at least as a hedge.
Analysts and practitioners should separate hype from execution risk: the angles to watch are launch pricing, on-orbit serviceability, radiation-induced failure rates, and how regulators handle orbital slots and debris.
In the near term, this is a bold thesis, not a shipped product. The quarter-to-quarter reality for most builders remains: orbital data centers are a strategic bet, a signal about where compute infrastructure could go, and a reminder that decarbonizing AI’s compute footprint is as much about logistics, policy, and hardware resilience as it is about silicon speed.
Analogy time: moving a data center to orbit is like shipping a skyscraper into the sky and hoping its elevators outlast the weather, the debris, and the need to patch software without on-site crews. It promises leaner Earth-side power and water use—if the physics, finance, and fleets of launches all line up.