Meta’s Prometheus data center in New Albany, Ohio, is scaling to 1.2 GW. To get there, they’re building behind-the-meter natural gas turbines — two 200 MW Socrates generation facilities, supplied by dedicated gas pipelines, isolated from the grid. In Virginia, the same story plays out with diesel generators, enough of them that it became the top legislative concern entering the 2026 session.
The industry talks about PUE as if it were a verdict on environmental efficiency. It isn’t. PUE measures one envelope — the data center facility. Total facility power divided by IT equipment power. A PUE of 1.3 means 30% overhead for cooling, lighting, and support systems. That’s the metric everyone optimizes, the number that shows up in sustainability reports, the figure that earns applause at conferences.
But PUE stops at the meter.
The Heat Before the Meter
A simple-cycle gas turbine — the kind you deploy fast, behind the meter, to avoid three-year grid interconnection queues — operates at roughly 35-40% electrical efficiency. For every watt of electricity it delivers, it dumps 1.5 to 1.9 watts of heat into the atmosphere through exhaust stacks, radiators, and waste heat rejection. That heat never appears in any PUE calculation. It doesn’t exist in the data center’s energy model. It’s invisible.
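That ratio is just arithmetic on the efficiency figure. A quick sanity check, using the efficiency range quoted above rather than any particular turbine’s datasheet:

```python
# Heat rejected per watt of electricity delivered, for a generator with
# electrical efficiency eta: (1 - eta) / eta.
# The efficiencies below are the rough range quoted in the text.
for eta in (0.35, 0.38, 0.40):
    print(f"{eta:.0%} efficient -> {(1 - eta) / eta:.2f} W of heat per W delivered")
```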
Run the numbers for a 1.2 GW facility powered by simple-cycle gas turbines at 38% efficiency:
| Component | Power / Heat | Notes |
|---|---|---|
| Fuel burned (thermal input) | ~3.16 GW | Required to deliver 1.2 GW electrical |
| Waste heat from power generation | ~1.96 GW | Exhausted locally, outside PUE boundary |
| IT equipment heat | ~0.92 GW | 1.2 GW ÷ PUE 1.3; every watt in becomes a watt of heat out |
| Cooling and facility overhead (PUE 1.3) | ~0.28 GW | Also becomes heat |
| Total heat dumped into the local environment | ~3.16 GW | Equals the fuel burned; all of it ends up as heat |
PUE says 1.3. The local atmosphere sees roughly 3.2 GW of thermal load to run a 1.2 GW facility, because every joule of fuel burned ends up as heat somewhere nearby. That’s an effective thermal ratio of 2.6, and closer to 3.4 measured against the IT load alone.
In BTU terms, that’s roughly 11 billion BTU per hour — continuously — concentrated in whatever rural county was lucky enough to attract the investment. For reference, that’s the sustained thermal output of roughly 110,000 residential furnaces (at about 100,000 BTU per hour apiece) running full blast, 24 hours a day, 365 days a year.
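If you want to rerun that energy balance yourself, here’s a minimal sketch of it, under the same assumptions: the on-site turbines deliver the full 1.2 GW of electricity, simple-cycle efficiency is 38%, and the facility runs at PUE 1.3.

```python
# Energy balance behind the table above. Assumptions (mine, not Meta's):
# 1.2 GW delivered by behind-the-meter turbines, 38% electrical
# efficiency, facility PUE of 1.3.
DELIVERED_GW = 1.2   # electrical output of the on-site turbines
EFFICIENCY = 0.38    # simple-cycle gas turbine, electrical
PUE = 1.3            # total facility power / IT power

fuel_input_gw = DELIVERED_GW / EFFICIENCY            # ~3.16 GW thermal
generation_waste_gw = fuel_input_gw - DELIVERED_GW   # ~1.96 GW
it_heat_gw = DELIVERED_GW / PUE                      # ~0.92 GW, all becomes heat
overhead_heat_gw = DELIVERED_GW - it_heat_gw         # ~0.28 GW, also heat

# Every watt of electricity eventually becomes heat, so the total local
# thermal load is simply the fuel burned.
total_heat_gw = generation_waste_gw + it_heat_gw + overhead_heat_gw
btu_per_hr = total_heat_gw * 1e9 * 3.412             # 1 W = 3.412 BTU/hr

print(f"Fuel input:              {fuel_input_gw:.2f} GW thermal")
print(f"Generation waste heat:   {generation_waste_gw:.2f} GW")
print(f"Total local heat:        {total_heat_gw:.2f} GW")
print(f"Effective thermal ratio: {total_heat_gw / DELIVERED_GW:.2f} (PUE says {PUE})")
print(f"BTU per hour:            {btu_per_hr / 1e9:.1f} billion")
```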
PUE didn’t vanish because the metric is wrong. PUE vanished because the boundary moved. When you generate your own power on-site, the “first mile” of energy conversion dumps more than one and a half times the data center’s own heat load into the surrounding environment before a single GPU processes a single token. The data center’s PUE could be a pristine 1.05 and the local thermal reality would still be brutal.
The Diesel Variant
The gas turbine scenario is the clean version. In Virginia, data centers are running diesel generators — not just for backup, but for sustained prime power. Diesel generators operate at roughly 35% electrical efficiency and add particulate emissions, NOx, and the logistics of fuel delivery at industrial scale.
At 1.2 GW on diesel, the fuel math gets visceral:
| Metric | Value |
|---|---|
| Diesel consumption at full load | ~90,000 gallons per hour |
| Fuel cost per hour ($3.50-$4.70/gal) | $315,000 - $423,000 |
| Fuel cost per day | $7.6 - $10.2 million |
| Fuel cost per year | $2.8 - $3.7 billion |
| Waste heat from generation | ~2.23 GW |
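The whole table follows from three assumptions: 1.2 GW of electrical output, 35% generator efficiency, and roughly 130,000 BTU per gallon of diesel. A rough sketch of the arithmetic, with those assumptions spelled out:

```python
# Rough arithmetic behind the diesel table. Assumptions (mine): 1.2 GW of
# electrical output, 35% generator efficiency, diesel at roughly
# 130,000 BTU per gallon, $3.50-$4.70 per gallon delivered.
OUTPUT_W = 1.2e9
EFFICIENCY = 0.35
BTU_PER_GALLON = 130_000
WATT_TO_BTU_PER_HR = 3.412

fuel_input_w = OUTPUT_W / EFFICIENCY      # ~3.43 GW thermal
waste_heat_w = fuel_input_w - OUTPUT_W    # ~2.23 GW
gal_per_hr = fuel_input_w * WATT_TO_BTU_PER_HR / BTU_PER_GALLON

print(f"Fuel burn: ~{gal_per_hr:,.0f} gal/hr")
print(f"Waste heat from generation: ~{waste_heat_w / 1e9:.2f} GW")
print(f"Superyacht-equivalents at 500 gal/hr under way: ~{gal_per_hr / 500:,.0f}")
for price in (3.50, 4.70):
    hourly = gal_per_hr * price
    print(f"At ${price:.2f}/gal: ${hourly:,.0f}/hr, "
          f"${hourly * 24 / 1e6:.1f}M/day, ${hourly * 24 * 365 / 1e9:.1f}B/yr")
```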
Those are the numbers nobody puts in the sustainability report. The PUE slide looks great. The diesel trucks running 24/7 to keep the generators fed don’t make the keynote.
The Yacht Problem
There’s a certain irony here that’s hard to ignore. A 200-foot superyacht burns roughly 130 gallons of diesel per hour just sitting at anchor — generators running, hotel systems on, nobody going anywhere. Under way at cruising speed, consumption climbs past 500 gallons per hour. A full tank on the larger vessels holds 50,000 gallons. Annual fuel budgets for mega-yachts routinely exceed $2 million, often hitting $4 million.
A 1.2 GW data center on diesel burns what a fleet of 180 superyachts burns under way at cruising speed. Per hour. Every hour.
The same executives who commission these yachts sit in boardrooms discussing PUE optimization and carbon neutrality pledges. I won’t call it hypocrisy — progress charges its tolls, and data centers deliver genuine economic value that yachts don’t. But the contrast deserves a look, at minimum, when someone presents a sustainability slide that starts and ends at the meter.
Meanwhile, in Orbit
Then there’s the idea of putting data centers in space.
Starcloud — formerly Lumen Orbit, backed by Y Combinator, NFX, and scout funds from Andreessen Horowitz and Sequoia — launched its first satellite in November 2025 with a single NVIDIA H100 onboard. They trained an LLM in orbit. They raised $21 million. They’re planning a second satellite for October 2026 with “100x the power generation” of the first.
I’m going to take this seriously for exactly as long as the physics deserves.
The Spec Sheet Problem
I wonder if anyone at Starcloud has read the NVIDIA DGX H100 User Guide — specifically the environmental requirements section. It’s not buried. It’s right there in the manual:
- Operating temperature: 5°C to 30°C
- Relative humidity: 20% to 80%, non-condensing
- Vibration: “Away from sources of vibration or physical shock”
- Environment: “Clean, dry, and free of airborne particles”
- Airflow: 1,105 CFM front-to-back per system
Space offers none of this. A rocket launch subjects payloads to sustained vibration loads exceeding 6g and acoustic pressures above 140 dB. Once in orbit, the thermal environment swings between -150°C in shadow and +120°C in direct sunlight. Humidity is zero — vacuum doesn’t do “non-condensing.” And the 1,105 CFM of front-to-back airflow the H100 requires? There’s no air.
Which raises a sincere question: would NVIDIA honor the warranty on that GPU?
I’m not being rhetorical. Data center operators negotiate warranty terms, SLA response times, and advance replacement logistics as part of every hardware procurement. NVIDIA’s specifications define the operating envelope. Outside that envelope, you’re on your own. One H100 in orbit is a proof-of-concept press release. A fleet of them is an uninsurable asset running outside every parameter the manufacturer specifies.
The Cooling Problem Nobody Mentions
On Earth, we reject heat through conduction and convection: coolant carries heat from the silicon to a radiator or cooling tower, and air or water carries it away. Liquid cooling works because pumps keep the coolant moving and because there’s an atmosphere at the end of the chain to reject heat into; even the buoyancy-driven convection that makes air cooling cheap only exists under gravity.
In orbit, the only mechanism for heat rejection is radiation. No air, no convection, no conductive path to anything. You need massive radiator panels — the International Space Station uses 1,400 square feet of radiators to reject about 70 kW. A single DGX H100 system pulls 10.2 kW. A SuperPod pulls 1.3 MW. The radiator surface area required for a SuperPod would rival the solar arrays needed to power it in the first place, and both are competing for the same structural real estate on a satellite that has to survive orbital debris at 7.8 km/s.
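For a sense of scale, here’s a hedged Stefan-Boltzmann sketch of the radiator area involved; the panel temperature, emissivity, and effective sink temperature are my assumptions, not anyone’s published design:

```python
# Hedged radiator sizing via the Stefan-Boltzmann law. Panel temperature,
# emissivity, and effective sink temperature are assumptions.
SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W/m^2/K^4
EMISSIVITY = 0.9    # typical white radiator coating
T_PANEL_K = 320.0   # ~47 C panel temperature, generous for GPU coolant
T_SINK_K = 250.0    # rough effective sink including Earth IR and albedo

# Net heat radiated per square metre of a single panel face
flux_per_m2 = EMISSIVITY * SIGMA * (T_PANEL_K**4 - T_SINK_K**4)

for label, heat_w in [("DGX H100 (10.2 kW)", 10.2e3),
                      ("SuperPod (1.3 MW)", 1.3e6)]:
    area = heat_w / (2 * flux_per_m2)  # panel radiates from both faces
    print(f"{label}: ~{area:,.0f} m^2 of two-sided radiator panel")
```

Run against the ISS figure above, the same formula lands within roughly 20% of the quoted radiator area, so the estimate is crude but not wildly off.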
The Latency Problem That Kills the Use Case
Even if you solved power, cooling, and reliability — what exactly are you computing up there?
LEO satellites at 550 km altitude sit 20-40 milliseconds of round-trip latency away from Earth in practice (the speed-of-light floor is a few milliseconds; ground-station routing and queuing make up the rest). That sounds fast until you remember what AI workloads actually need. NVLink latency between GPUs in the same rack is sub-microsecond. InfiniBand between racks runs in single-digit microseconds. AI training depends on gradient synchronization across thousands of GPUs — adding 20-40 ms per round trip to Earth for any data that needs to leave the satellite makes distributed training architecturally unworkable. AI inference at scale needs low tail latency to serve users — 20-40 ms of orbital delay before the response even starts traveling isn’t a rounding error.
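The propagation floor alone takes one line to compute; the NVLink and InfiniBand figures below are order-of-magnitude stand-ins, not benchmarks:

```python
# Speed-of-light floor on an orbital round trip vs. intra-data-center
# latencies. NVLink and InfiniBand figures are order-of-magnitude assumptions.
C = 299_792_458.0   # m/s
ALTITUDE_M = 550e3  # LEO altitude used above

best_case_rtt = 2 * ALTITUDE_M / C   # straight down and back, no routing
nvlink_rtt = 1e-6                    # ~1 microsecond within a rack
infiniband_rtt = 5e-6                # single-digit microseconds between racks

print(f"Best-case orbital RTT: {best_case_rtt * 1e3:.1f} ms "
      f"(real paths via ground stations run 20-40 ms)")
print(f"vs NVLink:     ~{best_case_rtt / nvlink_rtt:,.0f}x")
print(f"vs InfiniBand: ~{best_case_rtt / infiniband_rtt:,.0f}x")
```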
So you can’t train models that need more than one satellite’s worth of compute. And you can’t serve inference at competitive latency. What’s left?
The Weight of Getting There
In 2017, I was involved early on in brainstorming an M2M project in which a customer considered deploying a stationary satellite to cover a specific area for logistics security. The project was dead on arrival. Not because the idea lacked ambition — it lacked contact with the cost structure of orbital operations, which is the kind of thing that kills ambition efficiently.
Let’s do the math with the smallest meaningful unit — a single NVIDIA DGX SuperPod. Thirty-two DGX H100 systems plus InfiniBand networking. The compute nodes alone weigh 4,174 kg. Add switches, cabling, power distribution, and whatever thermal system you’ve invented that doesn’t exist yet — call it 6,000 kg conservatively.
| Vehicle | Cost per kg to LEO | SuperPod launch cost |
|---|---|---|
| Falcon 9 (dedicated) | ~$2,720 | ~$16.3 million |
| Falcon Heavy | ~$1,400 | ~$8.4 million |
| Starship (projected, single-use) | ~$1,200 | ~$7.2 million |
| Starship (projected, 20+ reuses) | ~$30-100 | ~$180-600K |
Even at Starship’s projected single-use pricing — and the deep-reuse numbers haven’t been demonstrated commercially — you’re adding millions in launch costs to hardware that already costs millions and that you cannot service, cannot upgrade, and cannot physically reach when it fails.
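The table above is nothing more than mass times price per kilogram; here it is as a sketch, using the same rough figures:

```python
# Launch-cost arithmetic behind the table. The 6,000 kg payload estimate
# and per-kg prices are the rough figures used in the text.
PAYLOAD_KG = 6_000  # ~4,174 kg of DGX H100 nodes plus networking, power, thermal

cost_per_kg = [
    ("Falcon 9 (dedicated)", 2_720),
    ("Falcon Heavy", 1_400),
    ("Starship (projected, single-use)", 1_200),
    ("Starship (projected, 20+ reuses, low end)", 30),
    ("Starship (projected, 20+ reuses, high end)", 100),
]

for vehicle, usd_per_kg in cost_per_kg:
    print(f"{vehicle}: ${PAYLOAD_KG * usd_per_kg / 1e6:.2f}M to LEO")
```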
The Serviceability Problem We Already Solved — By Quitting
Microsoft tried something far less ambitious with Project Natick — submerged data centers in the ocean. The results were actually promising: fewer server failures than land-based equivalents over two years of zero human intervention. The thermal environment was more stable than air cooling. On paper, it worked.
Microsoft shelved it anyway. The reason was blunt: AI workloads require continuous work on the racks. GPU clusters need firmware updates, failed components swapped, network reconfigurations, capacity expansions. You can’t do that through a pressure hull at the bottom of the ocean — and people criticized Natick precisely for that serviceability gap, even though you could at least winch the thing back to the surface.
Now take that same criticism and move it 550 km straight up, to a platform traveling at 7.8 km/s, where a service call requires a rocket.
We struggle with asset depreciation right here on Earth. OCP and OEM suppliers can’t handle simple PSU replacements while honoring SLAs — I’ve lived this. A power supply fails in a colo in Virginia, and you’re waiting three weeks for a replacement that was supposed to arrive in four hours. NVIDIA’s upcoming Vera Rubin platform specifies 45-degree Celsius water cooling with no chillers — a genuine engineering achievement that took years of thermal design, CFD simulation, and materials science to validate. Thousands of engineers are working on CFD, fluid mechanics, thermodynamics, custom cold plates, and turbine designs for problems that exist right here.
I’m not saying orbital computing is impossible. I’m saying that sometimes, someone needs to look across the table and say: buddy, you’re too far away.
There’s a saying — there’s no such thing as “throwing something away.” You’re just putting it outside. These proposals take that literally.
What’s Actually Being Solved
The liquid cooling market hit $4.8 billion in 2025 and is projected to cross $6 billion in 2026. Direct-to-chip systems hold 43% market share. Immersion cooling is growing at 27% annually. Vera Rubin accepts 45-degree supply water — warm enough that ambient air rejection works without mechanical chillers in most climates. That single specification eliminates an entire layer of energy consumption from the cooling chain. It’s not glamorous. It won’t get funded by Sequoia. But it’s the kind of engineering that actually bends the cost curve.
Creative Solutions for Non-Existent Problems
The pattern isn’t new. Every gold rush produces two kinds of companies: the ones digging for gold, and the ones selling shovels to people who’ll never find any. Space data centers belong to a third category — the ones selling the idea that you don’t need shovels at all, because the gold is on the moon.
The billionaires who got bored spending half a billion on yachts that burn hundreds of gallons of diesel per hour now say they’re worried about PUE. And the solution they fund isn’t better power generation, or district heating integration, or waste heat recovery, or grid modernization. The solution is orbital computing.
There are real problems to solve. The thermal load from on-site power generation is one of them — a gigawatt-scale externality that nobody measures and nobody reports. The three-year queue for grid interconnection is another. The fact that we’re deploying 600 kW racks cooled by 45-degree water into buildings designed for 5 kW racks cooled by chilled air — that’s a real problem, and solving it requires real engineering, done here, on Earth, where the physics is hard enough already.
Everything else is a press release.