URE — Unified Resilience Engineering

The human brain is hardwired for the tangible. We understand what we’ve touched, carried, walked through. Ask an American to picture 2.3 kilos of meat and you’ll get a blank stare. Tell a European to walk 80 feet down a corridor, rather than past three doors, and they’ll overshoot it. We don’t process abstract units — we process experience.

Now scale that problem up.

Austin, Texas sprawls across 305 square miles. San José, California covers 180. Each one draws roughly one gigawatt from the grid. That’s an entire city — hospitals, traffic lights, air conditioning, schools, everything humming at once. One gigawatt.

Outside Houston, a single data center campus sits on less than one square mile. It draws the same gigawatt.

Condense everything Austin consumes — every house, every hospital, every streetlight — into a footprint smaller than a neighborhood park. That’s what a hyperscale data center is. And more than a dozen of them are being built across the United States right now.

Want to get serious?

A grizzly bear weighs about 600 pounds. The rat behind your local dumpster weighs about two. That’s a 300-to-1 ratio — roughly the same as the ratio between the sprawl of Austin, Texas, and the footprint of a single one-gigawatt data center campus. Except the grizzly doesn’t consume 300 times the rat’s oxygen. The data center consumes every watt the city does.
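A quick back-of-envelope check of that ratio, in Python. The figures are the ones above; treating the campus as a full square mile is my own upper bound:

```python
# Areal power density: city vs. hyperscale campus, same one-gigawatt load.
SQ_MILE_M2 = 2.59e6  # square meters per square mile

power_w = 1e9         # one gigawatt in both cases
city_area_mi2 = 305   # Austin's sprawl
campus_area_mi2 = 1   # campus footprint, rounded up from "less than one"

city_density = power_w / (city_area_mi2 * SQ_MILE_M2)      # ~1.3 W/m^2
campus_density = power_w / (campus_area_mi2 * SQ_MILE_M2)  # ~386 W/m^2

print(f"city:   {city_density:.1f} W/m^2")
print(f"campus: {campus_density:.0f} W/m^2")
print(f"ratio:  {campus_density / city_density:.0f}x")     # ~305x
```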

That’s the beast we’ve built. A rat with a grizzly’s appetite — lab-made, power-dense, and beyond anything the old playbook was designed to feed. What goes in as power comes out as heat. Every watt. No exceptions.
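To make “every watt” concrete, here is a minimal heat-rejection sketch. The 10 K coolant temperature rise is my assumption, not a design figure from any real facility:

```python
# Water flow needed to carry away one gigawatt of heat, steady state.
Q = 1e9    # heat load in watts: every watt in comes out as heat
CP = 4186  # specific heat of water, J/(kg*K)
DT = 10    # coolant temperature rise in K (assumed)

mass_flow_kg_s = Q / (CP * DT)
print(f"{mass_flow_kg_s:,.0f} kg/s of water")  # ~23,900 kg/s
# Roughly 24 tonnes of water moving through the loop every second,
# continuously, just to hold temperature steady.
```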

This is a physics problem. When you compress a gigawatt into a square mile, everything downstream — power conditioning, thermal capacity, transient management, reliability — behaves differently than anything we’ve operated before. The rules that governed traditional data centers don’t scale to AI infrastructure energy density. The playbooks don’t transfer. The dashboards lie.
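One example of “behaves differently”: a synchronized training job swings between compute-heavy and communication-heavy phases, and the whole fleet steps together. Every figure below is an illustrative assumption, not a measurement:

```python
# Illustrative load swing from one synchronized training job.
nodes = 4000           # "thousands of nodes" (assumed count)
accels_per_node = 8    # assumed
p_compute_w = 700      # per-accelerator draw in the compute phase (assumed)
p_comm_w = 350         # per-accelerator draw during collectives (assumed)

swing_mw = nodes * accels_per_node * (p_compute_w - p_comm_w) / 1e6
print(f"load swing: {swing_mw:.1f} MW")  # ~11 MW, stepping together
```

A city’s diversified load never moves like that; a single scheduler decision does.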

There are no playbooks written for the AI era — not for AI Factories, Token Factories, or Deep Training Facilities. We’re talking about ML and RL training jobs spanning thousands of nodes on a typical Tuesday.

For perspective: five years ago, the most powerful computer on Earth was Fugaku — 159,000 nodes, 432 racks, a billion dollars of Japanese national investment, drawing 30 megawatts. Japan bragged about it. Scientists waited in line for access. It was a generational achievement.

Today, a NeoCloud launches a “modest” 300-megawatt facility — ten Fugakus’ worth of power — and it doesn’t even make headlines.

Physics doesn’t negotiate. URE starts there.

My work — building data centers from scratch in highly constrained environments — shaped a particular vision of how the stack is actually built. Not how it’s drawn in architecture diagrams, but how it behaves under load, under budget pressure, and under the laws of thermodynamics.

You can read more about my background at /stefano-schotten/.

This world isn’t one-size-fits-all. URE works the seams between operational layers — power, thermal, compute, network, cost, compliance — and turns them into reliable deliverables with real economic value. Not another dashboard. Not another abstraction. A reasoning method built from twenty years of scars, for an era that doesn’t yet have playbooks.

I break the infrastructure stack into five operational layers: foundation, AI-era baseline, infrastructure and hardware, software and orchestration, and economics — each one inheriting the failures of the layer below it. That framework is the backbone of everything I publish here.
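To sketch that inheritance (the layer names are from the framework above; the code itself is illustrative, not a published API):

```python
# Five operational layers; each inherits the failure modes of everything below it.
LAYERS = [
    "foundation",
    "AI-era baseline",
    "infrastructure and hardware",
    "software and orchestration",
    "economics",
]

def effective_failures(own: dict[str, list[str]]) -> dict[str, list[str]]:
    """A layer's effective failure set is its own plus every layer beneath it."""
    out: dict[str, list[str]] = {}
    carried: list[str] = []
    for layer in LAYERS:
        carried = carried + own.get(layer, [])
        out[layer] = list(carried)
    return out

faults = {"foundation": ["utility transient"], "software and orchestration": ["scheduler bug"]}
print(effective_failures(faults)["economics"])
# ['utility transient', 'scheduler bug'] -- the top layer carries them all
```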

