- Sovereign Data Center Ops
- Full-stack CSP with integrated Security-as-a-Service
- Performance-Optimized Computing — GPU-dense and HPC workloads
- Hands-on AI Lab — automation use cases and integration patterns
- Mapping and building solutions for AI-era infrastructure challenges
- Flushing the seams between physics and tokens
20 years building mission-critical infrastructure - from concrete to kernel.
Resilience is an emergent property. Operational safety is earned — physics doesn’t negotiate, and neither should your architecture.
Hi, I’m Stefano.
I co-founded AMTI in 2005, when I was 22. I’d left Production Engineering to go all-in on building something from scratch. AMTI was born from a simple conviction: do things the right way. Spoiler — it worked.
In the first phase, we worked with Linux servers and connectivity. In those first four years, more than 90% of our customers were in the industrial sector, and that gave us a deep sense of responsibility from day one. Workloads were 24/7. Maintenance windows were rare. We built serious credibility by migrating COBOL applications from Unix systems to Linux runtimes, delivering roughly 10x performance gains along with dramatically better availability.
That’s where we got close to Linux primitives. By 2008, we were managing thousands of physical nodes booting NFS-hosted images over PXE, with heavy automation written in bash. We used cgroups on bleeding-edge kernels and chrooted environments to isolate workloads. A few years later, that approach got a name — containers. We were doing it before it had one.
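The pattern looked roughly like this. A minimal sketch, assuming cgroup v1 interfaces and illustrative names and paths; the real automation was far larger and none of these identifiers come from it:

```shell
# Pre-container isolation, late-2000s style: a v1 cgroup caps the
# workload's CPU weight, then chroot confines it to its image tree.
# All names and paths are illustrative, not the original scripts.
isolate_run() {
  name="$1"; rootfs="$2"; shift 2
  cgdir="/sys/fs/cgroup/cpu/$name"
  mkdir -p "$cgdir"
  echo 256 > "$cgdir/cpu.shares"   # weight this job so it can't starve peers
  echo $$ > "$cgdir/tasks"         # move the current shell into the group
  exec chroot "$rootfs" "$@"       # swap the root filesystem and run the job
}
# usage (as root): isolate_run batch01 /srv/images/batch01 /opt/app/run.sh
```

Resource limits plus a private filesystem view: the same two ingredients container runtimes later wrapped in namespaces and image formats.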
In 2009, IBM recruited us into its channel expansion program. We spent the next few years as a strategic partner — a “virtual branch” in their regional-coverage plan, serving their geographic growth. It was validating and clarifying. We weren’t the small service provider anymore; we were drifting toward becoming a VAR — and that’s not what I’d built AMTI to be.
In 2010, I went back to school. Night classes in Economics, while running the company during the day. I needed to understand the business layer with the same depth I understood the technical one. That decision shaped everything that came after.
By the end of 2011, another inflection point. Dell was launching its indirect sales model in Brazil, and based on our reputation, we were recruited again — this time as a top-15 Premier partner nationwide. Two major OEMs had now come to us, not the other way around.
From 2011 to 2015, I was a Senior Solutions Architect, designing highly available, mission-critical environments for customers in healthcare, banking, agribusiness, and pharma — industries where downtime isn’t an inconvenience; it’s a liability. We became responsible for more than 80% of all Dell sales in our state and their go-to partner for data center projects.
2015 was the pivotal year. Multiple clients came to us with the same demand: “We need off-site backup, but not cloud — we want data sovereignty.” They trusted us with their most critical workloads. They didn’t trust a hyperscaler with their data.
So we built our first data center. Same conviction we started with — do things the right way. But this time, there were no peers to learn from. Nobody in our market had operated this way. We were pioneering in a space where the playbook didn’t exist — so we wrote our own.
From 2015 to 2021, we scaled 60x. Not by chasing commoditized IaaS, but by understanding what our customers actually needed — someone to own the problem end to end.
We became an MSSP and edge computing provider in 2017, years before either term became a sales deck staple. We offered BaaS, DRaaS, and fully managed services. Our pitch was simple: take care of your business — we take care of your data.
We built baselines, guardrails, and disaster recovery plans. We owned SLO metrics across multiple premises. We ran software-defined data center architecture from day one. And we delivered more than 2,000 consecutive days of uptime.
That number isn’t a marketing claim. It’s what happens when reliability is a culture, not a checkbox.
In 2021, I stepped back from daily operations and moved to the United States. I was granted an EB-2 visa with a National Interest Waiver — through Cyber Resilience. The US government recognized that what I’d spent 16 years building wasn’t just a business. It was expertise in the national interest.
I earned my CISSP — not as a credential to hang on the wall, but because it translated directly into our business. We weren’t just selling data synchronization. We were building a security mental model for our customers — a framework they could trust with the workloads they couldn’t afford to lose.
That’s where URE begins — as an overseas R&D project, sourcing and implementing technology to our standards in Brazil. My last major project was designing a Tier III facility for AMTI, and it pushed us into territory we’d never operated in before. We applied American standards on Brazilian soil: ASHRAE, NIST CSF, PCI, SOC 2, TIA-942, IEEE, AHRI. What had been a scrappy operation built on instinct and Linux primitives was now engineering discipline at institutional scale.
In 2024, after 19 years, this chapter closed. We’d fielded M&A interest for years, and with my family now rooted in the US, it was time to let someone closer to the local market steer AMTI forward. The company was acquired.
Today, I sit on AMTI’s board as Technical and Security Advisor — offering the same instincts I built over two decades to a company that now serves more than 15,000 unique customers across thousands of miles of fiber-optic metro mesh.
My morning coffee alternates between a Hario pour-over — slow, deliberate, maybe I’ll share some recipes here someday — and a Jura Z10 when efficiency wins. That duality runs through everything I do.
Everything I built was built with discipline and deep immersion across many domains. I learned early that full accountability for execution isn’t optional. When things go south, pointing fingers is useless — you keep things running as best you can. You don’t outsource your operational failures to the market. The market is sovereign. You adapt to it, or it replaces you.
This is how I approach URE — based on 20 years of building, from concrete to kernel. I don’t like to brag; in fact, I find it distasteful. But I believe I’ve seen firsthand more ways that infrastructure can fail than most people in this field. Lightning strikes. A gecko electrocuted inside the switchgear, taking out the automatic transfer switch. Designing parallel cogeneration synchronization for GPU transient workloads. Discovering firmware bugs in production and waiting for vendors to push fixes while my team sat in the cold aisle, keeping things alive by hand. I don’t trust all-green dashboards. If everything looks fine, something isn’t being measured. Data centers and distributed systems are dynamic living organisms, and I trust the laws of physics.
Twenty years of that teaches you something that can’t be compressed into frameworks: resilience isn’t a feature you add. It’s a property that emerges — or doesn’t — from how power, thermal, network, compute, and human judgment interact under pressure.
Writing is a long-dormant passion I’ve only recently recovered — but telling stories I’ve actually lived turns out to be fluid and straightforward. I believe you shouldn’t write about what you haven’t earned the right to speak on. Everything here is based on my own experience and perception. Nothing is perfect — opinions, methods, and conclusions will evolve.
What won’t change is the foundation: physics doesn’t negotiate. You can’t dodge it — not in a thermally soaked hot aisle, not fighting harmonics with fast-triggered capacitors, not managing GPU throttling 70 hours into a training job over an InfiniBand fabric.
So, what do I do?
I do what I love. Every day, I walk into my office for research and development — and after two decades, it’s still a privilege to stay faithful to what I’ve always been: someone who needs to know what’s under the hood. I’ve been that way since I was six, when my parents stopped gifting me anything electronic — because it would be disassembled within hours. I needed to know how things worked.
I hold two provisional patents filed in recent months:
Predictive Power Conditioning for High-Transient Workloads — kernel-level signaling that talks to the BMS in microseconds, not milliseconds. When a GPU pod steps from idle to full load, legacy protection systems react too late. The capacitor banks fire after the damage starts. This system detects the transient before the breaker trips — the kind of early warning that could have prevented the electrical fires we’ve seen in data center power rooms over the past year.
Gate-Conditioned Cloud Provisioning Based on Accountability — infrastructure spend is an operational resilience decision, not a finance problem. This framework gates every provisioning action through explainable accountability — who approved it, what it actually costs, and what breaks if it disappears. When cloud spend is ungoverned, you don’t have a billing problem. You have a resilience problem: resources nobody owns, dependencies nobody mapped, and blast radius nobody scoped.
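As a toy illustration of the gating idea (my own sketch of refusal-by-default, not the patented mechanism; every name below is hypothetical):

```shell
# Toy sketch of accountability-gated provisioning: no named approver,
# no cost estimate, no mapped blast radius means no resource.
# This is an illustration of the principle, not the patented framework.
gate() {
  resource="$1"; approver="$2"; cost="$3"; blast="$4"
  [ -n "$approver" ] || { echo "$resource: denied, no accountable approver"; return 1; }
  [ -n "$cost" ]     || { echo "$resource: denied, cost not estimated"; return 1; }
  [ -n "$blast" ]    || { echo "$resource: denied, blast radius unmapped"; return 1; }
  echo "$resource: approved by $approver"
}
```

A request like `gate gpu-pool-a jdoe 12000 train-jobs` passes; an orphaned request with no owner is refused before anything is provisioned. The point is the default direction: unowned means denied, not flagged for later review.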
I was taking apart radios at five. Forty years later, I’m still taking things apart — power systems, thermal loops, cost structures, vendor assumptions. The only difference is that now I write down what I find.
Contact
- Email: s@ure.us
- LinkedIn: linkedin.com/in/schotten
Stefano Schotten