This is the second of a three-part series based on a real-world engagement: a company that scaled from $40M to $1B in annual revenue in just five years, and the security program that had to grow with it.

This is a story about building high-performance operating systems where security, standards, architecture, and performance act as enablers rather than constraints.

From Trust to Reliance

Trust is like a marble jar. Every act of presence, every moment you show up and deliver, every time you say “I’m here” and mean it — a marble drops in. No single marble fills the jar. But over time, the weight becomes undeniable. I heard this image once and it never left me, because it maps perfectly to how credibility actually works: not in grand gestures, but in accumulation.

But here’s the thing nobody warns you about: a full marble jar has a gravitational pull. Once people trust you, they lean on you. And if you’re not careful, you become the load-bearing wall in a building that was supposed to have redundant supports.

That’s exactly what happened to us.

The Weight of Being Available

Hyper-growth is not for everybody. Look at any job posting and watch how many seasoned professionals recoil at the word “startup.” They want the structured procedures, the clear escalation paths, and the comfort of saying, “Well, there was no SOP for that, so I didn’t act — but I added a note to the ticket.” That mindset doesn’t survive contact with a company doubling every year.

I was wired differently. I was available. I picked up the phone. I solved problems across boundaries. And because I did, the CFO started calling me directly. Then the CEO. Not through the account manager, not through the project channel — directly. For everything.

That felt like trust. And it was. But it was also a warning sign I didn’t recognize yet.

Hero Mode

The business was under sustained DDoS — not the nuisance kind, the existential kind. My team was sharp on security, but this was a different league. Our 10 Gbps firewalls were pegging their processors. We had to work directly with upstream carriers, coordinating BGP blackhole routes to shed the traffic before it ever reached our edge. And I was the one building the automation layer to make that repeatable — because that’s what I do. Blame the sin, not the sinner: I grew up doing things like this.

Small side note: any vendor watching this would sharpen the knife and propose the obvious solution — “Let’s scale up.” A $1M upgrade, 90-day lead time, and someone on their side would celebrate that number at the end of the quarter. It’s like walking into a fancy Italian restaurant, ordering the most expensive entrée — pasta, naturally — and the moment it arrives, remembering you’re celiac. You tell the waiter the story, and the solution offered is: “I can make you another plate.” Double the charge. And still pasta.

Scaling up the firewall wasn’t the answer. The traffic needed to be steered before it ever reached the firewall — upstream, at the routing layer. That’s what BGP is for. And BGP is beautiful. It’s a canvas where you can paint routes across the Internet wasteland — communities and blackholes are the brushstrokes. There’s a reason traffic engineers exist as a discipline, and I genuinely love that craft. You’ll never see a firewall vendor offer BGP-level traffic steering as an embedded feature. Why would they?
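For the technically curious, here is the shape of that automation: a minimal sketch, not our production code, assuming an ExaBGP-style setup where a helper process writes announcements to stdout and ExaBGP turns them into BGP updates toward the upstream sessions. The prefix, next-hop, and community values are placeholders; each carrier publishes its own blackhole community, and RFC 7999 reserves 65535:666 as the well-known BLACKHOLE value.

    #!/usr/bin/env python3
    # Minimal RTBH (remotely triggered blackhole) helper, ExaBGP-style.
    # Every value here is an illustrative placeholder.
    import sys
    import time

    BLACKHOLE_COMMUNITY = "65535:666"  # RFC 7999 well-known BLACKHOLE
    DISCARD_NEXT_HOP = "192.0.2.1"     # a next-hop the routers send to Null0

    def announce(victim_ip: str) -> None:
        # ExaBGP reads commands from this process's stdout and converts
        # them into BGP UPDATEs on the configured upstream sessions.
        print(f"announce route {victim_ip}/32 next-hop {DISCARD_NEXT_HOP} "
              f"community [{BLACKHOLE_COMMUNITY}]", flush=True)

    def withdraw(victim_ip: str) -> None:
        print(f"withdraw route {victim_ip}/32 next-hop {DISCARD_NEXT_HOP}",
              flush=True)

    if __name__ == "__main__":
        victim = sys.argv[1] if len(sys.argv) > 1 else "203.0.113.10"
        announce(victim)   # upstream drops traffic to the victim at their edge
        time.sleep(600)    # hold while the attack is being shed
        withdraw(victim)   # restore normal routing afterwards

The whole point of wrapping it this way was repeatability: announcing and withdrawing a blackhole becomes a single command instead of a live routing session typed under pressure.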

That could have been a flashy, brag-worthy story about an expensive solution. I’ve seen people boast, “I got budget approval in less than 3 minutes,” and a few weeks later, they’re traveling to vendor-sponsored summits. Mitigating DDoS via BGP doesn’t earn medals. We didn’t have 90 days. We didn’t even have 90 minutes. So we built it ourselves, with what we had, under fire.

Nothing wrong with any of it. We were well paid, the work was real, and the aggression was relentless — we were just leaning into the weight. It was a cool job. We held that account for two years. Became the subject matter experts in security. Life was good.

And then I caught myself.

The Tribal Knowledge Problem

I was configuring advanced BGP routing policies in production — live, under pressure, on critical infrastructure — and there was no SOP. No runbook. No second pair of eyes that could have done what I was doing. The automation I’d built was good, but the logic behind it lived in my head. If I got hit by a bus — or, more realistically, if I got sick for a week — the whole mitigation capability degraded to “call Stefano.”

That’s when hero mode stopped feeling heroic and started looking like what it actually was: tribal knowledge embedded in my own operation. The same disease I’d diagnosed in the client’s engineering team during Part 1 — the fog, the single points of failure, the organic accretion of knowledge that nobody documented because there was never time — was living in my own shop.

I was the single point of failure I was supposed to be eliminating.

The marbles in the jar weren’t just trust anymore. They were dependency. And dependency without structure is fragility dressed up as reliability.

The Paradox

Here’s the paradox of trust in hyper-growth: the better you perform, the more you get pulled into everything. The more you get pulled into everything, the less sustainable the whole system becomes. You earn credibility by being the person who shows up — and then one day you realize that “showing up” has become a bottleneck, not a strength.

Recognizing it changed how I operated from then on. Every time I fixed something under pressure, I started asking: Can someone else do this tomorrow without calling me? If the answer was no, the fix wasn’t finished.

That shift — from hero to system builder — is what Part 1 was about at the infrastructure layer. But at the SRE and operations layer, it gets personal. Because now you’re not just standardizing other people’s tribal knowledge. You’re standardizing your own.

The Call

A personal note — my wife always says she doesn’t trust we’re actually going on vacation until the night before. Can’t blame her. Pre-vacation weeks are chaotic by nature — and if you’re wired the way I am, they’re also great opportunities to kill single points of failure and transfer tribal knowledge so operations doesn’t need to interrupt family time.

It was a Thursday. The CEO called me himself. “Can you stop by for a coffee?” Sure, when? “I’m available now.”

Over coffee, the problem came out: organic search. He had the two most prestigious SEO agencies in Brazil on retainer — more than 30 people between them working the problem. And still, nothing moved. He’d reached the point where he didn’t know who he could trust anymore.

So he called the security guy.

I made him no guarantees. But I told him I’d take a look at his application from the outside.

The Action

The next day, Friday — the day before vacation — I called him at 11:00. I had a plan.

This was the first time I was crossing domains with this client. I was the security guy. Paid to keep data safe, applications running, bad actors out, and users in. Simple. Now I was looking at page load times and JavaScript bundles.

He was in a rush — so was I, but he didn’t know that — and we had maybe twenty minutes. How do you explain to a CEO that his landing page is loading 6 MB of JavaScript and 9 MB of media, and why that’s killing his business, in twenty minutes? You don’t.

For him, it would have been drinking from a firehose. So I watered it down: I told him I was heading out on vacation and couldn’t oversee this in the timeline he needed.

I could see the frustration in his eyes — honest, completely transparent. He’d been driven into so many bad decisions before, and the business was sitting on a once-in-a-lifetime opportunity. If the saying “if it’s not on Google, it doesn’t exist” were literal, I would have been in a ghost house talking to a ghost. His business had no organic footprint. Every single customer had to type the URL directly into their browser.

My mindset is hardwired to evaluate risk. I told him we could try something, given the very short timeframe. His answer: “Well, since you’re going on vacation, I’m pretty sure you won’t put me at risk. You have the green light to do whatever you need.” He paused. “I trust your work and your judgment.”

There it is. The marble jar, overflowing.

The Execution

For the more technical minds — we’d already diagnosed that his application had a fundamental issue with content delivery and caching. Every single access was a direct hit to the application servers. There was no cache layer. None.

Our planned solution was straightforward: cache static assets on a CDN, apply inline image optimization at the reverse proxy, and adjust JavaScript cache policies so content could be delivered from the edge rather than hammering the origin servers.
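As a sketch of what that looks like at the proxy layer, taking nginx as the example reverse proxy, with hypothetical domains, paths, and timings rather than the client’s actual configuration: long-lived Cache-Control headers on static assets, plus a proxy cache so repeated requests never touch the origin. Inline image optimization would hang off the same location block, through whatever module or service the proxy supports.

    # Hypothetical nginx reverse-proxy cache in front of the origin.
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static:100m
                     max_size=10g inactive=7d use_temp_path=off;

    upstream origin_app {
        server 10.0.0.10:8080;        # placeholder application server
    }

    server {
        listen 80;                    # TLS termination omitted for brevity
        server_name www.example.com;  # placeholder domain

        # Static assets: serve from the proxy cache and tell the
        # CDN and browsers to keep them at the edge for a week.
        location ~* \.(js|css|png|jpe?g|gif|svg|woff2?)$ {
            proxy_pass http://origin_app;
            proxy_cache static;
            proxy_cache_valid 200 7d;
            add_header Cache-Control "public, max-age=604800";
        }

        # Everything dynamic still goes straight to the application.
        location / {
            proxy_pass http://origin_app;
        }
    }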

The issue? They didn’t have a CDN account. It wasn’t in our scope. Wasn’t in the planning. Wasn’t in the budget.

So we used the green light. I pulled out my own credit card, created a Cloudflare account on his behalf — at the time, they had six PoPs in Brazil — and the team got to work. Over a few hours, his static objects were finally distributed across the edge. We finished around 10 PM. Everyone stayed.
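A quick sanity check after a change like this (hypothetical URL; CF-Cache-Status is Cloudflare’s standard cache indicator on responses):

    $ curl -sI https://www.example.com/static/app.js | grep -i cf-cache-status
    CF-Cache-Status: HIT

The first request for each object comes back MISS while the edge populates; once you see HIT, that request never touched the origin servers.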

Before leaving, I wrote him an email — in his language, not ours. No jargon, no caveats, no invoices. We didn’t mention billing, scope changes, or the credit card. That wasn’t his problem to carry. The message was simple: we had applied a non-invasive patch. His development and operations teams were untouched. He would see the first hard numbers by Monday evening, and our team would walk his people through everything in person that week.

Then I went home and packed my bags.

What Happened on Monday

Honestly, I don’t remember if I was standing in line at Magic Kingdom or Epcot with my wife and kids. But I was there. Phone off. Just a follow-up from my ops manager: “Relax. I’m here. 100% uptime, everything’s fine. People are saying the application is faster.” The CEO’s reply came mid-morning, simple and short: “Enjoy your time. Thanks. We’ll talk when you come back.”

The meeting after vacation was a breeze. When you can show, not just tell, that performance improved by 60% without touching a single line of application code — that’s the meeting you want to be in. I remember one of AMTI’s managers asking me afterward: “How did you turn ‘it’s not our problem’ into such a wonderful outcome?” I told him, “We just need to listen.”

Blurring the Lines

Part 1 was about earning trust from the outside in — stabilizing infrastructure, removing the scapegoat, letting security emerge from the work. Part 2 is about what happens after the trust is earned: the lines blur.

Security bleeds into operations. Operations bleeds into performance. Performance bleeds into business strategy. In hyper-growth, the neat boxes that job descriptions and org charts promise simply don’t hold. The people who thrive in these environments are the ones who follow the problem wherever it leads — across domains, across comfort zones, across the boundary between “my scope” and “not my problem.”

But — and this is the lesson that took me the longest to learn — following the problem is not the same as being the only one who can. The CDN workaround worked. The DDoS mitigation worked. The trust was real. But every one of those wins carried a hidden cost: it reinforced dependence on a single person rather than building a system that could sustain those outcomes without me.

The real work of Part 2 wasn’t the DDoS automation or the Cloudflare account on my credit card. The real work was recognizing that hero mode is a phase, not a strategy. It’s the spark that earns you the right to lead — but if you stay there, you become the very fragility you set out to eliminate.

The next challenge was encoding all of this — the trust, the cross-domain instincts, the operational knowledge — into something that could survive without any single person holding it together. Something transparent. Something the client could own.

That’s the gift. And it needed wrapping.


Previous: Part 1: “Earning Credibility Before You’ve Earned Authority”

Part 3: “Wrapping the Gift — Transparency and Agency” will be published soon.