Last Friday, the White House ordered every federal agency to stop using Anthropic products within six months. The Defense Secretary designated the company a “supply chain risk to national security” — a label normally reserved for foreign adversaries like Huawei or Kaspersky.

Anthropic’s crime: they refused to remove two safety guardrails from Claude before deploying it on classified Pentagon networks. No AI for mass domestic surveillance of American citizens. No fully autonomous weapons without human oversight.

That’s it. Two red lines. And for holding them, Anthropic is now being treated like a hostile foreign actor by the government of the country where it was founded, where it employs thousands of people, and where it just committed $50 billion to build data centers on American soil.

The Timeline

The sequence matters.

The Pentagon wanted Claude on classified networks. A $200 million contract. Anthropic engaged in good faith — months of negotiation where the company showed flexibility on scope, deployment architecture, and operational constraints. The sticking point was never about whether Claude would serve defense. It was about what the contract language would permit without oversight.

On Tuesday, February 24, the Defense Secretary met Dario Amodei and delivered an ultimatum: comply by Friday at 5:01 PM or face consequences. Amodei’s response, published Thursday: the company “cannot in good conscience” accept the terms. The final contract language, Anthropic said, “was paired with legalese that would allow those safeguards to be disregarded at will.”

The contradiction in the government’s posture was absurd on its face. The supply chain risk designation labels Anthropic as a threat. The simultaneous invocation of the Defense Production Act labels Claude as essential to national security. You can’t have both. Either the technology is dangerous and should be contained, or it’s essential and should be secured. Designating the same company as both isn’t policy — it’s coercion with a legal veneer.

Friday arrived. The deadline passed. The executive order dropped. Anthropic announced it will challenge the designation in court.

Why I Read This as a CISSP

There is no Congressional legislation governing AI in national security contexts. None. The DoD sets its own policy limits on autonomous weapons and can change them unilaterally. Anthropic was being asked to trust policy promises that carry zero legal weight.

When you deal with operational security, compliance, and data policy in sovereign, air-gapped, and highly restricted environments — and I did for twenty years at AMTI — you learn one thing fast: anything not written precisely into the contract becomes no man's land, and no man's land always becomes leverage against the weaker party.

The contract was $200 million against Anthropic’s $14 billion in annual revenue — a rounding error. But the supply chain risk designation isn’t a contract dispute. It weaponizes the entire defense contractor ecosystem against a single domestic company. Every firm that does business with the DoD now has a reason to avoid Anthropic.

That’s not market competition. That’s economic warfare against a company building on your own soil.

Apple Drew This Line Before

The last time a company showed this kind of courage was February 2016.

After the December 2015 San Bernardino terrorist attack — 14 killed, 22 injured — the FBI demanded Apple create software to unlock the shooter’s iPhone 5C. Tim Cook refused. The required software would function as a master key — once created, it could be turned against millions of users.

The FBI ultimately paid a third party roughly $900,000 to break into the phone. They found no actionable intelligence on it.

The precedent Apple fought to prevent was never set: no court compelled a company to engineer a backdoor into its own security architecture. Both companies faced enormous government pressure to weaken their safety posture. Both were accused of obstructing national security. Both refused. Apple was vindicated.

History has a pattern here.

I Have the Receipts — Brazil Tried This

I lived this story.

In 1984, Brazil’s government passed the Lei da Informática — the Reserva de Mercado — a policy that blocked foreign computer imports and forced any technology company operating in Brazil to share its intellectual property with the government. The goal was technological sovereignty.

The result was the opposite. International companies left. Academic partnerships with foreign institutions collapsed. R&D dried up overnight — nobody was going to develop anything in a market where the government could claim your work. The free market didn’t just shrink. It was destroyed in a few years. What remained was a closed ecosystem of domestic clones running on reverse-engineered hardware, falling further behind the rest of the world with every passing quarter.

When Brazil finally reopened the market in the early 1990s, it was too late. Foreign companies came back with a decade of advancement that domestic players couldn’t match. The collapse was near-total. Virtually every national technology company was wiped out. The few survivors were absorbed by banks — Cobra by Banco do Brasil, Itautec by Itaú — and even those eventually stopped making computers.

Thirty-five years later, the damage is permanent. Brazil has zero domestic chip fabrication. Zero global hardware brands. Zero frontier AI companies. The Reserva de Mercado didn’t protect the industry. It buried it.

That’s what happens when governments try to dictate technology by force. Whether through protectionism (Brazil 1984), compelled backdoors (FBI 2016), or coerced removal of safety guardrails (Pentagon 2026) — the pattern is the same.

The Nuance — “Everybody Spies” and Why This Is Different

I need to be honest about something.

Days before the Pentagon standoff, Anthropic accused Chinese AI labs — DeepSeek, Moonshot AI, MiniMax — of industrial-scale distillation attacks: 24,000 fake accounts, over 16 million exchanges with Claude to extract capabilities for competing models.

I wrote about this in Everybody Spies, and I wasn’t particularly kind. Anthropic settled a $1.5 billion copyright lawsuit in September 2025 over training data. The company trained on the open internet and then cried foul when others learned from its outputs. The Keynesians will forgive me, but in a free market, that’s the game. I said as much.

But the Pentagon situation is categorically different.

The distillation debate is about market competition and intellectual property — ugly, maybe, but that’s the arena. Companies compete, companies litigate, companies adapt. The Pentagon standoff is about state coercion: a government punishing a domestic company for maintaining safety principles that the company itself developed, published, and stood behind consistently.

We’re not in the market arena anymore. We’re watching the referee pull a gun on a player who refused to throw the match.

The Friday Night Special

What happened next tells you everything.

Hours after the White House blacklisted Anthropic on Friday, OpenAI’s CEO told employees at an all-hands that a deal with the Pentagon was emerging. Late Friday night, he announced on X that OpenAI had signed.

The kicker: OpenAI’s deal includes the same safety principles Anthropic had asked for. Prohibitions on domestic mass surveillance. Human responsibility for use of force. The same red lines. CNN couldn’t identify what was actually different between what OpenAI got and what Anthropic wanted.

Eleven OpenAI employees signed an open letter the same night: “The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused. They’re trying to divide each company with fear that the other will give in.”

Anthropic drew the line. Anthropic took the bullet. OpenAI walked in behind them and got the same terms.

That’s not negotiation savvy. That’s picking up the ball after someone else got tackled.

Where I Stand

Anthropic committed $50 billion to build data centers on American soil — custom facilities with Fluidstack in Texas and New York, with more sites to come. 800 permanent jobs. 2,400 construction jobs. Sites coming online throughout 2026. On top of Amazon’s dedicated $11 billion campus in Indiana, already operational. On top of a Google Cloud deal worth tens of billions for up to one million TPUs. They’re not offshoring. They’re not hedging. They’re building the compute backbone of American AI leadership on American dirt.

And the government just designated them a supply chain risk.

I’m a CISSP. I built and ran a cloud infrastructure company for twenty years. Anthropic asked for what every competent security professional would ask for: put the guardrails in the contract, not in the side conversation. They got blacklisted for it.

Brazil 1984: government controlled technology — the industry was destroyed and never recovered. Apple 2016: refused the FBI’s backdoor — vindicated by history. Anthropic 2026: refused to remove safety guardrails — blacklisted, but standing.

Governments don’t build ecosystems. They bury them.

I stand with Anthropic.


Stefano Schotten is a Principal Infrastructure Architect, CISSP, and the founder of URE. The opinions expressed in this article are solely those of the author. Neither the author nor URE has any commercial, employment, or advisory relationship with Anthropic, OpenAI, the US Department of Defense, or any other entity referenced in this piece. All claims attributed to specific organizations or individuals are sourced from their official public statements or credible news reporting.