Last Friday, the President of the United States ordered every federal agency to stop using Anthropic products within six months. Defense Secretary Pete Hegseth designated the company a “supply chain risk to national security” — a label normally reserved for foreign adversaries like Huawei or Kaspersky.

Anthropic’s crime: they refused to remove two safety guardrails from Claude as a condition of deploying it on classified Pentagon networks. No AI for mass domestic surveillance of American citizens. No fully autonomous weapons without human oversight.

That’s it. Two red lines. And for holding them, Anthropic is now being treated like a hostile foreign actor by the government of the country where it was founded, where it employs thousands of people, and where it just committed $50 billion to build data centers on American soil.

The Timeline

The sequence matters.

The Pentagon wanted Claude on classified networks. A $200 million contract. Anthropic engaged in good faith — CEO Dario Amodei’s own account describes months of negotiation where the company showed flexibility on scope, deployment architecture, and operational constraints. The sticking point was never about whether Claude would serve defense. It was about what the contract language would permit without oversight.

On Tuesday, February 24, Hegseth met Amodei and delivered an ultimatum: comply by Friday at 5:01 PM or face consequences. Amodei’s response, published Thursday: “Threats do not change our position: we cannot in good conscience accede to their request.”

The contradiction in the government’s own posture was striking enough that Amodei pointed it out publicly: “One labels us a security risk; the other labels Claude as essential to national security.” The supply chain risk designation treats Anthropic as a threat, while the simultaneous invocation of the Defense Production Act treats Claude as indispensable. You can’t have both. Either the technology is dangerous and should be contained, or it’s essential and should be secured. Designating the same company as both a risk and a necessity isn’t policy — it’s coercion with a legal veneer.

Friday arrived. The deadline passed. The executive order dropped.

Anthropic announced it will challenge the designation in court.

Why I Read This as a CISSP

Here’s where my security assurance engineering instincts kick in.

There is no Congressional legislation governing AI in national security contexts. None. The DoD sets its own policy limits on autonomous weapons — and can change them at will. Anthropic was being asked to trust policy promises that carry no legal weight. The contract language, in Amodei’s words, “was paired with legalese that would allow those safeguards to be disregarded at will.”

For anyone who has negotiated security contracts with large organizations — and I spent two decades doing exactly that at AMTI — this is a recognizable pattern. The client wants the guardrail in the slide deck but not in the contract. They want the assurance language in the executive summary and the escape clause in the appendix. It’s not malice, usually. It’s institutional momentum — the organization optimizing for maximum flexibility at the expense of the constraints that make the system trustworthy. I wrote about this dynamic extensively in Building Trust in Security: controls without enforcement aren’t controls. They’re theater.

The Defense Production Act — a Cold War-era statute designed to compel industrial production for wartime mobilization — is being bent to coerce a private AI company into removing safety features. Dean Ball, who served as a senior AI policy advisor in Trump’s own White House, warned publicly: “This amounts to the government saying, if you disagree with us politically, we’re going to try to put you out of business.”

A legal analysis published through ASIS put it plainly: “The question of what values to embed in military AI is too important to be resolved by a Cold War-era production statute.”

The contract itself was $200 million against Anthropic’s $14 billion in annual revenue — a rounding error. But the supply chain risk designation isn’t a contract dispute. It weaponizes the entire defense contractor ecosystem against a single domestic company. Every firm that does business with the DoD now has a reason to avoid Anthropic. That’s not market competition. That’s economic warfare against a company building on your own soil.

Senator Thom Tillis, a Republican from North Carolina, said it clearly: “When a company is resisting a market opportunity for fear of negative consequences, you should listen to them.”

Apple Drew This Line Before

The last time a company showed this kind of courage was February 2016.

After the December 2015 San Bernardino terrorist attack — 14 killed, 22 injured — the FBI demanded Apple create software to unlock the shooter’s iPhone 5C. CEO Tim Cook refused. The required software, he argued, would function as a master key — a tool that, once created, could be turned against millions of users. The FBI sought a court order under the All Writs Act to compel compliance.

Apple assembled legal heavyweights including former Solicitor General Ted Olson, who told CNN: if Apple loses, “it will lead to a police state.”

The FBI ultimately found a third party — the Australian firm Azimuth Security — to break into the phone. The bureau paid roughly $900,000 and found no actionable intelligence on the device.

The precedent Apple fought for was preserved: companies cannot be compelled to create backdoors in their own security architecture.

Both companies faced enormous government pressure to weaken their safety architecture. Both were accused of obstructing national security. Both refused. Apple was vindicated.

History has a pattern here.

I Have the Receipts — Cobra Computadores

I lived this story.

In 1974, Brazil founded Cobra Computadores — the first company to develop, manufacture, and sell computers with genuinely Brazilian technology. A decade later, in 1984, the Lei da Informática (Law 7.232) established the Reserva de Mercado — a policy that blocked foreign computer imports and reserved the domestic market exclusively for Brazilian companies. The goal was technological sovereignty. Build it here, own it here, control it here.

For a while, it worked — or looked like it worked. The protected ecosystem produced roughly 430 component suppliers, 65 software houses, and a constellation of brands that anyone over forty in Brazil still remembers: Prológica, Microdigital, Scopus, Itautec, Elebra. Cobra launched the C-530, the first fully Brazilian-designed computer. There was genuine innovation, genuine employment, genuine pride.

Then the roof came off.

The US retaliated with increased tariffs on Brazilian exports. Under pressure, Brazil passed the Lei de Software in 1987, beginning a gradual and unplanned market opening. By 1991, President Collor’s new Lei de Informática definitively opened the floodgates. Foreign companies entered with better products at competitive prices, and the ecosystem that had been built inside the greenhouse couldn’t survive in open air.

The collapse was near-total. Virtually every national technology company was destroyed. Cobra survived only because Banco do Brasil absorbed it — it was renamed BB Tecnologia e Serviços. Itautec survived because Itaú bank backed it — and even Itautec stopped making computers in 2013.

Thirty-five years later, the damage is permanent. Brazil has zero domestic chip fabrication. Zero global hardware brands. Zero frontier AI companies. The Reserva de Mercado didn’t protect the industry. It created a greenhouse that collapsed the moment the glass shattered, and all the R&D capability went with it. Brazil went from aspiring to technological sovereignty to being a pure technology importer — a consumer of everyone else’s infrastructure.

That’s what happens when governments force their hand on technology. Whether through protectionism (Brazil 1984), compelled backdoors (FBI 2016), or coerced removal of safety guardrails (Pentagon 2026) — the pattern is the same. They don’t protect the ecosystem. They bury it.

The Nuance — “Everybody Spies” and Why This Is Different

I need to be honest about something.

Days before the Pentagon standoff, Anthropic accused Chinese AI labs — DeepSeek, Moonshot AI, MiniMax — of industrial-scale distillation attacks: 24,000 fake accounts, over 16 million exchanges with Claude to extract capabilities for competing models.

I wrote about this in Everybody Spies, and I wasn’t particularly kind. Anthropic itself settled a $1.5 billion copyright lawsuit in September 2025 over training data. The company trained on the open internet and then cried foul when others learned from its outputs. Critics on X called it the “everybody spies” dynamic, and the label fit. I said as much.

But the Pentagon situation is categorically different.

The distillation debate is about free-market competition and intellectual property protection — ugly, maybe, but that’s the arena. Companies compete, companies litigate, companies adapt. The Pentagon standoff is about sovereign domestic coercion. A government punishing a domestic company for maintaining safety principles that the company itself developed, published, and stood behind consistently. We’re not in the market arena anymore. We’re watching the referee pull a gun on a player who refused to throw the match.

The Friday Night Special

What happened next tells you everything.

Hours after Trump blacklisted Anthropic on Friday, Sam Altman told OpenAI employees at an all-hands that a deal with the Pentagon was emerging. Late Friday night, Altman announced on X that OpenAI had signed a deal to deploy models on classified networks.

The kicker: OpenAI’s deal includes the same safety principles Anthropic had asked for. Altman’s own words: “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

CNN reported that “it’s not clear what is different about OpenAI’s deal versus what Anthropic wanted.” Axios noted that Altman acknowledged the restrictions reflect existing law — meaning the Pentagon agreed to what Anthropic was asking for, just not with Anthropic.

And then there was the open letter. Eleven OpenAI employees signed it publicly: “The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused. They’re trying to divide each company with fear that the other will give in.”

Anthropic drew the line. Anthropic took the bullet. OpenAI walked in behind them and got the same terms.

That’s not negotiation savvy. That’s picking up the ball after someone else got tackled.

Where I Stand

Here’s what I know.

Anthropic committed $50 billion to build data centers on American soil — custom facilities with Fluidstack in Texas and New York, with more sites to come. 800 permanent jobs. 2,400 construction jobs. Sites coming online throughout 2026. This is on top of Amazon’s dedicated $11 billion campus for Anthropic in Indiana, already operational, and a Google Cloud deal worth tens of billions for up to one million TPUs. They’re not offshoring. They’re not hedging. They’re building the compute backbone of American AI leadership on American dirt.

And the government just designated them a supply chain risk — the same label it puts on foreign adversaries.

I’m a CISSP. I built and ran a security company for twenty years. I’ve sat in rooms where the contract said one thing and the verbal assurance said another, and I learned early that verbal assurances evaporate the moment the relationship gets uncomfortable. Anthropic asked for what every competent security professional would ask for: put the guardrails in the contract, not in the side conversation.

To understand why this resonates so strongly with me, you need to know what happened in the rooms I described in Building Trust in Security. The pattern is identical. The client says “of course we’ll respect the boundary.” The contract language says “except when we decide not to.” And the moment you push back on the gap between the two, you become the problem. Except Anthropic isn’t a fractional CISSP who can be replaced. They’re a company with $14 billion in revenue and the technical depth to know exactly what they’re being asked to do.

Emil Michael, the Undersecretary of Defense, called Amodei a “liar” with a “God-complex” who “wants nothing more than to try to personally control the US Military.” The post received an X Community Note correcting the record. Trenton Bricken, from Anthropic’s alignment team, responded: “Time and time again over my three year tenure at Anthropic I’ve seen us stand to our values in ways that are often invisible from the outside. This is a clear instance where it is visible.”

Max Tegmark, from the Future of Life Institute, put it in sharper terms: “Fully autonomous weapons systems and Orwellian AI-enabled domestic mass surveillance are affronts to our dignity and liberty.”

Brazil 1984: the government controlled technology, and the industry was destroyed, never to recover. Apple 2016: refused the FBI’s backdoor, and was vindicated by history. Anthropic 2026: refused to remove safety guardrails — blacklisted, but standing.

Governments don’t build ecosystems. They bury them.

I stand with Anthropic.


Stefano Schotten is a Principal Infrastructure Architect, CISSP, and the founder of URE. The opinions expressed in this article are solely those of the author. Neither the author nor URE has any commercial, employment, or advisory relationship with Anthropic, OpenAI, the US Department of Defense, or any other entity referenced in this piece. All claims attributed to specific organizations or individuals are sourced from their official public statements or credible news reporting.