In Brazil, when advising a customer on endpoint security, there was a mental model we never said out loud. The technical discussion would cover detection rates, false positives, memory footprint — the usual. But underneath it ran a question that never made it into the RFP: who do you want knowing what you’re doing? Russians or Americans?
Kaspersky was the default for most of the market — and not because of ideology. Symantec's Norton had spent years earning its reputation for turning Windows machines into molasses, and McAfee was McAfee. Kaspersky worked. It was lighter, faster, cheaper. The fact that its telemetry flowed to Moscow rather than Langley was a feature, not a bug, depending on which side of the table you sat on.
Nobody framed it that way in polished vendor evaluations. But every security professional in the room understood the arithmetic. You weren’t choosing whether someone had visibility into your environment. You were choosing who.
That was fifteen years ago. The question hasn’t changed.
The Monopoly Record
Intel dominated the x86 processor market for more than two decades — 75 to 85% unit share from 1999 onward, per the FTC’s own complaint. Microsoft Windows commanded 90%+ of desktop operating systems for a decade straight, a finding of fact in the 1999 antitrust ruling. Google owns north of 90% of web search. YouTube is the second-largest search engine on Earth, and it’s owned by the first. iOS and Android share mobile between them like two landlords splitting a city — pick your flavor, but you’re paying rent to one of them.
These aren’t market positions. They’re territorial occupations.
And the pattern repeats everywhere, with different flags. Before the Ukraine war, Yandex owned Russian cyberspace — search, maps, ride-hailing, everything. If you were digital in Russia, you were in Yandex’s territory. China built an entire parallel internet behind the Great Firewall: WeChat, Baidu, Alibaba, Douyin. I spent enough time in that ecosystem as an Android user to find it genuinely disorienting — not because the technology was alien, but because the absence of Google was. You don’t realize how embedded you are in one empire’s infrastructure until you step into another’s.
Every major digital power has built its own sphere. The only thing that changes is the flag on the server rack.
The Consent Decree That Created Google
Here’s a piece of history that doesn’t get connected often enough.
In May 1998, the US Department of Justice filed its antitrust case against Microsoft. The allegation: Microsoft used its Windows monopoly to crush Netscape by bundling Internet Explorer and strangling distribution. The court agreed. The resulting consent decree, entered after the 2001 settlement, restricted Microsoft’s ability to engage in exclusionary practices and ran until May 12, 2011 — and between the filing, the breakup order, and the decree, that was more than a decade of Microsoft operating with one hand tied behind its back.
Four months after the DOJ filed that suit, two Stanford PhD students incorporated a search company in a garage. The company was Google.
This isn’t conspiracy. It’s timing. Microsoft, the most dominant technology company on the planet — a company that had already demonstrated the appetite and the capability to extend into every adjacent market it touched — was legally constrained from aggressive competitive moves during the exact decade when the internet’s center of gravity shifted from the browser to the search bar.
Would Microsoft have built a competing search engine without the decree? Probably. Would they have used their Windows distribution monopoly to bundle it, preinstall it, default it on every machine shipped? Almost certainly. Would Google have grown to 90%+ market share under those conditions?
I have my hypothesis. You have yours.
Now rewind and apply a different lens. Imagine it’s 2002. Yahoo dominates web search. A small, fast-growing competitor called Google is eating their market share by building better relevance algorithms — partly by observing how users interact with existing search results, learning from the territory Yahoo had already mapped.
What if Yahoo had gone to the Senate and testified: “Google is systematically distilling our proprietary search intelligence. Their algorithms are derivative of patterns only visible through our infrastructure. This represents a strategic threat to American digital sovereignty.”
Sounds absurd, right? Yahoo had the market, the data, the infrastructure investment. Google was learning from Yahoo’s outputs — not through espionage, but through the simple act of building a better product in a shared ecosystem. The line between “competitive intelligence” and “capability extraction” has always depended on who’s drawing it, and when.
Change the nouns and it’s on the front page of every tech publication this week.
The Land Grab
NVIDIA — north of $4.5 trillion in market capitalization at the time — acquired Groq’s LPU inference technology for $20 billion in December 2025. The deal was structured as a licensing agreement to navigate antitrust scrutiny, but the substance was clear: NVIDIA hired roughly 90% of Groq’s workforce, took the technology, and neutralized a competitor. Jensen Huang’s largest deal ever, and at that valuation, the price barely registered on the balance sheet.
Then there’s Nebius. A company trading on Nasdaq with a market capitalization that spiked north of $20 billion after it signed a $17.4 billion, five-year GPU infrastructure contract with Microsoft. The Vineland, New Jersey data center — where a significant portion of that contract is supposed to be fulfilled — is still in buildout. The market isn’t valuing what Nebius has built. It’s valuing a future contract. A promise of compute. A few years ago, we’d have called that a letter of intent, shaken hands, and hoped for the best. Today it commands a valuation exceeding the contract that underpins it.
I remember the Trump-China tariff headlines cycling through Bloomberg terminals in 2019 — the “trade war” that was supposed to reshape global supply chains before COVID reshuffled everything anyway. The stakes felt enormous at the time. They were rounding errors compared to what’s being wagered now. This is the grab-the-land era. Not technology bets. Territorial claims. Stakes planted in silicon and power purchase agreements, valued on the assumption that whoever controls the GPU supply controls the next decade of compute.
The Anthropic Disclosure
On February 23, 2026, Anthropic published a detailed disclosure describing what it characterized as “industrial-scale campaigns” by three AI laboratories — DeepSeek, Moonshot AI, and MiniMax — to extract Claude’s capabilities through distillation. The numbers in Anthropic’s blog post were specific: over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts, in violation of Anthropic’s terms of service and regional access restrictions. Anthropic reported that these labs targeted Claude’s “most differentiated capabilities: agentic reasoning, tool use, and coding.”
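For readers unfamiliar with the mechanics: “distillation” here means training a student model on a stronger model’s outputs rather than on raw data. When the only access is an API, it reduces to something operationally simple — query the teacher at scale, record the exchanges, fine-tune on them. A minimal sketch of that idea, with illustrative names (this is not any lab’s actual pipeline, and the toy teacher stands in for a real model API):

```python
# Hypothetical sketch of API-based distillation: query a stronger "teacher"
# model, collect prompt/response pairs, and serialize them as fine-tuning
# data for a "student" model. All names here are illustrative.

import json

def collect_teacher_data(teacher, prompts):
    """Query the teacher and record each exchange as a training pair."""
    pairs = []
    for prompt in prompts:
        response = teacher(prompt)  # in practice, a remote API call
        pairs.append({"prompt": prompt, "completion": response})
    return pairs

def to_finetune_jsonl(pairs):
    """Serialize pairs as JSONL, the format most fine-tuning stacks accept."""
    return "\n".join(json.dumps(p) for p in pairs)

# Toy teacher standing in for a frontier model's endpoint.
toy_teacher = lambda p: p.upper()

dataset = collect_teacher_data(toy_teacher, ["write a sort function"])
print(to_finetune_jsonl(dataset))  # one JSONL line per training pair
```

The scale Anthropic alleges — millions of exchanges across thousands of accounts — is just this loop run industrially, with the account rotation doing the work of evading rate limits and detection.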
According to the disclosure, MiniMax ran the largest operation at over 13 million exchanges targeting agentic coding and tool orchestration. Anthropic stated it detected this campaign while it was still active and observed MiniMax pivot within 24 hours of a new Claude model release, redirecting nearly half its traffic to capture capabilities from the latest system. The company attributed each campaign through IP address correlation, request metadata, and infrastructure indicators, and described the distributed access networks as “hydra cluster” architectures — sprawling networks of fraudulent accounts that distribute traffic to avoid single points of failure.
Anthropic’s framing connected the disclosure directly to the export control debate: “Distillation attacks therefore reinforce the rationale for export controls: restricted chip access limits both direct model training and the scale of illicit distillation.” The company also stated that models built through illicit distillation “are unlikely to retain those safeguards, meaning that dangerous capabilities can proliferate with many protections stripped out entirely.”
The disclosure was published alongside similar announcements from OpenAI and Google, during a week when US chip export policy to China dominated the news cycle.
Two weeks before the disclosure, Mrinank Sharma, who led Anthropic’s Safeguards Research Team, resigned. His letter, posted publicly on February 9, was candid: “Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions.” Other researchers departed around the same period — Harsh Mehta and Behnam Neyshabur among them — part of what CNN described as a broader wave of departures across the AI industry, with multiple researchers raising concerns about the pace of development and the tension between safety commitments and commercial pressures.
I’m not questioning whether the distillation happened — Anthropic’s forensics are detailed and technically credible. I’m questioning the framing.
The safety researchers walk out the door. Two weeks later, the company publishes a national security disclosure timed to a news cycle about chip exports. That’s not conspiracy — that’s pattern recognition. Lockheed Martin has been running this play for decades: demonstrate strategic value, identify a foreign threat, align your commercial interests with the national interest. The threats can be real and the alignment genuine. But if you can’t see the pattern, you’re not paying attention.
The Real Pattern
Every espionage thriller ever made hammers the line that the CIA can’t operate on domestic soil. The Patriot Act was sold on the premise that American citizens wouldn’t be subject to mass digital surveillance. Then Snowden showed the world how that worked out in practice — including the part where US intelligence had direct access to European data flowing through American platforms.
GDPR wasn’t born from European privacy idealism. It was born from that revelation.
Microsoft, Google, Meta, Apple — they’ve been gathering data from foreign citizens for decades. TikTok gets threatened with a ban because a foreign adversary might access American user data. Huawei is banned from US telecom infrastructure because of potential backdoors. These concerns are legitimate. But let’s not pretend the flow of surveillance has ever been one-directional. I wrote about this asymmetry in The Entropy of Sovereign AI, and the gap between rhetoric and reality has only widened since.
Now Europe is spending over €10 billion on sovereign cloud infrastructure in 2026 alone, building AI Gigafactories across fifteen EuroHPC hubs, and pushing through the Digital Networks Act and the Cloud and AI Development Act to bridge a €200 billion investment gap. Mistral AI is pouring €1.3 billion into a Swedish data center. The EU isn’t writing position papers anymore. They’re pouring concrete.
And the geopolitical signal from the other side of the Atlantic? In January, the US extracted a head of state from his own capital and called it law enforcement — bypassing eastern-made defense systems without a single American casualty. The same month, the sitting president openly discussed acquiring Greenland — a sovereign European territory — and Denmark responded by deploying troops. Whether you call Venezuela an operation or a kidnapping, and whether Greenland is a negotiation or a threat, depends entirely on which newspaper you read. Nobody disputes the capability being demonstrated.
Those who watched Ford v Ferrari remember Henry Ford II, chest puffed, certain that the Second World War was won in his factories. The production line as geopolitical weapon. The arsenal of democracy.
The factories now are data centers. The production line is GPU clusters. The arsenal is an API endpoint.
Hands on Their Holsters
Mao Zedong wrote that “every Communist must grasp the truth: political power grows out of the barrel of a gun.” He wasn’t being metaphorical. The line came from Problems of War and Strategy, delivered in 1938, at a moment when the question of who controlled the means of force was indistinguishable from the question of who would govern.
The barrel has changed. Swap guns for GPU clusters and the logic holds. The means of production aren’t steel mills anymore — they’re data centers drawing hundreds of megawatts and inference endpoints serving billions of requests. Whoever controls the compute controls the ceiling of what any nation can automate, surveil, or defend.
The US has the GPUs, the cloud platforms, the foundation models, and — until recently — the safety researchers who were supposed to ensure those models didn’t eat the world. What it’s building now is the argument that these assets are sovereign infrastructure deserving of sovereign protection. Anthropic’s distillation disclosure isn’t just a terms-of-service complaint. It’s a filing in the court of national security.
Everyone sees this. Europe, China, Russia, India, the UAE — everyone. The scramble isn’t about who builds the best model. It’s about who controls the territory the models run on. The silicon. The power. The data. The guardrails.
Hollywood has been projecting this for a decade — half the Bond films and three-quarters of the Marvel universe told us the next battlefield is digital. Everyone nodded along. Now the players have their hands on their holsters, and suddenly it’s not fiction anymore.
The question I asked in that Brazilian conference room fifteen years ago hasn’t changed. The Kaspersky-versus-Symantec debate was the same debate wearing different clothes. The technology evolves. The vendors rotate. The flags on the server racks change.
But the question underneath — who do you want watching? — that one is permanent.
The only honest answer has always been: build your own. Control your inventory. Reduce your dependency on any single foreign supplier for short and medium-term operations. I did exactly that on a sovereign cloud project years ago, and the results were counterintuitive to everyone except the people running the infrastructure: security levels went up, operational costs went down, and the proof accumulated over years, not quarters.
Sovereignty isn’t a policy paper. It isn’t an act of parliament. It isn’t a press release timed to a news cycle.
It’s an inventory you control, a supply chain you can see end to end, and the operational discipline to maintain both when the ground shifts.
Everything else is theater.
Stefano Schotten is a Principal Infrastructure Architect and the founder of URE. The opinions expressed in this article are solely those of the author. Neither the author nor URE has any commercial, employment, or advisory relationship with Anthropic, OpenAI, NVIDIA, DeepSeek, or any other AI company referenced in this piece. All claims attributed to specific companies are sourced from their official public statements.