Context got commoditized. Translation is next.
When my company’s acquisition closed in 2024, I thought about pursuing a psychology degree in the US. The impulse was the same one that drives URE: wanting to understand how things are wired under the hood. My wife shut it down—“Really? You know that’s not going to work”—and she was right, though neither of us fully understood why at the time.
What I was actually chasing wasn’t psychology. It was context.
A few months ago, I grabbed dinner with a friend who’s a senior partner at one of the largest consulting firms in the world. His company’s India headcount is five times their US presence. He spent a good chunk of the evening talking up how talented Indian data scientists are—and they are.
Then we hit a wall: if India has all that talent, why does AI actually get built in just two places—New York and the Bay Area?
He didn’t have an answer. And honestly? That’s when the conversation got interesting.
There’s something that happens when two people from completely different worlds—neither of us AI specialists—start chewing on a problem together. No jargon to hide behind. No tribal assumptions. Just two guys with different mental models trying to make sense of something that doesn’t quite add up. We went back and forth for an hour, poking holes in each other’s theories, building on half-formed ideas. The kind of conversation you can’t plan—it just happens when curiosity meets the right conditions.
We landed in the same place: lack of context.
The Layers of Context
What do we mean by “context”? Break it down like infrastructure:
Product context — “For who and for what.” This lives in the heads of product managers who sit in rooms with American customers, absorbing needs that never get written down.
Cultural context — Communication norms, social assumptions. Small example: opening an email with “Good morning, hope this message finds you well” reads completely wrong in Portuguese. It’s not a translation problem—it’s social wiring.
Feedback loops — Real-time iteration. Dogfooding happens where the builders live.
Economic context — What people will pay for, how they think about value. You have to live in the market to feel this.
That India headcount could execute. Build the model, train the weights, optimize inference. But the specification—what this thing should actually do—was locked inside US geography.
This was a problem with no solution.
Until it wasn’t.
What Just Changed
The first time I used DeepSeek, I was stunned. It wasn’t just the MoE performance—the model understood the American context. I’ve been doing business with Chinese companies for a decade, sourcing hardware for AMTI. I learned early that building common ground takes work; neither side spoke English natively, and neither thought like Americans.
Yet here was a Chinese company that built an LLM that could “speak American.”
People say DeepSeek pulled this off by distilling OpenAI’s outputs. Controversy aside, the mechanism matters less than the outcome: a Chinese company extracted the American juice.
Whether they distilled it from GPT-4 responses, scraped the American internet at scale, or reverse-engineered the vibe from Reddit and Twitter—the result is the same. Context went from tacit to explicit to portable.
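If “distillation” is an unfamiliar term, here are the mechanics in miniature. The classic form trains a small student model to match a teacher’s output distribution; the black-box variant DeepSeek is accused of is even simpler, fine-tuning on the teacher’s generated text. A toy sketch of the classic version, with tiny placeholder networks standing in for real LLMs:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins; in reality both would be large language models.
teacher = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's distribution

for step in range(1000):
    x = torch.randn(64, 32)  # placeholder "prompts"

    with torch.no_grad():  # the teacher is frozen; only its outputs are used
        teacher_logits = teacher(x)

    student_logits = student(x)

    # KL divergence between softened distributions: the student learns
    # the teacher's behavior, not ground-truth labels.
    loss = T * T * F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    )

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Notice what’s absent: the teacher’s training data. Whatever tacit context shaped the teacher’s behavior transfers as a side effect of imitating its outputs, which is exactly how geo-locked context becomes portable.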
Mistral proved it too, from a different angle. The geographic lock on context is broken.
Think about it in infrastructure terms:
- Pre-2023: Context was tacit, geo-locked. Like on-prem, single-datacenter.
- 2023-24: Context became extractable through distillation. Like replication and disaster recovery.
- 2025+: Context is trainable, a commodity. Like multi-region, edge-deployable.
The “American juice” used to require American bodies in American offices having American conversations. Now it’s a training artifact. You can import it.
Why They Still Fail
But let’s not get carried away.
Picture a founder from Argentina—or Algeria, or Australia—building a novel, super-performant AI system designed to disrupt stock trading, targeting firms in the Wall Street corridor.
I’m not saying it’s impossible. I’m saying the odds approach zero.
Companies in New York have been trading with FPGAs and ML since those were novelties. The engineers who are fluent in HFT, in ultra-low-latency systems, in the specific physics of market microstructure—they’re already there. They’ve been there for two decades. They have the co-location. They have the regulatory relationships. They have institutional memory that’s never been written down.
Here’s the question: where would you go for the best lobster? New England or Ohio?
Ohio can get lobster. Cold chain logistics exist. FedEx exists. But the guy standing on the dock in Portland, Maine is eating it hours out of the water. The restaurant in Ohio is fighting physics and economics to serve an inferior product at a higher price.
Context works the same way. Some of it travels well. Some doesn’t.
General cultural context? Commoditized. DeepSeek proved it. American SaaS product patterns? Documented, scrapable, trainable.
But domain context—the kind that lives inside HFT firms, regulatory bodies, and the muscle memory of specialists who’ve been doing this for twenty years—that still has geographic gravity. You can’t distill a lobbyist relationship. You can’t scrape microsecond latency advantages. That context isn’t sitting on the internet waiting to be slurped.
DeepSeek can speak American. DeepSeek cannot trade like a Wall Street quant desk.
The Fourth Layer
Remember the psychology degree?
Recently I had a conversation with a close friend—someone with a degree in anthropology and a master’s in behavioral psychology. She was curious about how AI actually works, and amazed at how much it had already streamlined her daily life.
I’m no specialist, but I walked her through my mental model: data, information, context. Three layers.
I was feeling pretty good about myself.
Then she added a fourth layer.
Translation.
Not language translation. Not Whisper converting speech to text. Something deeper: the translation of context back to the user, in a form they can actually absorb.
It stopped me cold.
Context isn’t just about what the model understands. It’s about how the model delivers that understanding to a specific person, in their frame, in their language, in their mental model. The “American juice” isn’t just comprehension—it’s output calibration.
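A crude way to make that concrete in code: translation is a function of two arguments, the signal and the receiver. Everything below is illustrative; `call_llm` is a hypothetical stand-in for whatever chat-completion API you use, and the receiver profiles are invented:

```python
from dataclasses import dataclass

@dataclass
class Receiver:
    background: str   # what they already know
    register: str     # the vocabulary they think in
    goal: str         # what they want the answer *for*

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in: wire this to any chat-completion API."""
    return f"[model response to]\n{prompt}\n"

def translate(signal: str, receiver: Receiver) -> str:
    """Re-express the same signal inside the receiver's frame."""
    prompt = (
        f"Answer to convey: {signal}\n"
        f"Reader background: {receiver.background}\n"
        f"Use a {receiver.register} register, oriented toward "
        f"their goal: {receiver.goal}."
    )
    return call_llm(prompt)

# Same signal, three receivers, three different artifacts.
signal = "The two results are equivalent under a change of reference frame."
for r in (
    Receiver("theoretical physicist", "formal, mathematical", "extend the result"),
    Receiver("graduate researcher", "technical but informal", "build on it quickly"),
    Receiver("electrician", "hands-on, analogy-driven", "decide whether it matters"),
):
    print(translate(signal, r))
```

The model call is the cheap part; knowing a person well enough to fill in the `Receiver` fields is the part that still lives with humans.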
This is probably old news to the AI labs. But for me, sitting in my living room getting schooled by an anthropologist, it reframed everything.
Data. Information. Context. Translation.
Four layers. And we’ve only just commoditized the third.
Same Signal, Different Value
Imagine Stephen Hawking had access to all the AI capacity available today. Imagine he could craft a precise prompt, drawing on his entire life’s work. Imagine multi-billion-step reasoning chains, heuristic pipelines, the full weight of modern AI infrastructure processing his question.
The output would be extraordinary. No question.
Now imagine that somewhere among eight billion people, someone else—a different theorist, a scholar of his work, a 25-year-old Princeton researcher, an electrician on another continent—writes the same prompt. And gets the same answer.
Even a stopped clock is right twice a day.
The improbable part isn’t writing the same prompt. It’s receiving the same answer. And we’re heading toward that world—models are converging, outputs are standardizing, the generation gap is shrinking.
So what does that answer mean to each of these people?
To Hawking, it’s a tool. To the Princeton researcher, it’s a shortcut. To the electrician, it might be noise—or it might be the spark that changes everything. Same signal. Radically different value.
This is what I mean: the pipeline is only half the equation. The AI-generated signal needs translation to create real value. Observing culture. Understanding behavior and habits. Recognizing there’s no single source of truth in life. Recognizing that value only exists in the mind of the receiver.
Being foreign-born in the US is a blessing for exactly this reason. Your “axioms” crack within the first week. Everything you held as true reveals itself as what it always was: a consensus opinion, locally optimized.
Context got commoditized. Translation is next. And translation, unlike context, might never fully escape the human loop.
What’s Still Yours
Amazon laid off 16,000 people last week. The “AI is coming for your job” fear is constant, and it’s not irrational.
But here’s what the fear misses: pipelines optimize for prediction. They don’t optimize for meaning. If your value is fungible knowledge—context that can be scraped, distilled, commoditized—then yes, you’re exposed.
But if you have agency in the translation layer? If you’re the one who takes the signal and makes it mean something to a specific person, in a specific moment, for a specific purpose?
That’s the part the pipelines can’t predict. That’s still yours.