Since 2023, I’ve been studying applied AI almost exclusively. I don’t pretend to be a data scientist or ML engineer. Honestly, I don’t think giving up more than twenty years of infrastructure, performance, and security engineering would be smart. I’d end up like a duck: it swims, flies, and walks, but excels at none of them.
It’s impossible not to get caught up in the vibe-coding thing. I’m not here to criticize anyone shipping and prototyping. A few months back, I heard one of the smartest things anyone’s said about AI, from Naval Ravikant. I’ve been listening to him for a few years now, and his takes are consistently good. I don’t remember the exact words, and I’m not going to chase videos or quotes to nail them down, but it was close to this: “There is no disruption caused by AI. The novelty we’re seeing is the abstraction and conversion of human language into computing language.” Brilliant.
I’ve been chewing on several angles and pieces of analysis for a while, and what I keep landing on is that human behavior prevails, no matter what’s happening on the technology side. People polarize. People pick sides, pick a jersey, and then, out of nowhere, pick an “AI API provider” to call “mine.”
The swing. The love-it-or-hate-it. The old line that there are no atheists on a falling plane, or when money gets tight. Plenty of people out there are criticizing “Anthropic’s lobotomy” (one week it’s there, the next it’s gone). Plenty more are calling the new OpenAI model “worse than 4o.” People cancel subscriptions because “the model hallucinates,” but at the end of the day, they still want AI to solve things they can’t solve themselves.
The other day, I was on a call with the CFO of a law firm, and she asked me a question: “Is there a way for AI to actually be intelligent?” She doesn’t come from tech, and that’s perfectly fine. It’s a good question. She’s a user. As a consumer, she’s buying a product that claims to deliver something she hasn’t been able to extract. Well, that’s a marketing thing.
Imperfect by design
Humans are far from perfect. We want an AI that mimics a human, so how can we expect it to be perfect? It gives me the chills when I see someone bragging that they built an AI setup that “codes for me while I sleep.”
Gen AI is heuristic. What does that mean? It means it is built to propose a path toward a solution, not to own the problem for you. There are many valid paths to the same result, and AI can help surface one faster. That is not so different from the way the abacus evolved into the calculator, then into the RPN calculator, then into Excel, then into the geeks’ beloved pandas, and now into the genuinely ingenious NVIDIA cuDF (groundbreaking territory for what is coming, no doubt). Excel gave math more reach, but it did not create mathematical judgment. It can abstract a good chunk of the work, but it will not decide what problem is worth solving, whether your assumptions are sound, or whether the answer makes sense in the real world.
In the last decade, I’ve had the pleasure of developing a few projects for my alma mater in Brazil, and along the way I got into an interesting conversation about a close friend who had a hard time defending his doctoral thesis on a heuristic algorithm. The fact that each run produced “different results” was read by some committee members as a flaw. Math is deterministic. Most computer programs, at their core, are and should be deterministic, always returning the same result. Think 1+1+1+1 equals 4, the same as 2+2, the same as 3+1, the same as (-2)+6. 4 is 4.

The way you mow your lawn is not. There’s no playbook for which path you should take. You might have preferences, you might have ideas, and you should have the good sense to understand that if it’s raining, it’s very likely not the best moment. AI can help if you show it a blueprint and ask, “Which is the most efficient pattern? I own these and those machines.” But how YOU’LL do it, that’s up to you.
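To make the contrast concrete, here’s a toy sketch (the names and the grid-walk heuristic are mine, purely illustrative): a deterministic sum that always lands on 4, next to a seeded random walk over a lawn-like grid that always achieves the same end state, full coverage, but takes a different path depending on the seed.

```python
import random

def deterministic_total(xs):
    # Same input, same output, every run: 1+1+1+1 == 2+2 == 4.
    return sum(xs)

def heuristic_mow(rows, cols, seed=None):
    # A toy "mow the lawn" heuristic: a random walk that prefers
    # unvisited cells until every cell has been covered. Different
    # seeds give different valid paths to the same end state --
    # heuristic in the path, deterministic in the result.
    rng = random.Random(seed)
    seen, path = {(0, 0)}, [(0, 0)]
    x = y = 0
    while len(seen) < rows * cols:
        moves = [(x + dx, y + dy)
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= x + dx < cols and 0 <= y + dy < rows]
        unvisited = [m for m in moves if m not in seen]
        # Prefer fresh grass; backtrack over mowed cells if boxed in.
        x, y = rng.choice(unvisited or moves)
        seen.add((x, y))
        path.append((x, y))
    return path
```

Seeding the generator is what makes even the heuristic reproducible for a given seed, which is exactly the argument a thesis committee wants to hear.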
Human augmentation, and where the value lives
My daughter, Maria Pennacchi Schotten, just hit a really nice milestone: her first permanent piece of art is going up at her middle school. She built a Rubik’s Cube mosaic with more than a thousand cubes. It was a great project we designed and developed together. One of the challenges was finding the right ratio for an animal portrait using only the six available colors, so it would have depth instead of reading as a flat drawing. She ended up working with five. Green didn’t make the cut.
So we spent a few hours putting together a ComfyUI flow and then downsampling the image, doing palette reduction and dithering, all in Python. Maria doesn’t code, but she was right next to me, calling the shots. She had a clear picture of what she wanted.
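Palette reduction with dithering is easy to sketch. The snippet below is a minimal, pure-Python illustration, not our actual ComfyUI flow: it assumes five Rubik’s-cube sticker colors as RGB values (my approximations) and uses plain Floyd-Steinberg error diffusion to map each pixel to the nearest sticker color.

```python
# Five sticker colors (green didn't make the cut). RGB values are
# my rough approximations, not official cube colors.
PALETTE = {
    "white":  (255, 255, 255),
    "yellow": (255, 213, 0),
    "red":    (196, 30, 58),
    "orange": (255, 88, 0),
    "blue":   (0, 81, 186),
}

def nearest(color):
    # Squared Euclidean distance in RGB; simple and good enough here.
    return min(PALETTE.values(),
               key=lambda p: sum((a - b) ** 2 for a, b in zip(p, color)))

def dither(pixels):
    # Floyd-Steinberg: quantize each pixel to the palette and push the
    # quantization error onto its right and lower neighbors, so large
    # areas keep their average tone instead of reading flat.
    h, w = len(pixels), len(pixels[0])
    buf = [[list(px) for px in row] for row in pixels]  # mutable floats
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = buf[y][x]
            new = nearest(tuple(old))
            out[y][x] = new
            err = [o - n for o, n in zip(old, new)]
            for dx, dy, f in ((1, 0, 7/16), (-1, 1, 3/16),
                              (0, 1, 5/16), (1, 1, 1/16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    buf[ny][nx] = [c + e * f
                                   for c, e in zip(buf[ny][nx], err)]
    return out
```

Downsample first (one pixel per sticker, so width and height are multiples of three), then run the dither; each output pixel maps directly to one cube face.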
In the end, I did two more deterministic things: bought the cubes and hit “Ctrl+P.” From there on, it was up to her.
It was something to watch Maria work on each cube as the canvas came to life. The result? You can see it below. I’ll add a YouTube time-lapse later.

AI can preview, compress, translate, simulate, and accelerate. It doesn’t own taste, intent, responsibility, or execution. Sure, it can enhance your capabilities and surface solutions, but you’ll still have to grab the screwdriver yourself and know how to replace the fuse.
My wife used to joke, a few years back: “I’ll get interested in AI when it can change a diaper.” Let’s see where it takes us. For now, I’ll tell you this: AI won’t solve any problem you can’t handle by yourself. At the same time, it can absolutely enhance your capabilities. Maria can pixelate and assemble any artwork she wants. But her work gets more deterministic and efficient when she can use AI to preview her creation.
AI gives you a path. It does not own the judgment.