AI, AI, AI

03/21/2026 · 9 min read

AI · Software Engineering · Technology

Note: this post was co-authored with Claude Code.

Since December 2025, I haven't written a single line of code. Not a function, not a config file, nothing.

And I'm more productive than ever.

Something has fundamentally changed.

A Brief History

  • 2010s — Deep learning and neural networks. AlphaGo beat the world champion in 2016. Impressive, but it felt far away from the kind of software most of us were building.
  • 2020 — GPT-3. It could write essays, generate code, hold a conversation. Not always well, but well enough that something felt different.
  • 2021 — GitHub Copilot. AI inside the editor for the first time, suggesting entire functions. I started relying on it pretty quickly.
  • Nov 2022 — ChatGPT. 100 million users in 60 days. My parents were asking me about it. My dentist brought it up. That had never happened with any piece of software before.
  • 2023 — GPT-4, Claude, Gemini, Mistral. A serious new model every few months, each making the previous one feel outdated.
  • 2024–2025 — Models stopped just answering questions and started actually doing things. Running code, using tools, executing multi-step tasks on their own.
  • 2026 — Fully autonomous AI agents are the norm. Voice interfaces mean you don't even need to type — and we're only in March.

AI in Software Engineering: My Journey

It started with basic tab completion — your IDE suggesting the next word based on what you'd typed. Useful, but nothing special.

Then the large language models arrived — and suddenly the tool wasn't just finishing your sentence, it was understanding what you were trying to build.

Then I started using Claude and ChatGPT for everything — generating boilerplate, understanding unfamiliar codebases, refactoring. I was still writing most of the actual code, but the boring parts were handled. I still needed to choose the right context files and tell it exactly how to do things.

That changed in mid-2025. Something clicked: the agents got genuinely capable. I'd describe what I wanted in plain language and an agent would write the code, write the tests, handle the edge cases, and ask me to review. My job became telling it what I wanted and checking the output. Sometimes I don't even care how it implemented something. There are parts of my own codebase I haven't read.

For real production work, I lean on spec-driven development: I write out what I want built, feed it to the agent through the Superpowers plugin, and let it go. It works well even on real production codebases with actual product features, not hello-world projects. This is agentic engineering, not vibe coding.
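To make "spec" concrete, here's the shape of one. The feature, file names, and acceptance criteria are all hypothetical, invented purely for illustration:

```
Feature: export invoices as CSV

Context:
- Invoices live in billing/models.py (Invoice model).

Requirements:
1. Add GET /invoices/export?from=YYYY-MM-DD&to=YYYY-MM-DD returning CSV.
2. Columns: id, customer, issued_at, total. Stream the response.
3. Staff users only; everyone else gets a 403.

Acceptance:
- Tests cover the date filtering and the 403 path.
- The existing test suite still passes.
```

The agent takes it from there: plans, implements, runs the tests, and comes back for review.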

By December 2025, I stopped writing code entirely.

And in 2026, with the new /voice command in Claude Code, I don't even type anymore. Space to speak, enter to send.

Andrej Karpathy described the same thing happening to him. He discussed the shift in programming and "token throughput", observing that the fundamental nature of software engineering has transformed. Humans are no longer bottlenecked by typing speed or compute, but by their own ability to delegate tasks to AI.

"I kind of went from 80/20 of writing code by myself versus delegating to agents... I don't think I've typed like a line of code probably since December."

"Code's not even the right verb anymore. I have to express my will to my agents for 16 hours a day."

"You can move in much larger macro actions — it's not just here's a line of code, here's a new function. It's here's a new functionality, and delegate it to agent one."

Mitchell Hashimoto shared a similar experience building Ghostling:

"I didn't write a single line of... anything. Agents wrote 100% of everything you see incl. Nix flakes, CI jobs, etc. I reviewed every line of code manually and constantly nudged the agents in the right direction."

Peter Steinberger put it well in a recent post:

"These days I don't read much code anymore. I watch the stream and sometimes look at key parts, but I gotta be honest - most code I don't read. I do know where which components are and how things are structured and how the overall system is designed, and that's usually all that's needed."

Claude Code Blows My Mind. Daily. Sometimes Hourly.

Bugs that used to take hours now take minutes. Features that would've taken days or weeks are now done in hours.

I don't need to spell out steps anymore. I just describe what I want and it figures out how to get there on its own.

And I can't keep up with what it can do, which rarely happened to me before; I'm pretty good at staying on the bleeding edge. Every day, every hour, there's something new. And that's just Claude Code. The same is happening with ChatGPT, Gemini, OpenClaw, and a million other things all at once.

It also cured my procrastination entirely. I'm in it every day, for hours at a time. I'm more than addicted at this point.

I've genuinely never experienced anything like it.

Is Traditional Software Engineering Dead?

Naval recently talked about this and his take is the one I agree with most. He put it simply:

"AI won't replace programmers, but rather make it easier for programmers to replace everyone else."

Software engineers, he says, are now "among the most leveraged people on Earth." With a fleet of AI, a single programmer can be 5 to 10 times more productive. The best will achieve "supernormal" outcomes — "there are programmers now who are going to come up with ideas that can replace entire industries."

And thinking in code still matters — while AI lets non-programmers build in plain language ("vibe coding"), engineers have a massive advantage because they understand what's underneath. "All abstractions are leaky" — when AI makes mistakes or builds something broken, it takes someone who actually understands the system to fix it.

And at the frontier — novel architectures, high-performance code, edge cases the model hasn't seen — "a good engineer operating at the edge of knowledge of the field is going to be able to run circles around vibe coders."

AI, in his view, is just "a massive new layer in the abstraction stack that programmers have always used."

I agree with all of this. All you need now is an idea. The cost to build it is close to zero.

As Naval said, "I really do think this is a golden age for programming."

Terence Tao's Take

Tao recently spoke about what he sees as a "cognitive Copernican revolution." Just as humanity had to accept that Earth isn't the center of the universe, we're now having to accept that human intelligence isn't the only valid form of intelligence — and that these new forms have fundamentally different strengths and weaknesses from our own.

One of his sharpest observations: AI has driven the cost of idea generation down to almost zero. That sounds like pure upside — but it creates a flood of generated theories, including a lot of noise. The new bottleneck isn't generating ideas. It's verifying them. Human peer review systems are already becoming overwhelmed, and science will need entirely new structures to filter signal from the torrent of AI-generated hypotheses.

There's a subtler cost too. Before AI, manually searching for a paper in a library often led to accidental discoveries: stumbling across a result you weren't looking for that turned out to matter enormously. AI delivers exactly what you ask for, instantly. That efficiency is real. But it destroys the creative randomness that has quietly driven so much unexpected scientific progress. Tao estimates AI has roughly doubled his output on auxiliary tasks, but he's watching carefully for what gets lost in the optimization.

His conclusion isn't that AI will replace mathematicians anytime soon. He sees a long era of hybrid collaboration — smart humans and extremely powerful AI tools, each covering the other's blind spots.

Bubble, or Just Getting Started?

Howard Marks wrote a memo on this and his take is more nuanced than a simple bear case. On the technology, he's actually bullish — he thinks AI's potential is more likely underestimated than overestimated, and that the infrastructure spending is validating itself through real revenue growth. The big established hyperscalers, he says, are unlikely to be ruinously overvalued. The AI startups with multi-billion-dollar price tags, though — he calls those lottery tickets. Most will be worthless. A few will win big.

One data point he highlights: Goldman Sachs compared the valuations of the largest TMT stocks today against the Tech Bubble peak in December 1999. The median next-twelve-months price-to-earnings multiple (NTM P/E) back then was 41x; as of December 2025, it's 31x.

His real concern isn't the bubble question. It's what happens to society. He draws a sharp line between the AI of a few years ago — which made workers faster — and what we have now, which actually does the work. That's not labor-saving, it's labor-replacing. And it's happening faster than any previous technological shift. Faster than offshoring. Faster than society can adjust. He's "terribly concerned" about what that means for joblessness, for purposefulness, for what people do when they don't have to work — and doesn't buy the optimist argument that new jobs will simply materialize to replace the old ones.

On the All-In podcast and at his GTC keynote, Jensen Huang points out that demand is already way beyond the handful of big tech companies most people picture — regional clouds, sovereign AI, enterprises, factories. And the economics work differently than people assume: the more expensive, advanced infrastructure actually produces cheaper tokens because of the throughput gains. He's projecting $1 trillion in computing demand through 2027. Beyond that he sees physical AI transforming heavy industry, and healthcare having its own ChatGPT moment within a few years.
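That throughput point reduces to simple arithmetic. Here's a toy sketch in Python; every number is hypothetical, invented only to show the direction of the effect:

```python
# Toy cost-per-token arithmetic. All figures are made up purely to
# illustrate the point: cost per token falls whenever throughput
# grows faster than the cost of the system producing it.

def cost_per_million_tokens(system_cost_usd: float,
                            tokens_per_second: float,
                            lifetime_seconds: float) -> float:
    """Amortized hardware cost per million tokens generated."""
    total_tokens = tokens_per_second * lifetime_seconds
    return system_cost_usd / total_tokens * 1_000_000

LIFETIME = 4 * 365 * 24 * 3600  # assume a four-year useful life

# A hypothetical next-gen system: 3x the price, 5x the throughput.
old = cost_per_million_tokens(1_000_000, 10_000, LIFETIME)
new = cost_per_million_tokens(3_000_000, 50_000, LIFETIME)
print(f"old: ${old:.2f}/M tokens, new: ${new:.2f}/M tokens")
# -> old: $0.79/M tokens, new: $0.48/M tokens (about 40% cheaper)
```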

"You're not going to lose your job to AI. You're going to lose your job to somebody using AI."

— Jensen Huang

He's also pushed back on the idea that software engineering as a career is dying. His distinction: the purpose of a software engineer and the task of coding are related, but not the same. In a recent interview: "I wanted my software engineers to solve problems. I didn't care how many lines of code they wrote... Solving problems, working as a team, diagnosing problems, evaluating the result, looking for new problems to solve, innovation, connecting dots — none of that stuff is gonna go away."

Ben Thompson at Stratechery has the most structural argument against the bubble thesis. His framework: three LLM inflection points, each one systematically addressing the flaws of the previous.

ChatGPT made LLMs readable and useful — but they hallucinated, and you had to actively manage the output.

o1 made LLMs reliable and essential — reasoning models self-evaluate before answering, catching mistakes internally so you don't have to.

Agents completed the picture. An agent directs the model, checks if the result actually works, and tries again if it doesn't — without humans in the loop. In paradigm one, an LLM generated code. In paradigm two, it thought about the code. In paradigm three, an agent verifies the code runs.
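To make that third paradigm concrete, here's a minimal sketch of the generate-verify-retry loop, with a stubbed generate() standing in for the model call. It's an illustration of the pattern Thompson describes, not any particular product's harness:

```python
# Minimal generate -> run -> retry loop. generate() is a stub standing
# in for a real model call; everything here is illustrative only.
import subprocess
import sys
import tempfile

def generate(prompt: str) -> str:
    """Stand-in for an LLM call that returns candidate Python source."""
    return 'print("hello")'

def agent(task: str, max_attempts: int = 3) -> str | None:
    feedback = ""
    for _ in range(max_attempts):
        code = generate(task + feedback)  # paradigm 1: generate code
        with tempfile.NamedTemporaryFile("w", suffix=".py",
                                         delete=False) as f:
            f.write(code)
            path = f.name
        # Paradigm 3: actually run the result instead of trusting it.
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return code  # it runs; hand back to the human for review
        # Retry with the error appended, no human in the loop.
        feedback = f"\n\nPrevious attempt failed with:\n{result.stderr}"
    return None

print(agent("print a greeting"))
```

The point is the shape: the loop closes on execution, so correctness is checked mechanically rather than by a human reading every line.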

One thing stood out to me in his analysis: what made December 2025 the inflection point wasn't the Opus 4.5 model release — it was changes to the Claude Code harness. The integration between model and harness is where agents are actually differentiated. That's why this isn't commodity software: profits flow to integrated systems, not modular parts. It's why you can't just swap out the model and get the same result.

His demand argument is also more nuanced than most realize. You don't need widespread consumer adoption for compute demand to skyrocket. One person with agency can direct multiple agents simultaneously. A small number of people wielding AI effectively is enough to drive enormous economic output and massive compute consumption.
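A hypothetical picture of what that looks like in practice: one person fanning tasks out to several agents concurrently, with agent_run() standing in for any real agent API:

```python
# Sketch: one person queueing work to several agents at once.
# agent_run() is a stand-in for a real agent API; the sleep fakes
# minutes of model inference happening in parallel.
import asyncio

async def agent_run(task: str) -> str:
    await asyncio.sleep(1)  # pretend this is an agent working
    return f"done: {task}"

async def main() -> None:
    tasks = ["fix the flaky test", "write the migration",
             "draft release notes"]
    # One human dispatches all three; the compute demand is three
    # agents' worth while the human just reviews results as they land.
    for result in await asyncio.gather(*(agent_run(t) for t in tasks)):
        print(result)

asyncio.run(main())
```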

His conclusion: every weakness of the original LLMs is being systematically mitigated. The companies building integrated model+harness systems — Anthropic and OpenAI — are more durable than they appeared even a year ago. The capex is warranted.

"I don't think we're in a bubble (which, paradoxically, maybe is the truest evidence we are)."

— Ben Thompson, Stratechery

Personally, I'd rather go with the optimists. Marks ends his memo with something a friend wrote to him: "I'd rather be an optimist and wrong than a pessimist and right." Most industries haven't even started yet. Most companies are still at the "experimenting with a chatbot" phase. Most people would be shocked by what the current models can already do.

To me, this is still the very beginning — and we're just getting started.