
AI coding (doesn't have to) suck

Why this resonated

These are my thoughts on CJ’s recent video about coding with AI. It’s interesting because I have a lot of respect for him and see him as a genuine, thoughtful person, so I was eager to hear what he thought.

CJ’s tone cuts through all the hype and contrasts with the guru-style prescriptions that flood Twitter: workflows, prompts, and miracle setups that never survive real-world projects. And worse, when people push back on that, the general response is “skill issue”. If CJ has skill issues, then I don’t know what’s left for the rest of us.

For context, this is the video:

Hype vs. reality (and the discourse mismatch)

I appreciate people experimenting with state-of-the-art LLMs, as that’s where we can make advances and get insights. Still, I keep noticing a mismatch between what Twitter says works and what actually works for me day-to-day: short features, short chats, and a hard ceiling on the complexity I let it manage.

I’m not anti-AI, I’m anti-theater. The further I push complexity, the more fragile and less deterministic things become, and the less time I spend solving the problem in front of me. It becomes work about work.

This also shows up with agent overload. Breaking work into increasingly specific agents feels tidy on paper, but in practice, errors compound and become harder to trace. I’m left wondering whether the bad outcome came from the wrong input or the wrong output, compounded across a chain.

How I use AI

I keep the surface area small: short features and short chats, with as much relevant context as possible. I point to the files that contain the logic, to the documentation, and to examples or references. If a feature is complex enough, I assume the model won’t be able to implement it.

There’s a meaningful difference between vibecoding and LLM-assisted coding. The former describes the feature or goal you want; the latter describes the steps (using technical and language-specific terms) that will produce that feature, then lets the model write the code for those steps. That’s where I get the most value: prop drilling, wiring types, and setting up endpoints with boring error handling and null checks.
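To make “boring error handling and null checks” concrete, here’s a minimal sketch of the kind of glue code I mean. The shapes and names (`User`, `Result`, `parseUser`) are hypothetical, invented for illustration; the point is that every branch is trivially verifiable:

```typescript
// Hypothetical example of the grunt work I'd delegate: parse an incoming
// payload, null-check every field, and return a typed result or a
// descriptive error instead of throwing.
type User = { id: number; email: string };

type Result<T> = { ok: true; value: T } | { ok: false; error: string };

function parseUser(payload: unknown): Result<User> {
  // Reject null and non-object payloads up front.
  if (payload === null || typeof payload !== "object") {
    return { ok: false, error: "payload must be an object" };
  }
  const { id, email } = payload as Record<string, unknown>;
  // Field-by-field null checks and type narrowing.
  if (typeof id !== "number" || !Number.isInteger(id)) {
    return { ok: false, error: "id must be an integer" };
  }
  if (typeof email !== "string" || !email.includes("@")) {
    return { ok: false, error: "email must be a valid address" };
  }
  return { ok: true, value: { id, email } };
}
```

None of this is hard to write, but it is tedious, and it’s exactly the kind of code where a precise instruction (“parse this shape, return a discriminated union, no exceptions”) gets a correct answer on the first try.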

I delegate the grunt work that I can verify: refactors, component scaffolds, and logic where I state the inputs, outputs, and constraints. And I write the instructions with the same care I’d put into an issue. I can’t code faster than the LLM, but it’s my responsibility to keep it grounded and correct.
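An issue-style instruction like that can be sketched as a spec in comments followed by the implementation I’d expect back. Everything here (the `LineItem` shape, the function name, the constraints) is a made-up example of the pattern, not code from the video:

```typescript
// Hypothetical issue-style spec, stated before the model writes anything:
//
// Input:  line items as { qty: number; unitCents: number }
// Output: order total in cents
// Constraints: throw on negative qty or non-integer cents;
//              an empty list totals 0.
type LineItem = { qty: number; unitCents: number };

function orderTotalCents(items: LineItem[]): number {
  return items.reduce((sum, { qty, unitCents }) => {
    if (qty < 0) throw new Error("qty must be non-negative");
    if (!Number.isInteger(unitCents)) {
      throw new Error("unitCents must be an integer");
    }
    return sum + qty * unitCents;
  }, 0);
}
```

Because the spec names the inputs, outputs, and failure modes up front, checking the output is a glance, not an investigation.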

Underneath all that, the core truth is: learn to code! First, because it’s still a joy, and making stuff is fun. Watching something exist where it wasn’t before is a magical feeling. But beyond that, it’s fundamental to know how to reason in code, untangle messy flows, and implement fixes. The AI won’t think for me; it will just do what I instruct. Plenty of tasks are trivial if I know the technical target, but they can become insurmountable obstacles if my only tool is “make this feature, don’t make any mistakes”.

My “aha!” moments show up when the model can’t do it: when the output smells like rotten spaghetti, when it writes code I wouldn’t want my name attached to in a `git blame`, or when taking a step back, tidying, and refactoring reveals an elegant path. That feeling hasn’t gone anywhere. Offloading the grunt work frees up the limited number of lines I can write in a day (or delete, if it’s a really good day).

The vendors of complexity

I’m stealing that phrase from DHH because I feel it fits here: there’s a trend of equating more tooling with more performance, of insisting you need to learn this or use that. They try to convince us coders that we need some shiny new tool that, conveniently, they have an affiliate code for or sell a course about. I’ve found it almost always false. When I find myself with a lot of tooling complexity, it usually masks gaps in my understanding. I’ve been skeptical of complexity my whole life, and this AI wave hasn’t changed that.

In the end

Anyway, it’s refreshing to hear someone as thoughtful as CJ talk about coding with AI, and to see a splash of cold water thrown on the heated, overly optimistic hype that does the industry no good. I’m eager to see the follow-up video after a month of AI-free coding. From my perspective, I wouldn’t go back to writing everything by hand, and I feel way more productive than before. AI lets me complete my vision faster and gives me more time to come up with clever solutions when it can’t find its way. I’m confident our brains still do a better job most of the time, and programmers’ doom is nowhere near.
