Personal Taste Is the Moat | A Geek's Page

As AI commoditizes code correctness, the real engineering moat becomes taste—the judgment to reject technically sound solutions that shouldn't exist, proven by a Linux kernel maintainer NACKing an AI-approved patch.

· software engineering
Summary

• A Linux kernel maintainer rejected a patch that passed AI review—not because it was broken, but because it bloated sk_buff, fixed symptoms instead of root causes, and hid constraints from userspace
• AI excels at evaluating if code fits existing rules; taste decides if the rules themselves are being bent in the wrong direction
• The paradox: better AI makes human judgment MORE valuable by removing mechanical friction and surfacing harder questions like "should this abstraction exist at all?"
• When correctness becomes cheap, differentiation shifts to who decides what belongs in the system—judgment formed by years of exposure to great work and living with consequences
• Steve Jobs was right: taste means "exposing yourself to the best things humans have done and bringing those forward"

The author argues that AI has commoditized technical correctness in software engineering—it can review patches, spot bugs, and explain trade-offs—but it cannot answer whether something should exist. That requires taste: judgment compressed by time, formed by studying great systems, watching bad ideas fail, and understanding long-term consequences. The author demonstrates this with a real example: rejecting a Linux kernel patchset that had passed AI-based review. The patch wasn't broken—it solved a real problem with technically coherent code—but it was wrong in ways that only matter long-term: it bloated the sk_buff structure, fixed a symptom (infinite loops) instead of the root cause (enqueuing to the root qdisc), and introduced kernel-internal limits invisible to userspace.

The key distinction is that AI evaluates whether changes fit the rules, while taste decides whether the rules themselves are being bent wrong. AI excels at pattern matching and local correctness, but systems like the Linux kernel are shaped by collective taste accumulated over decades—countless judgment calls about what belongs and what doesn't. These are properties you only feel after years of exposure, not patterns you can extract from data.

The paradox is that better AI makes human judgment more important, not less. As AI removes mechanical friction, it surfaces harder decisions that can't be answered with more data: Should this abstraction exist? Is this trade-off worth locking in? Will this design age gracefully? When execution becomes cheap and correctness abundant, the differentiator moves up the stack: to whoever can recognize a bad direction before it's too late and say "this works, but it shouldn't exist." In the AI era, personal taste—cultivated through exposure to the best work humans have done—becomes the moat.