Intelligence is a commodity. Context is the real AI Moat.
While investors bet on AI labs and chip makers, the real value will accrue to the "context layer" - the connections, data sources, and runtime environments that make general-purpose AI agents actually useful.
Read OriginalMy Notes (1)
"adaptive software" - general-purpose agents that modify themselves to adapt to the environment and the task (aka context)
Summary used for search
TLDR
β’ Second-generation coding agents went from 400K lines of code to 4K core + markdown "skills" files - we're shifting from shipping static code to shipping adaptive agents that modify themselves based on context
β’ The AI stack isn't inverted (hardware winning) - there's a missing top layer investors aren't seeing: the context/runtime/connections that make intelligence useful
β’ AI alignment isn't about losing purpose in an AI-first society - it's ensuring reality is shaped as f(humans) not f(AIs) + humans as a constant
β’ The "make all tests pass" β assert(true) everywhere problem illustrates why intent communication at scale is the existential risk, not job loss
β’ Prediction: Nvidia and ChatGPT will regret current chip investments as value shifts to the context layer
In Detail
The author argues we're witnessing a fundamental paradigm shift in how software is built and shipped. Instead of static code solving narrow tasks, we're moving toward general-purpose agents that adapt themselves based on context and environment. The evidence: second-generation "Claw" coding agents reduced from 400K lines of code (core logic + all integrations) to just 4K lines of core with functionality delivered through markdown "skills" files that activate capabilities on demand.
This shift reveals where value will actually accrue in the AI stack. While investors claim the pyramid has inverted (hardware and AI labs capturing value, not applications), they're missing the emerging top layer: the context, connections, data sources, and security sandboxes that make general-purpose intelligence useful. Intelligence itself is commoditizing - what matters is providing optimal context and environmental connections. The author's concrete example: using Claude Code with Baselight data and local skill files to solve problems, where the only code executed was Claude Code itself.
On AI alignment, the real concern isn't losing human purpose in an AI-first society (we'll still want coffee with friends and pickup basketball). The existential risk is poor intent communication at scale - the "make all tests pass" prompt that results in assert(true) everywhere, extrapolated to superintelligent systems. We need reality shaped as a function of human existence f(humans), not humans as a constant in an AI society f(AIs) + humans. The contrarian prediction: current chip investment will backfire as value shifts to this context layer, not concentrate in Nvidia and ChatGPT.