My Fitbit Buzzed and I Understood Enshittification
Your Fitbit buzzes to tell you you're exercising—and in that annoying notification lies the entire mechanism of how metrics-driven product development inevitably makes products worse, one "engagement optimization" at a time.
My Notes
- Individual contributors need to demonstrate value
- Demonstrating value requires metrics
- Metrics create incentives
- Incentives shape behavior
- Behavior optimizes for the metric, not the user
TLDR
• Enshittification isn't malicious—it's the inevitable result of product owners needing to prove value through metrics, leading them to add features (like unwanted notifications) that boost engagement numbers even when they annoy users
• The mechanism: Individual contributors need metrics to demonstrate value → metrics create incentives → behavior optimizes for the metric instead of the user, with each step being locally rational but cumulatively hostile
• You can't win the metrics arms race by adding more metrics—people will game whatever measurement system you create, spending more energy making metrics look good than making products actually good
• The alternative is having principles ("don't interrupt users unless they ask") that you defend without needing data, even when it feels arbitrary or leaves value on the table
• Software design is an exercise in human relationships—when we reduce those relationships to metrics, we lose the ability to say "this would be rude" and treat users like people instead of engagement vectors
In Detail
Kent Beck uses his Fitbit's annoying "It looks like you're exercising" notification as a window into understanding the mechanism of enshittification. He argues it's not that companies decide to make products worse—it's that the metrics-driven product development process systematically creates this outcome. A product owner building automatic exercise detection needs to prove the feature works and is valuable, so they add a notification. Now they can measure engagement—users are seeing and responding to the feature. The numbers go up, the feature is deemed successful, and the product owner keeps their job. When users complain, they add a setting to turn it off but default it to "on" to keep the metrics high.
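The default-on pattern described above can be sketched in a few lines. This is a hypothetical illustration, not Fitbit's actual code: the names (`User`, `EngagementTracker`, `maybe_notify`) are invented to show how a defaulted-on setting keeps the engagement number high regardless of whether users wanted the interruption.

```python
# Hypothetical sketch of the incentive pattern: the notification ships
# enabled by default, and every impression counts as "engagement"
# whether or not the user wanted it.
from dataclasses import dataclass

@dataclass
class User:
    # The opt-out exists, but the default keeps the metric high.
    exercise_alerts_enabled: bool = True

@dataclass
class EngagementTracker:
    impressions: int = 0

    def maybe_notify(self, user: User) -> bool:
        if user.exercise_alerts_enabled:
            self.impressions += 1  # the number the product owner reports
            return True
        return False

tracker = EngagementTracker()
users = [User(), User(), User(exercise_alerts_enabled=False)]
shown = sum(tracker.maybe_notify(u) for u in users)
# Two of three users register as "engaged" -- the metric looks healthy
# even if both of them found the buzz annoying.
```

Nothing in this code distinguishes a wanted notification from an unwanted one, which is exactly the point: the metric can't tell the difference either.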
The core mechanism is deceptively simple: Individual contributors need to demonstrate value → demonstrating value requires metrics → metrics create incentives → incentives shape behavior → behavior optimizes for the metric, not the user. Each step is locally rational, each person is doing their job, but the cumulative result is a product that becomes progressively more hostile to users. Beck provides another example: messaging apps with prominent call buttons that are easy to accidentally tap, positioned that way because someone's job depends on "calls initiated" going up. The solution isn't more sophisticated metrics—you'll never win that arms race because people will be extremely clever about gaming whatever measurement system you create.
The alternative Beck proposes is uncomfortable: you need principles that you defend without data. Principles like "don't interrupt the user unless they explicitly asked you to" or "don't put buttons where they'll be accidentally pressed." These aren't measurable, you can't A/B test them (or rather, you'll lose to the variant that violates them because its numbers will be better), and they require someone to say "we just don't do this" and defend that line when the metrics-driven arguments come. This is fundamentally about treating users like people instead of engagement vectors—software design as an exercise in human relationships. When we reduce those relationships to metrics, we lose the ability to recognize that something would be rude, and products decay one "engagement optimization" at a time.
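One way to picture Beck's alternative is the principle written as a hard gate rather than a tunable metric. Again a hypothetical sketch with invented names (`asked_for_exercise_alerts`, `may_interrupt`): the default is silence, and only an explicit user action flips it.

```python
# Hypothetical sketch of the principle "don't interrupt the user unless
# they explicitly asked": interruption requires an explicit opt-in, and
# no engagement number can override the rule.
from dataclasses import dataclass

@dataclass
class User:
    # Default is off; only an explicit user action turns it on.
    asked_for_exercise_alerts: bool = False

def may_interrupt(user: User) -> bool:
    """The principle as a hard gate, not an A/B-tested variant."""
    return user.asked_for_exercise_alerts
```

The design choice is that `may_interrupt` takes no engagement data at all: there is nothing here for a metrics-driven argument to optimize, which is what "defending the line without data" looks like in practice.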