In safety-critical systems, we distinguish between *accidents* (actual loss, e.g... | Hacker News
Safety-critical systems don't prevent accidents—they prevent hazardous states, because you can't control the environment but you can control whether bad conditions would cause disaster.
TLDR
• The key equation: hazardous state + environmental conditions = accident. Focus on what you can control (the system state), not what you can't (the environment)
• Aviation example: planes must land with 30+ minutes of fuel remaining. Less than that is a hazardous state—it didn't crash, but only because the environment cooperated
• Applied to parenting: don't make kids promise they won't fall off cliffs (uncontrollable outcome), make them promise to stay away from cliff edges (controllable state)
• This is a dynamic control problem with multiple controllers (pilots, ATC, computers, regulators) continuously adjusting to keep systems out of hazardous states
• Predicting hazardous states is vastly easier than predicting accidents, making this framework actually implementable
In Detail
The fundamental insight is reframing safety from preventing accidents to preventing hazardous states. The equation is simple: hazardous state + environmental conditions = accident. Since you can only control the system and not its environment, you focus on keeping the system out of states where bad environmental conditions would cause disaster. Trying to prevent accidents without tracking hazardous states means relying on the environment always being favorable—a strategy guaranteed to fail eventually.
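The core logic can be sketched as a pair of predicates (a hypothetical illustration, not code from the article): an accident requires both a hazardous state and adverse conditions, but only the first factor is observable and controllable in advance.

```python
# Hypothetical sketch of the framework's core claim: an accident occurs
# only when a hazardous system state coincides with adverse environmental
# conditions. Only the system state is under our control.

def is_accident(hazardous_state: bool, adverse_environment: bool) -> bool:
    """An accident requires both factors to coincide."""
    return hazardous_state and adverse_environment

def safety_strategy(hazardous_state: bool) -> str:
    """Act on the controllable factor; the environment may do anything."""
    return "intervene" if hazardous_state else "continue"

# A hazardous state alone is not an accident...
assert is_accident(True, False) is False
# ...but it leaves the outcome up to the environment.
assert is_accident(True, True) is True
# A non-hazardous state is safe regardless of what the environment does.
assert is_accident(False, True) is False
```

This is why tracking accidents alone is a losing strategy: `is_accident` depends on an input you cannot set, while `safety_strategy` depends only on the one you can.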
In aviation, this manifests as the requirement that planes land with at least 30 minutes of fuel remaining (45 for turboprops). Landing with less fuel isn't necessarily an accident, but it is a hazardous state: it would only take bad environmental conditions to turn it into a crash. The system is designed so planes never enter this state. The author extends this to parenting: rather than asking a child to promise they won't fall off a cliff (an outcome they can't control), ask them to promise to stay away from cliff edges (a state they can control).
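The fuel-reserve rule can be expressed as a simple state check. This is an illustrative sketch using the figures from the article (30 minutes of reserve, 45 for turboprops); the function and constant names are hypothetical, not from any real avionics system.

```python
# Illustrative check of the fuel-reserve rule: landing below the required
# reserve is a hazardous state even when no accident occurs. Thresholds
# are the figures quoted in the article.

RESERVE_MINUTES = {"jet": 30, "turboprop": 45}

def is_hazardous_fuel_state(fuel_minutes_at_landing: float,
                            aircraft_type: str = "jet") -> bool:
    """True if the landing fuel is below the required reserve."""
    return fuel_minutes_at_landing < RESERVE_MINUTES[aircraft_type]

# Landing with 25 minutes of fuel didn't crash the plane, but it was
# a hazardous state: only a cooperative environment prevented a crash.
assert is_hazardous_fuel_state(25, "jet") is True
assert is_hazardous_fuel_state(40, "turboprop") is True
assert is_hazardous_fuel_state(35, "jet") is False
```

Note what the check does not ask: whether anything bad actually happened. The state itself is the alarm condition.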
This is fundamentally a dynamic control problem involving multiple controllers—flight computers, pilots, ATC, dispatchers, regulators—all observing system state, running mental models of future evolution, and making control inputs to avoid hazards. When a system enters a hazardous state, it means some controller had inadequate feedback, inadequate mental models, or insufficient control authority. The framework's power comes from the fact that predicting hazardous states is much easier than predicting accidents, enabling proactive design rather than reactive learning from disasters.
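The control-loop view described above can be sketched as follows. This is a minimal, assumed structure (the article describes the loop conceptually, not in code): a controller observes state, projects it forward with a model, and issues a control input before the hazardous state is ever entered.

```python
# Minimal sketch of one controller's loop: observe -> predict -> act.
# All names and numbers are hypothetical; the reserve threshold reuses
# the article's 30-minute figure.

from dataclasses import dataclass

@dataclass
class State:
    fuel_minutes: float
    burn_per_minute: float  # projected consumption rate

def predict(state: State, horizon_minutes: float) -> State:
    """The controller's mental model: project the state forward in time."""
    remaining = state.fuel_minutes - state.burn_per_minute * horizon_minutes
    return State(remaining, state.burn_per_minute)

def control_step(state: State, reserve: float = 30.0,
                 horizon: float = 60.0) -> str:
    """Issue a control input if the *projected* state is hazardous."""
    projected = predict(state, horizon)
    if projected.fuel_minutes < reserve:
        return "divert"  # act before the hazardous state is reached
    return "continue"

# 80 minutes of fuel at 1 min/min projects to 20 minutes at the horizon,
# below reserve, so the controller diverts proactively.
assert control_step(State(80.0, 1.0)) == "divert"
assert control_step(State(120.0, 1.0)) == "continue"
```

The key property is that `control_step` acts on the prediction, not the accident: this is the sense in which predicting hazardous states is easier than predicting accidents, and it is what makes the framework proactive rather than reactive. Inadequate feedback corresponds to a stale `State`, an inadequate mental model to a wrong `predict`, and insufficient control authority to a `control_step` whose output nobody obeys.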