Building Safe Systems: Starting from Introspection
Umair Siddique
reasonX Team

The past decade has revealed a growing disconnect in safety engineering: not with the discipline itself, but with how safety professionals approach rapidly evolving autonomous technologies.
Picture this: A design engineer walks into a safety review (sounds like the start of a joke, but for many teams, it's the beginning of a headache). They're presenting a sophisticated perception system that processes gigabytes of point cloud data, neural network inferences, and multi-sensor fusion streams. The safety engineer responds with the same template they've used since assessing windshield wiper signals.
This isn't just a generational gap; it's a dimensional one.
The One-Dimensional Safety Mindset
Some engineers still approach safety analysis as if every system were a one-dimensional signal. "Ah, voltage over 5V? Bad! Under 3V? Also bad! Next problem!" But modern sensor streams demand more nuanced thinking. I have witnessed exchanges like this one:
Design Engineer: "So, we need safety requirements for the sensors and perception stack..."
Safety Engineer: "Simple. I see two failure modes: incorrect sensor data and loss of sensor data, incorrect world estimate and loss of world estimate..."
Design Engineer: "Right... but what exactly do you mean by 'incorrect'? We're processing 240,000 points per second in 3D space..."
Safety Engineer: (confidently flips through the standard) "Section 7.4.5.2 clearly states we must identify incorrect data."
The Complexity Gap
The disconnect becomes even more apparent when safety analyses treat complex data streams like simple signals. While development teams grapple with subtle edge cases (we hope they do), like distinguishing a paper bag from a small child in varying lighting conditions, they're often met with safety assessments that read as if they were written for a 1980s assembly-line robot.
Simply identifying "incorrect sensor data" as a failure mode is not enough (although a good starting point). It's like telling a chef their food shouldn't "taste bad" without specifying whether you mean undercooked, over-seasoned, or plated upside-down.
Beyond the Comfort Zone
This expertise gap isn't about the validity of safety standards; those remain crucial guardrails. Rather, it's about safety professionals who haven't ventured beyond their traditional comfort zones to truly understand the technologies they're assessing.
We need safety engineers who can engage in meaningful discussions about:
- Perception accuracy thresholds
- Neural network confidence scores
- Sensor fusion reliability metrics
Not just quote standard clauses like magic spells warding off liability.
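As a sketch of where that kind of discussion can land, here is a hypothetical runtime monitor that turns those three topics into checkable conditions. The thresholds, the Detection fields, and the simple camera/LiDAR agreement check are assumptions made for illustration, not a recommended design.

```python
from dataclasses import dataclass


@dataclass
class Detection:
    label: str          # object class, e.g. "pedestrian"
    confidence: float   # network confidence score in [0, 1]
    distance_m: float   # estimated range to the object


def perception_health(camera: list[Detection],
                      lidar: list[Detection],
                      min_confidence: float = 0.6,    # assumed acceptance threshold
                      max_range_gap_m: float = 1.5):  # assumed fusion disagreement budget
    """Return human-readable concerns about the current perception output."""
    concerns = []

    # Accuracy/confidence check: low-confidence detections are candidates
    # for degraded-mode behaviour rather than silent acceptance.
    for det in camera + lidar:
        if det.confidence < min_confidence:
            concerns.append(f"low confidence ({det.confidence:.2f}) on {det.label}")

    # Crude fusion consistency check: matched labels should agree on range.
    for cam_det in camera:
        for lid_det in lidar:
            if cam_det.label == lid_det.label:
                gap = abs(cam_det.distance_m - lid_det.distance_m)
                if gap > max_range_gap_m:
                    concerns.append(
                        f"camera/LiDAR range disagreement of {gap:.1f} m on {cam_det.label}"
                    )
    return concerns


# Example: a pedestrian the camera sees at 12.0 m but LiDAR places at 15.2 m
print(perception_health(
    camera=[Detection("pedestrian", 0.55, 12.0)],
    lidar=[Detection("pedestrian", 0.91, 15.2)],
))
```

The point is not this particular code; it is that "perception accuracy" and "fusion reliability" only become usable safety requirements once someone can write them down this concretely.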
The Path Forward
The path forward requires safety professionals to recognize that standards are a foundation, not a fortress. The days of treating every system like a simple voltage signal are behind us (though I'm sure some still miss them; life was simpler when your biggest worry was a stuck relay).
To truly serve their crucial role, safety professionals must blend their safety expertise with deep domain knowledge of modern technologies. Only then can they provide the nuanced, technically informed guidance that helps design teams build safer systems more effectively, rather than leaving them to decode what "incorrect data" means for a system processing millions of data points per second in real time.
Building Bridges, Not Walls
At reasonX, we believe the future of safety engineering lies in bridging this gap. Our platform is designed to help safety professionals understand complex autonomous systems while providing development teams with safety guidance that actually makes sense for their technology.
Because when safety engineers and design engineers speak the same language, we all build better, safer systems.
