[SystemSafety] Static Analysis

Patrick Graydon patrick.graydon at mdh.se
Mon Mar 3 08:02:45 CET 2014


Hmm.  While my (possibly ill-informed) opinion is that the non-safety world over-uses a try-it-and-see approach,  I wonder if we can categorically say that try-it-and-see is /never/ appropriate in safety.

Here’s my thought experiment.  Suppose that you work for a car company and you come up with some brilliant idea for an active safety system that will help to save drivers from themselves.  You can (and, I think we agree, should) engineer your implementation such that the risks associated with predictable system and component failures are well-managed*.

But the actual risk that drivers will experience in the field depends a lot on how the dodgy, unpredictable meat component reacts to the addition of this new system.  Automation fatigue might set in.  Drivers might drive faster, brake later, etc., trusting the new technology to save them.  Etc.

So you set up some simulators, recruit a few dozen test subjects to drive in simulated environments, and see whether there is a difference between an unmodified simulator and one modified to reflect your new gizmo.  Your technology seems to substantially reduce aggregate risk, but a simulator is a simulator and the participants know it**.  So you move on to a prototype on a closed test track with extra people around to monitor what’s going on, a system to remotely brake the car if needed, medics on hand in case something goes horribly wrong, etc.  Again, your study shows good news, but again this is not the real world and so the study is not perfectly convincing.
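(To be concrete about what ‘see whether there is a difference’ might mean, here is a minimal sketch, in Python, of the kind of comparison one might run on the simulator data.  The incident counts and the two-proportion z-test are purely illustrative assumptions on my part, not results from any real study.)

    import math

    # Hypothetical, made-up numbers: incidents per simulated trial in the
    # unmodified simulator vs. the simulator with the new system fitted.
    baseline_incidents, baseline_trials = 48, 400
    modified_incidents, modified_trials = 29, 400

    p1 = baseline_incidents / baseline_trials
    p2 = modified_incidents / modified_trials

    # Pooled two-proportion z-test: is the incident rate lower with the system?
    p_pool = (baseline_incidents + modified_incidents) / (baseline_trials + modified_trials)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / baseline_trials + 1 / modified_trials))
    z = (p1 - p2) / se

    print(f"incident rate without system: {p1:.3f}")
    print(f"incident rate with system:    {p2:.3f}")
    print(f"z statistic: {z:.2f}")  # z > 1.64 suggests a one-sided improvement at roughly the 5% level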

Every method of assessing how drivers will react to the technology has a key weakness***: the drivers know that they are in a study, not the real world, and so might react differently.  Certainly they are less likely to discipline their children in the back seat or ring someone on their mobile while your camera crews are watching.  But every sincere effort to make an unbiased assessment of the effect of the system shows that introducing it would reduce aggregate risk to the motoring public****.

Morally, ethically, should your company:
(a)  Not release your new technology until you can accurately***** assess total risk, including contributions from how drivers will use it in practice
(b)  Release your new technology on a few selected vehicles, monitor real-world use as closely as privacy regulations allow, and (i) recall those cars, (ii) phase out the technology, or (iii) roll the technology out more broadly as the real-world results become clear
(c)  Release the new technology on every model you can (monitoring as closely as practicable, as per [b]) because the best information in hand suggests that this will save lives

If it is /never/ appropriate to try it and see, the answer must be (a).  But, speaking for myself only as an occasional driver, I’d rather go with (b) or (c), as these seem to offer a quicker path to overall risk reduction.

Disagreement welcome, of course.

— Patrick

Dr Patrick John Graydon
Postdoctoral Research Fellow
School of Innovation, Design, and Engineering (IDT)
Mälardalens Högskola (MDH), Västerås, Sweden


* I’ll dodge the thorny question of /How good is good enough?/ here because my point lies elsewhere.

**  Your simulation is also going to be of chosen risky manoeuvres or accident scenarios.  If you tried to simulate typical driving, you'd have to simulate for years before anyone had an accident in the simulator.

***  Perhaps there are some that I don’t know about.  I’m not a human-factors expert.  This is just a thought experiment.

****  Maybe we should also include other road users here.  I’m game.  Again, this is just a thought experiment.

*****  For thought-experiment purposes, why don't we say ‘accurately’ means that uncertainty in the risk estimate from predictable sources is such that an upper-bound estimate still represents an improvement?
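(If it helps, here is one way of making that upper-bound criterion concrete, again as a Python sketch with invented numbers.  The one-sided 95% normal-approximation bound is my assumption about what a suitable bound might be; the footnote deliberately leaves that choice open.)

    # Hypothetical decision rule: release only if even the upper-bound
    # estimate of risk with the new system beats the current baseline.
    baseline_risk = 5.0e-7   # made-up: serious accidents per vehicle-hour today

    estimated_risk = 3.2e-7  # made-up: point estimate with the new system
    std_error = 0.8e-7       # made-up: uncertainty from predictable sources

    upper_bound = estimated_risk + 1.645 * std_error  # one-sided 95% normal bound

    if upper_bound < baseline_risk:
        print("Upper-bound risk still beats the baseline: criterion met.")
    else:
        print("Uncertainty too large: criterion not met under this rule.")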

