[SystemSafety] Comparing reliability predictions with reality

Gareth Lock gareth at humaninthesystem.co.uk
Wed Feb 26 16:42:48 CET 2025


Agreed about single points of failure in complex, adaptive and emergent systems.

Avionics might be 'reliable' at a technology level, but can they be reliable in a complex system that involves people and their interactions? AF447 is an example.

In the safety science space, there isn't a model of success, because at some point success and failure start from the same point in time and space. Both success and failure emerge from complex interactions; this is the premise of Safety-II/Resilience Engineering.

Scott Snook's 'Friendly Fire' is a great book on this topic. "A basic assumption for the analysis was that we cannot fully understand complex organisational events such as this shootdown by treating them as isolated events. What Snook found in his research were largely normal people, behaving in normal ways, in a normal organisation. Accidents can occur from the unanticipated interaction of non-failing components. Independently benign factors at multiple levels of analysis interact in unanticipated ways over time, often leading to tragedy." - https://www.mindtherisk.com/literature/150-friendly-fire-the-accidental-shootdown-of-u-s-black-hawks-over-northern-iraq-by-scott-a-snook

Perrow's Normal Accident Theory also addresses this: in complex, tightly coupled (socio-technical) systems, failure is inevitable.
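To put a rough number on the gap between prediction and reality, here is a toy Monte Carlo sketch (my own illustration with made-up figures, not something from Perrow or Snook). Two components each meet an assumed 1e-3 per-demand failure probability, so an independence-based prediction puts the joint failure rate at 1e-6. Add a rarely occurring shared condition that stresses both at once and the observed joint rate comes out orders of magnitude worse, even though each component still looks 'reliable' on its own.

import random

random.seed(1)

P_FAIL = 1e-3        # assumed per-component failure probability (illustrative)
P_COUPLING = 5e-3    # assumed chance of a shared condition stressing both (illustrative)
P_STRESSED = 0.2     # assumed failure probability under that shared condition (illustrative)
TRIALS = 1_000_000

# Prediction under the usual independence assumption: both components must fail.
predicted = P_FAIL * P_FAIL

observed = 0
for _ in range(TRIALS):
    shared_stress = random.random() < P_COUPLING
    p = P_STRESSED if shared_stress else P_FAIL
    a_fails = random.random() < p   # component A on this demand
    b_fails = random.random() < p   # component B on this demand
    if a_fails and b_fails:
        observed += 1

print(f"predicted (independent): {predicted:.2e}")
print(f"observed  (coupled):     {observed / TRIALS:.2e}")

The point of the sketch is only that the divergence comes from the interaction (the shared condition), not from either component 'failing' its own reliability target.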

Research/books by Erik Hollnagel and David Woods relating to Resilience Engineering are worth pursuing if anyone is interested.

Regards

Gareth Lock MSc 
Owner, Trainer and Coach
m. +44 7966 483832
w. humaninthesystem.co.uk
LinkedIn: https://www.linkedin.com/in/garethlock/

Malmesbury | Wilts | SN16 9FX
Transforming teams. Unlocking human potential.



> On 26 Feb 2025, at 15:35, Robert P Schaefer <rps at mit.edu> wrote:
> 
> hi, Peter.
> 
>  sorry, for some reason this went to my spam folder.
> 
>  I have worked on my share of successful large projects and failed projects. 
> 
>  I cannot discern a difference that can be tied to one single thing. 
> 
>  And once you look at many different things, causality quickly gets lost.
> 
>  I’ve read, and believe in, Reason's Swiss cheese model of faults. I wonder if there’s an equivalent Swiss cheese model of success?
> 
> bob s.
> 
>> On Feb 24, 2025, at 11:51 AM, Prof. Dr. Peter Bernard Ladkin <ladkin at causalis.com> wrote:
>> 
>> Bob,
>> 
>> On 2025-02-24 16:38, Robert P Schaefer wrote:
>>> short comment, processes that are viable on small projects do not (and I believe cannot) scale to large projects
>> 
>> You must surely know that I am aware of scaling problems. I do not go with your "cannot".
>> 
>> What do you say to the manifest success with the reliability of critical avionics?
>> 
>> PBL
>> 
>> Prof. Dr. Peter Bernard Ladkin
>> Causalis Limited/Causalis IngenieurGmbH, Bielefeld, Germany
>> Tel: +49 (0)521 3 29 31 00
>> 
> 
> _______________________________________________
> The System Safety Mailing List
> systemsafety at TechFak.Uni-Bielefeld.DE
> Manage your subscription: https://lists.techfak.uni-bielefeld.de/mailman/listinfo/systemsafety
