[SystemSafety] Functional hazard analysis, does it work?

paul_e.bennett at topmail.co.uk
Tue Jan 19 13:31:57 CET 2016


On 19/01/2016 at 7:49 AM, "Peter Bernard Ladkin" <ladkin at rvs.uni-bielefeld.de> wrote:

[%X]

>Ah, the question of completeness! Which some people think is of 
>its nature not answerable in the
>positive. Hazard analysis teams spend much of their time 
>discussing whether the analysis is
>complete, and any decent HazAn method comes with a usually 
>informal relative completeness test
>(many of them are not "decent"!). So it seems to me very odd that 
>people also claim that
>"completeness is impossible".

Completeness may be a difficult objective to prove in a mathematical 
sense, but fortunately we are only required to use our best efforts in
this regard, applying the most up-to-date and emerging industry-accepted
techniques that are relevant to the development.

>I think it is necessary during a HazAn to formulate an objective 
>criterion of relative
>completeness and show you have identified all possible hazards 
>according to that criterion. Then
>ask yourselves what phenomena there are which are not covered by 
>the criterion and attempt to
>characterise those.
>
>One way to formulate the criterion is to develop an ontology. The 
>system consists of a collection
>of objects with properties and relations between them. List them. 
>All. Then you can argue that
>functional hazards are those hazards which are expressible in that 
>vocabulary and with a bit of
>luck and a lot of rigor you can list them all and show that you 
>have done so.

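The ontology-driven enumeration Peter describes can be sketched in a few
lines of code. This is only an illustrative toy, not OHA itself: the
pump/valve/tank vocabulary and the rule for which states count as
hazardous are invented for the example.

```python
# Toy sketch of using an ontology as a relative-completeness criterion.
# The vocabulary and hazard rule below are hypothetical examples.
from itertools import product

# 1. List ALL the system's objects, properties and relations.
objects    = ["pump", "valve", "tank"]
properties = ["on", "off", "stuck", "leaking"]

# 2. Candidate functional hazards are exactly the states expressible
#    in that vocabulary, so they can be enumerated mechanically.
state_space = [f"{o} is {p}" for o, p in product(objects, properties)]

# 3. The analysis is relatively complete w.r.t. the ontology once every
#    expressible state has been assessed as hazardous or benign.
#    (Here, a toy rule: "stuck" and "leaking" states are hazardous.)
assessed = {s: s.endswith(("stuck", "leaking")) for s in state_space}
hazards = [s for s, is_hazard in assessed.items() if is_hazard]

# Relative-completeness check: no expressible state was left unassessed.
assert set(assessed) == set(state_space)
print(len(state_space), "expressible states,", len(hazards), "hazardous")
```

The point of the check at the end is the "objective criterion" Peter
mentions: what remains is to argue about phenomena the vocabulary itself
cannot express, which no enumeration can settle.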
This is one of the things the 00-55 and 00-56 lists attempted to do, and
which I have also added to. I have posted my list to Andy (who requested a copy).

>This is what we do and it works. We have a name for it: 
>Ontological Hazard Analysis (OHA).
>
>You might like to look at Daniel Jackson's talk "How to Prevent 
>Disasters" from November 2010 in
>http://people.csail.mit.edu/dnj/talks/ It, and the ensuing 
>discussion on the York list, arose out
>of Daniel's observation through use of formal analysis that an 
>example in Nancy Leveson's book did
>not render a complete hazard analysis. Jan Sanders had a go at the 
>example with OHA and found some
>features which Daniel's analysis had also not identified. The 
>discussion on the York list is
>archived at https://www.cs.york.ac.uk/hise/safety-critical-
>archive/2010/ and starts with Daniel's
>message of October 10, 2010 entitled "software hazard analysis not 
>useful?". I should probably
>write a summary at some point, since this issue recurs.

As a STEM Ambassador, I also visit schools when they run careers days.
One student asked, "What is the most important attribute for the
accomplishment of your job?" My answer was "Imagination". I explained
that you need to be able to imagine all the ways the system might be
used and abused, and to evaluate what hazards would emerge from such
use. So, even if you think you have a way to produce a complete list of
hazards that apply to your system and its use, there are still probably
ways in which it can throw you a new hazard to contend with.

Regards

Paul E. Bennett IEng MIET
Systems Engineer

-- 
********************************************************************
Paul E. Bennett IEng MIET.....<email://Paul_E.Bennett@topmail.co.uk>
Forth based HIDECS Consultancy.............<http://www.hidecs.co.uk>
Mob: +44 (0)7811-639972
Tel: +44 (0)1392-426688
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************
