[SystemSafety] Functional hazard analysis, does it work?

DREW Rae d.rae at griffith.edu.au
Wed Jan 20 14:06:54 CET 2016


Matthew,
I mean that the real world and our understanding of it don't stand still.
Take, for example, your FFA. It would have higher immediate fidelity
performed with modes included than without - but that means it requires
more effort. More effort means dedication of resource (so it's not being
performed by the people most familiar with the design), time delay (so it
falls out of sync with the evolving design) and reluctance to admit error
or update when things change (since it's a bigger, bulkier thing to
update). All of these things mean that if the FFA finds something weird,
it's as likely to be a problem with the FFA's concept of the system as it
is with the system itself - or worse, assumed to be a problem with the FFA
when it is actually a problem with the system. How many safety
recommendations get answered with "Oh, we already changed that"? (Hint:
lots.) Or with "That actually can't happen because ... "? (Lots again.)
For every one of those, there could be something the analysis _didn't_
find for similar reasons.

Hints for achieving parsimony of analysis:
 1) Be vicious about pruning the initial set of functions, and getting
them neatly and consistently worded. If the system can't be described in
fewer than 20 functions, you're analysing at a level inappropriate for
something as informal as FFA.

 2) Make sure the explicitly stated goal of the analysis is to identify
ways to improve the system. That's the deliverable, not the FFA tables.
Make sure everyone understands that the tables don't demonstrate anything,
so there's no point in making them elaborate, or worrying about whether
something gets recorded as an "omission" or an "incorrect", or whether it's
okay to combine mode 7 and mode 8 because the wording is exactly the same,
or any of the other countless details and petty arguments that distract
from finding ways to make the system better. (A minimal sketch of such a
lightweight worksheet follows this list.)

 3) Do it in pairs, and mandate that each pair must contain someone
responsible for the actual design of the bit being analysed. (This is for
psychological and political reasons as well as technical ones - if key
design team members are burning time, there's immediate pressure to make
the analysis relevant to the design and get it finished).
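
Here's the promised sketch of such a deliberately lightweight worksheet,
with Python standing in as the notation. Everything in it - the function
names, guidewords and fields - is hypothetical; the point is only how
small the recorded artefact can stay:

# A minimal FFA worksheet: a short, consistently worded function list
# crossed with standard guidewords. The rows are working notes; the
# recommendations are the actual deliverable.

FUNCTIONS = [
    # Keep this list under ~20 entries, all worded the same way.
    "Measure cabin pressure",
    "Report cabin pressure to crew",
    "Vent excess cabin pressure",
]

GUIDEWORDS = ["omission", "commission", "incorrect", "early/late"]

def worksheet(functions, guidewords):
    """Yield one row per (function, guideword) pair, filled in live."""
    for fn in functions:
        for gw in guidewords:
            yield {"function": fn, "failure": gw,
                   "effect": None, "recommendation": None}

recommendations = []
for row in worksheet(FUNCTIONS, GUIDEWORDS):
    # Discuss the row; record an effect only if it matters, and a
    # recommendation only if the discussion found a way to improve the
    # system. Nothing else about the row needs to survive the session.
    if row["recommendation"] is not None:
        recommendations.append(row["recommendation"])

Something this small is a five-minute job to update when the design
changes, which is exactly what keeps it in sync with the real system.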

My safety podcast: disastercast.co.uk
My mobile (from October 6th): 0450 161 361

On 20 January 2016 at 22:41, Matthew Squair <mattsquair at gmail.com> wrote:

> Hi Drew, I'm not sure I get why more effort in modeling would cause the
> model to drift away from the real world. Do you mean that there's a focus
> on 'truth in the model' vs a description of the real world?
>
> As a side note, one of the things I am not looking forward to is the death
> of a thousand FHA tables. Does anyone have any thoughts on how to achieve
> parsimony of analysis?
>
> Matthew Squair
>
> MIEAust, CPEng
> Mob: +61 488770655
> Email: Mattsquair at gmail.com
> Web: http://criticaluncertainties.com
>
> On 20 Jan 2016, at 9:28 PM, Martyn Thomas <martyn at thomas-associates.co.uk>
> wrote:
>
> On 20/01/2016 07:06, DREW Rae wrote:
>
> The more effort you put into creating an analysable model of the real
> world, the less that model looks like the real world and the greater the
> chance that the safety problems will exist outside the analysis altogether.
>
> Drew
>
> Somehow, you have to be satisfied that you understand well enough what you
> are trying to do. When you believe you have achieved this, wouldn't you
> agree that expressing your results formally can only be beneficial? Why
> would you choose to write things down informally if you had a way to do so
> formally and there were tools that would then tell you about errors and
> inconsistencies?
>
> Trivially, we can partition our task into two objectives.
>
> The first is to establish the functionality, interfaces to other systems
> and the safety and security properties that we need. The second is to
> implement these in a system and generate adequate evidence that we have
> done so successfully.
>
> The first is hard and inherently contains some steps that cannot be fully
> formalised.  (I'll assume we agree about that and leave any discussion
> about it to a separate thread). But once we have completed this objective
> to the extent that we consider to be sufficient to enable the second
> objective to proceed to a successful conclusion, it is possible to attempt
> a formal model of the functionality and properties that we have
> established.
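>
> As an illustration only (the property, names and the limit below are
> invented, and Python is standing in for whatever formal notation and
> tooling you prefer), one established safety property might be written
> down executably like this:
>
> PRESSURE_LIMIT_KPA = 110.0  # hypothetical safe upper bound
>
> def relief_valve_commanded(pressure_kpa):
>     """Stand-in for the design model under analysis."""
>     return pressure_kpa > PRESSURE_LIMIT_KPA
>
> def check_overpressure_property(model, sample_pressures):
>     """Whenever pressure exceeds the limit, the model must command
>     the relief valve. A failing assertion is exactly the kind of
>     oversight or contradiction an informal statement can hide."""
>     for p in sample_pressures:
>         if p > PRESSURE_LIMIT_KPA:
>             assert model(p), "property violated at %s kPa" % p
>
> check_overpressure_property(relief_valve_commanded,
>                             [100.0, 110.0, 110.1, 150.0])
>
> A tool checking properties in this form is what reports the errors and
> inconsistencies before the implementation does.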
>
> I don't see how doing so could possibly weaken the work we have already
> completed - indeed, it will probably reveal some oversights and
> contradictions that will help us to improve it.
>
> It is likely also to reveal some functionality or properties or interfaces
> that we cannot formalise. That's a useful warning sign, because it
> indicates areas where our knowledge is incomplete (perhaps inevitably so)
> and where we shall need to direct our best efforts to mitigate the risks
> that result from this incomplete knowledge.
>
> It will also give you a firm starting point for the second objective and,
> in my experience, reduce the cost of this second stage whilst improving the
> quality (assessed on whatever measures you would like to choose).
>
> Martyn
>
> _______________________________________________
> The System Safety Mailing List
> systemsafety at TechFak.Uni-Bielefeld.DE