[SystemSafety] Functional hazard analysis, does it work?

DREW Rae d.rae at griffith.edu.au
Wed Jan 20 03:20:07 CET 2016


Matthew,

My opinion/experience only - I don't have any strong evidence for this.
There are three broad areas risk assessment can mess up. There's the
non-deterministic bit equivalent to external validity for research - are
there things that are outside the bounds of your analysis or that are
missed through the modelling and abstraction process? There's the internal
completeness question discussed by Peter, which is in theory deterministic
if and only if you can create a complete causal model of your system for
the properties of interest, and keep it accurate. Then there's the question
of whether you understand the identified risks well enough to mitigate
them, and to ensure that the mitigations are reliable.

FHA is reasonable at the first and last ones - the imaginative anticipation
side of things - for things that can naturally be described as a set of
functions. NB - anything _can_ be described by functions, but the more you
"force it to fit", the harder it is to extrapolate what might happen. For
the good-fit systems, hazards are the same type of thing as the functions,
they're just unintended and dangerous. A good example is hazard analysis of
software control components within a larger system. It's fairly
straightforward for people familiar with the larger system to consider what
the software is supposed to do (the functions) and project what else it
could conceivably do. Functions interacting with other functions is less of
a problem than you might think, because once you're in the space of
thinking about what the system _could_ do, instead of what it is meant to
do, interaction is just a potential cause of the dangerous behavior, not a
brand new dangerous behavior.
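The "project what else it could conceivably do" step above can be sketched as a simple guideword expansion over the intended functions. This is only a toy illustration of the idea; the function names and guidewords below are invented, not taken from any particular standard:

```python
# Toy functional failure analysis (FFA) sketch: cross each intended
# function with standard failure guidewords, so each pair prompts the
# question "could the software conceivably do this, and is it dangerous?"
# Function names and guidewords are illustrative only.

FUNCTIONS = [
    "open inlet valve",
    "close inlet valve",
    "report tank level",
]

GUIDEWORDS = [
    "not provided when demanded",   # loss of function
    "provided when not demanded",   # unintended function
    "provided incorrectly",         # wrong value, timing, or order
]

def enumerate_failure_modes(functions, guidewords):
    """Cross every function with every guideword; each pair is a
    candidate hazardous behaviour to assess for credibility."""
    return [(f, g) for f in functions for g in guidewords]

if __name__ == "__main__":
    for function, guideword in enumerate_failure_modes(FUNCTIONS, GUIDEWORDS):
        print(f"{function} -- {guideword}")
```

Note that an interaction between functions shows up here as a *cause* attached to one of these rows, not as an extra row, which is why interactions don't explode the table the way you might expect.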

Systems for which FHA is a bad fit are those where the hazards are
intuitively a different type of thing than the functions. "Chemical plant"
is an easy one to see the difficulty, but for similar reasons database
software is also a bad fit.

When it comes to internal completeness, FHA/FFA is awful. In particular,
the output is very hard to review. For that reason I've always taught that
it should be used as a tool to support design, not as a way to validate
design. I'm sure we've all got opinions about the best type of risk
assessment to apply to a completed design, but FHA/FFA ain't it. It works
best "quick and dirty", giving quick turnaround and better designs,
followed by something more rigorous to "verify" the design against the
safety requirements.

That's where modes create problems though. Considering different modes of
operation blows out the size and time of the FFA, so that it lags behind
the design. Worst example I've seen, they applied FFA to the _requirements
document_, finishing two years after the design was finalised.
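The blow-out from modes is just combinatorial: the worksheet grows as functions × guidewords × modes. The numbers here are invented purely to show the shape of the growth:

```python
# The mode blow-out is multiplicative: the FFA worksheet grows as
# functions x guidewords x modes. Figures below are invented for
# illustration only.

def ffa_rows(n_functions, n_guidewords, n_modes=1):
    """Number of worksheet rows when every function is analysed
    against every guideword in every operating mode."""
    return n_functions * n_guidewords * n_modes

# 50 functions, 3 guidewords: 150 rows -- reviewable on a design's timescale.
print(ffa_rows(50, 3))      # 150
# The same system across 6 operating modes: 900 rows.
print(ffa_rows(50, 3, 6))   # 900
```

At a few hundred rows the analysis can keep pace with design iterations; at several thousand it lags the design, which is exactly the failure mode described above.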


My safety podcast: disastercast.co.uk
My mobile (from October 6th): 0450 161 361

On 19 January 2016 at 12:20, Matthew Squair <mattsquair at gmail.com> wrote:

> Thx Andy,
>
> Though I'm not a Dr, that's the wife. :))
>
> Matthew Squair
>
> MIEAust, CPEng
> Mob: +61 488770655
> Email: Mattsquair at gmail.com
> Web: http://criticaluncertainties.com
>
> On 19 Jan 2016, at 12:25 PM, andy <loeblas at comcast.net> wrote:
>
> Dr. Squire;
>
> I have had these same kinds of questions in the past.  I have studied the
> relationship between probabilistic and non-probabilistic risk assessment
> mostly as a result of a project I worked on for the U.S. Nuclear Regulatory
> Commission regarding digital systems reliability versus non-digital systems
> for safety critical power reactor control.  I have also studied the
> statistical work executed by the London folks on common cause failure and
> defense in depth.  I believe probabilistic risk assessment is a
> bureaucratic, reductionist, and none too complete analysis of risk
> assessment focused on a “guns and guards” mentality dominant in the USA.  I
> have written, 3 or 4 years ago, white papers on my conclusions and readings
> and done some graphic representations of the NRC regulations on common
> cause failure.  I have studied Nancy Leveson’s systems approach and taken
> her week long course, also 3 or 4 years ago, and I have developed a
> favorable disposition towards her conclusions.  My white papers were
> written to keep my own thinking organized and I can look for any of the
> products I developed for this purpose as well as share my bibliographies
> with you, although some of the documents from the city college folks in
> England were given to me as a professional courtesy and these references
> might be listed but not available for re-distribution according to my
> agreement.
>
>
>
> Let me know if any of this would be useful to you.  It will take me a week
> or two to relocate the digital versions of this stuff.
>
>
>
> andy
>
> *From:* systemsafety-bounces at lists.techfak.uni-bielefeld.de *On Behalf
> Of* Matthew Squair
> *Sent:* Monday, January 18, 2016 7:43 PM
> *To:* systemsafety at lists.techfak.uni-bielefeld.de
> *Subject:* [SystemSafety] Functional hazard analysis, does it work?
>
>
>
> A question to the list.
>
>
>
> Does the process of functional hazard analysis 'work' in terms of
> identifying all functional hazards that we are, or should be, interested
> in?
>
>
>
> The way the FHA process is defined in the various standards seems IMO to
> be very reductionist in nature, fine for identifying the specific
> consequences of a single functional failure mode, but what about functional
> interactions, multiple functional failures, the interaction of modes with
> functions and so on.
>
>
>
> The background to this is that the project I'm working with is about to
> commit to a significant campaign of 'FHA'-ing. So we're engaged in a little
> bit of professional navel gazing about the efficacy of the technique before
> we commit to the campaign.
>
>
>
> --
>
> *Matthew Squair*
>
>
>
>
>
> BEng (Mech) MSysEng
>
> MIEAust CPEng
>
>
>
> Mob: +61 488770655
>
> Email: MattSquair at gmail.com
>
> Website: http://criticaluncertainties.com
>
>
>
>
> _______________________________________________
> The System Safety Mailing List
> systemsafety at TechFak.Uni-Bielefeld.DE
>
>

