[SystemSafety] Fwd: Contextualizing & Confirmation Bias

Peter Bernard Ladkin ladkin at rvs.uni-bielefeld.de
Thu Feb 6 10:55:12 CET 2014


On 2/5/14 11:57 PM, Derek M Jones wrote:
> and confirmation bias does not disappear just because a person's
> belief is found to agree with reality.
> 
> All of the turkeys suffered from confirmation bias, including the
> one that is pardoned.

It does, actually. When the turkeys all survive to a grand old age because the farmer likes them and has no intention of killing them, we call it a successful empirical induction.

We are discussing this topic because there has been a suggestion that confirmation bias (CB) is
somehow associated with system safety assessed via a safety-case regime.

Now, such a phenomenon would be important and worrying if true, which is why we are discussing it.
However, on closer inspection I find it unlikely that such a suggestion can be unpacked in any
ultimately meaningful way, for various reasons.

First, it seems there is a lack of clarity as to what a safety case is (necessarily so, for there are many different notions; the suggestion has to be evaluated separately for each notion, and the answer might well differ from one notion to another).

Second, CB is a psychological phenomenon which is notoriously tricky to handle conceptually, as my example of the vicar and yours of the turkeys (when you take my and Matthew's observations into account) show.

Third, since CB is a psychological phenomenon, it has to be shown to occur when individual assessors
are faced with a safety case. Now, what is it that assessors are actually faced with? They are faced
with documents consisting of rigorous and semi-rigorous arguments and supporting evidence, as well
as probably a couple of people who produced the argument in the documents. That is so whether they
are assessing those documents under IEC 61508 (in which case they are part of a safety case) or
under civil aerospace certification (in which case, according to some, they are not part of a safety
case). If any psychological phenomenon is to occur here, and lots of them do, it is going to occur
equally in both circumstances. I can't see that there could be a systematic difference on the
psychological level.

Fourth, I imagine I am the only person on this list who has been regularly teaching logical reasoning to generations of students for decades (both formal logic and so-called informal logic). I think back: have I seen any systematic influence of confirmation bias on people's ability to assess arguments? Not really, once they've learnt what we try to teach them. This impression is reinforced by the study I cited, which is the only one I found that seems to have addressed the issue.

Fifth, I work closely with people who assess safety-critical systems for a living, a half-dozen to a
dozen of them. I respect their capabilities greatly. They turn out mostly to work in a safety-case
environment. I don't see any difference between the way they approach assessment under a safety-case regime and the way such assessments are approached in, say, civil aerospace. Indeed, I
know companies, and people in companies, who present the same cases for the same kit in both
civil-aerospace certification proceedings and safety-case proceedings. (I mean, why would they be
different?) I don't see any indication of anything I could call a systematic bias.

Sixth, I do see regulation-induced biases on the level at which psychological phenomena operate, and
think I know what they look like. IEC 61508-3:2010 includes a "Route 2S" for the assessment of
previously-used SW which is claimed for a new use as "proven in use". The requirements are weak -
they say in effect "adequate documentation to show that the likelihood of any systematic dangerous
faults is low enough ..." as well as that the proposed operational environment is "sufficiently
close to" that of the previous use from which data on (lack of) occurrence of systematic dangerous
faults is taken. Now, the problem here for some of us is that most people don't know what "adequate
documentation" looks like, or how "close" is "sufficient", and when they do find out they say
"that's completely unreasonable! Nobody has that kind of material!" Assessors are put under pressure
to accept kit under weaker conditions than those necessary in the current state of the art, and
providers are faced with rejection of what they had supposed were adequate arguments on grounds
which they do not understand. Those are both regulation-induced biases. The fix is, of course, to spell out what constitutes adequate evidence in a generally comprehensible manner, which is what my colleagues and I have been working on for nearly four years now. We'd have to do that whether
the situation was a safety-case regime or not. Whether that documentation is part of a safety case
or part of written-reasons-why-this-system-is-acceptable-but-not-part-of-a-formal-safety-case (if
there is such a creature) plays no role whatsoever.
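
To give a rough sense of the scale of evidence at stake, here is a back-of-envelope sketch using the usual zero-failure statistical argument (a constant-rate model with no dangerous failures observed in the prior use); the target rate and confidence level below are purely illustrative, not taken from any clause of the standard:

    import math

    def hours_needed(target_rate_per_hour, confidence=0.95):
        # Failure-free operating hours needed to substantiate a claim that
        # the dangerous-failure rate is below target_rate_per_hour, at the
        # given one-sided confidence, assuming a constant-rate (Poisson)
        # model and zero observed dangerous failures in the previous use.
        return math.log(1.0 / (1.0 - confidence)) / target_rate_per_hour

    # An illustrative target of 1e-7 dangerous failures per hour at 95%
    # confidence needs roughly 3e7 failure-free operating hours -- about
    # 3,400 unit-years in a sufficiently similar environment.
    print(hours_needed(1e-7))   # ~3.0e7 hours

Figures of that order are one reason why, on finding out what is wanted, people say "that's completely unreasonable! Nobody has that kind of material!"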

So, if you don't mind, I'll file this utterance with those other sayings, such as "formal methods don't work" and "IEC 61508 is dangerous", that sound good to some when issuing from important mouths but whose grand meanings evaporate when you unpack them down to the daily grind.

PBL

Prof. Peter Bernard Ladkin, Faculty of Technology, University of Bielefeld, 33594 Bielefeld, Germany
Tel+msg +49 (0)521 880 7319  www.rvs.uni-bielefeld.de





