[SystemSafety] Bounded rationality or ignorance?

Peter Wilkinson Peter.Wilkinson at noeticgroup.com
Fri Oct 12 00:38:06 CEST 2018


You might add the "availability heuristic" in relation to rare events, which makes (or should make) the use of risk matrices largely redundant.

Peter Wilkinson
General Manager – Risk | Noetic Group
        
Ph: +61 2 6234 7777 | Mob: +61 4 3418 4025 
peter.wilkinson at noeticgroup.com | www.noeticgroup.com




I acknowledge the traditional custodians of the lands and waters where 
I live and work, and pay my respects to elders past, present, and future.

The information contained in this email may be confidential or commercially sensitive. If you are not the intended recipient, any use, disclosure or copying of this document is unauthorised. If you have received this document in error, please delete the e-mail and immediately notify the sender.

-----Original Message-----
From: systemsafety <systemsafety-bounces at lists.techfak.uni-bielefeld.de> On Behalf Of Matthew Squair
Sent: Friday, 12 October 2018 9:28 AM
To: Peter Bernard Ladkin <ladkin at causalis.com>
Cc: systemsafety at lists.techfak.uni-bielefeld.de
Subject: Re: [SystemSafety] Bounded rationality or ignorance?

When it comes to making decisions about risk, there are a number of cognitive biases that can affect our decisions. For example:

- Confirmation bias (as Peter mentioned),
- Omission neglect,
- The narrative fallacy,
- Availability bias, and
- Framing effects / prospect theory.

The less information we have, the stronger these biases can become, which leaves us with a definite problem when we try to reason about rare events.

In effect, the rarer the event, the more bounded we are in our rationality.


Regards, 

> On 12 Oct 2018, at 1:10 am, Peter Bernard Ladkin <ladkin at causalis.com> wrote:
> 
> 
> 
> On 2018-10-11 14:59, Olwen Morgan wrote:
>> 
>> 
>> Here's an example of .....: A project has decided to use MCDC 
>> coverage as the test coverage domain to be used in unit testing. The 
>> aim is to get 100% MCDC coverage of developed code units. Owing to 
>> slippage, a manager decides that to make up for lost time, the project will stop unit testing when 80% MCDC coverage has been achieved.
>> 
>> Here we have (typically) a manager who does not realise the risks
>> involved in settling for only 80% coverage. Is this a "cognitive limitation" or just ignorance?
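
[Editorial aside: for readers unfamiliar with MC/DC, here is a minimal sketch of what the quoted scenario gives up. The function, condition names, and test vectors below are hypothetical and chosen only to show the mechanics; this is an illustration, not the project under discussion.]

/* Hypothetical shutdown guard -- illustrative only. */
#include <stdbool.h>
#include <stdio.h>

/* Decision with three conditions: D = (over_pressure && sensor_ok) || manual_override */
static bool shutdown_required(bool over_pressure, bool sensor_ok, bool manual_override)
{
    return (over_pressure && sensor_ok) || manual_override;
}

int main(void)
{
    /* A minimal MC/DC test set for a three-condition decision needs four
     * vectors (n + 1). Each condition is shown to independently affect the
     * outcome by a pair of vectors that differ in that condition alone:
     *   rows 0/1 -> over_pressure, rows 0/2 -> sensor_ok, rows 2/3 -> manual_override.
     */
    struct { bool p, s, m, expected; } v[] = {
        { true,  true,  false, true  },
        { false, true,  false, false },
        { true,  false, false, false },
        { true,  false, true,  true  },
    };

    for (unsigned i = 0; i < sizeof v / sizeof v[0]; ++i) {
        bool got = shutdown_required(v[i].p, v[i].s, v[i].m);
        printf("vector %u: %s\n", i, got == v[i].expected ? "ok" : "FAIL");
    }
    return 0;
}

Across a unit, stopping at 80% MC/DC coverage means roughly one condition in five is never shown to independently affect any decision outcome; a logic fault hiding behind such a condition (say, sensor_ok inverted) can pass unit testing undetected.
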
> 
> What about a case of acting on confirmation bias? There are only risks 
> if you believe that the software is going to fail tests and you are trying to find the tests it will fail.
> 
> But a manager is unlikely to believe the software is going to fail
> tests. The manager has seen the inspections and what they achieved,
> believes in his/her prowess, experienced the progress of the project in
> terms of apparently-working LOC, seen individual unit tests, and believes
> the software more or less works because he/she has seen what effort has gone into it and what has come out (working SW, at some level). It works, doesn't it? And it's HIS/HER project, what he/she is paid to do.
> 
> Testing is an overhead; no more "product" (= LOC) is thereby produced.
> But you have to do some (due diligence; besides, acceptance testing is in the contract). But surely not more than "necessary" ...
> 
> PBL
> 
> Prof. Peter Bernard Ladkin, Bielefeld, Germany
> MoreInCommon
> Je suis Charlie
> Tel+msg +49 (0)521 880 7319  www.rvs-bi.de
> 
> 
> 
> 
> 

_______________________________________________
The System Safety Mailing List
systemsafety at TechFak.Uni-Bielefeld.DE

