[SystemSafety] "Lack of Imagination" greatest problem with hazard analysis

Robert P. Schaefer rps at mit.edu
Fri Sep 6 16:12:50 CEST 2019


A quick take on requirements elicitation, requirements and political will:

 - a preconceived solution brooks no nay-saying, for whatever reason,
   including politically defining what is, or is not, allowed as a requirement,
   what is, or is not, part of the system to be considered,
   and what is, or is not, a hazard or risk, e.g. Brexit
 - political action invites political reaction in the contest for control,
   e.g. "yes, there is climate change" vs. "there is no such thing as climate change, vote for me!"
 - the more requirements, the greater the likelihood of contradictory requirements,
   e.g. OS security vs. OS utility
 - the cost of eliciting the full range of needs,
   e.g. what if I need 20,000 poll takers to reach all the non-responders to a census?
 - the cost of implementing expensive requirements, e.g. Boeing and redundant sensors
 - software requirements are (almost) never stated as "thou shall not,"
   because the "thou shall not" list is, literally, infinite;
   e.g. just try to implement (and test) "thou shall not ever crash" in a complex
   computer system such as AWS (a sketch of why that is untestable follows below)
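
As an aside on that last point, here is a minimal sketch (in Python, not from any of the quoted posts) of why a negative requirement can only ever be sampled by testing, never established. The component handle_request() and its hidden failure case are invented purely for illustration:

  import random

  def handle_request(payload: bytes) -> str:
      # Hypothetical component with one rare, unanticipated failure mode.
      if payload == bytes.fromhex("deadbeefdeadbeef"):
          raise RuntimeError("the 'black swan' input nobody imagined")
      return "ok"

  def test_never_crashes(samples: int = 100_000) -> None:
      # Sample a tiny, finite corner of an effectively infinite input space.
      for _ in range(samples):
          payload = bytes(random.randrange(256) for _ in range(random.randrange(1, 16)))
          handle_request(payload)  # any uncaught exception counts as a "crash"
      # A pass only says "no crash was observed in these samples"; it cannot
      # establish the universal claim "thou shall not ever crash".
      print(f"no crash observed in {samples:,} samples -- which proves nothing about the rest")

  if __name__ == "__main__":
      test_never_crashes()

A hundred thousand passing samples look reassuring, but the input space is effectively infinite, so the "thou shall not" remains unproven.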
  

> On Sep 6, 2019, at 9:33 AM, Robert P. Schaefer <rps at mit.edu> wrote:
> 
> 
> this, more or less, points to the phase of requirements elicitation, and requirements elicitation (from my experience) is a political, 
> not a solely technical, domain.
> 
>> On Sep 6, 2019, at 9:28 AM, Peter Bernard Ladkin <ladkin at causalis.com> wrote:
>> 
>> Well, just to keep us on our toes, here is another quote from Risks-31.40
>> 
>> Apparently those of us who perform hazard analysis are guilty of lacking imagination. Of a solution
>> to this issue (perhaps micro-doses of LSD?) there is no suggestion. However, there is some rather
>> implausible analysis of some airplane accidents with the root cause identified as ....... lack of
>> imagination. I'll post the URL when it comes up on the Risks Forum WWW site.
>> 
>> 
>>> Date: Tue, 3 Sep 2019 13:28:17 -0400
>>> From: "R. G. Newbury" <newbury at mandamus.org>
>>> Subject: Frequency-sensitive trains and the lack of failure-mode analysis
>>> (Re: RISKS-31.39)
>>> 
>>>> Identifying all these failure modes in advance obviously takes more
>>>> expertise and foresight -- but is that really too much to ask of the
>>>> relevant experts?
>>> 
>>> It is a lack of imagination. The 'relevant experts' are often what Nassim
>>> Taleb calls Intelligent Yet Idiot. The experts transgress beyond their
>>> expertise and wrongly (and disastrously) believe that NOTHING CAN GO WRONG,
>>> beyond what they have considered. They lack the imagination to see other
>>> scenarios. In Taleb's words, they cannot see black swans, therefore no black
>>> swan can exist.
>>> 
>>> What is actually needed in the planning/design stage is to present the
>>> unexpected scenario to people who face the real situation every day, and ask
>>> them ``X has just failed. What can happen next? What do you do? What can
>>> happen then?''  And present it to *lots of people in the relevant
>>> field*. Some one of them will likely have experienced it, or recognized it
>>> lurking just out of sight, and *not gone there*.
>> 
>> 
>> PBL
>> 
>> Prof. Peter Bernard Ladkin, Bielefeld, Germany
>> MoreInCommon
>> Je suis Charlie
>> Tel+msg +49 (0)521 880 7319  www.rvs-bi.de
>> 
>> 
>> 
>> 
>> 
>> _______________________________________________
>> The System Safety Mailing List
>> systemsafety at TechFak.Uni-Bielefeld.DE
>> Manage your subscription: https://lists.techfak.uni-bielefeld.de/mailman/listinfo/systemsafety
> 
> _______________________________________________
> The System Safety Mailing List
> systemsafety at TechFak.Uni-Bielefeld.DE
> Manage your subscription: https://lists.techfak.uni-bielefeld.de/mailman/listinfo/systemsafety


