[SystemSafety] HROs and NAT (was USAF Nuclear Accidents prior to 1967)

Andrew Rae andrew.rae at york.ac.uk
Mon Sep 23 11:50:53 CEST 2013


There's a phenomenon on the borderlands of science and media that doesn't
have a good label, but goes something like this ....

Someone makes a statement, typically a hypothesis about human behaviour. It
has a "strong" and a "weak" interpretation.  The weak interpretation is
something reasonable, usually completely unremarkable. It isn't
new, and most people can agree with it without a lot of evidence.  The
strong interpretation is a lot more specific, and quite controversial.
Strong evidence would be needed to support this interpretation.

Whenever the strong interpretation is attacked, the proponent of the
hypothesis falls back on the weak interpretation and asks "who can
disagree with this?". As soon as the criticism goes away, they revert to
making the strong interpretation again, with no better evidence.

HRO is a fairly typical example of this phenomenon. I don't think anyone
disagrees with the weak interpretation, that organisational attitude,
behaviour and structure have an influence over safety. Even naming
particular organisational characteristics as likely to contribute to safety
isn't controversial. The characteristics that HRO chooses had all been
pointed out before, and many of them are simply basic safety management.

The strong interpretation is that there is empirical support for naming a
particular set of characteristics as the most important, and that this
support comes from identifying particular organisations as safety
over-achievers. You can't support this strong interpretation via the weak
interpretation. The weak interpretation is a _fallback_ position that
requires abandoning the strong interpretation. What's left is not HRO.  It
is exactly the same space that Normal Accidents, Disaster Incubation
Theory, HROs, Vulnerable System Syndrome, and (tangentially) STAMP, have
been trying to fill. We know that organisation structure and attitude
matters, but we don't have a successful model for how it matters. (I'm
deliberately avoiding a definition of "successful" here. Choose one from
reliable/repeatable, makes accurate predictions, is practically useful for
safety management).   I put STAMP tangentially into that list because it is
oriented more towards "practically useful" than "has explanatory power".
Each model deserves to be evaluated against its own claims.

Normal Accidents does something very similar to HRO in terms of strong/weak
interpretations. It builds a not-particularly-controversial foundation
about structural causes of accidents (raising some genuinely new insights
along the way), but then draws much more specific and controversial
conclusions. You can agree with everything in the book up to the final
chapter, and still disagree with the take-away actions. The strong evidence
required to support the take-away actions is not what the rest of the book
provides - it is entirely illustration of the not-controversial,
less-specific principles.

The strong/weak interpretation phenomenon is what leads to really messy
arguments, particularly when the ideas are summarised and popularised.
Q: "Do you agree with Normal Accidents?"
A: "All but the 5% of it that you are probably thinking of when you asked
the question".



My system safety podcast: http://disastercast.co.uk
My phone number: +44 (0) 7783 446 814
University of York disclaimer:
http://www.york.ac.uk/docs/disclaimer/email.htm


On 23 September 2013 09:26, ECHARTE MELLADO JAVIER <
javier.echarte at altran.com> wrote:

>  (Lost, as in, they're still out there somewhere. One in a swamp in one
> of the southern states, one in the water off the coast of Greece, I
> think.).
>
> It was in Spain.
>
> http://en.wikipedia.org/wiki/1966_Palomares_B-52_crash
>
> In fact a very funny incident… with our prime minister on the beach… in
> order to show there was no danger in the area…
>
>
> http://www.bbc.co.uk/mundo/noticias/2013/01/130116_espana_palomares_bomba_perdida_cch.shtml
>
>
> *From:* systemsafety-bounces at lists.techfak.uni-bielefeld.de [mailto:
> systemsafety-bounces at lists.techfak.uni-bielefeld.de] *On behalf of* John
> Downer
> *Sent:* Sunday, 22 September 2013 21:31
> *To:* systemsafety at lists.techfak.uni-bielefeld.de
> *Subject:* Re: [SystemSafety] USAF Nuclear Accidents prior to 1967
>
>
> I can't get too sucked into this (I'm a bit overwhelmed, sorry) but I feel
> I ought to at least weigh in a little.
>
>
> *On the question of how many accidents we've had with the bomb:* It's
> definitely more than a few. I appreciate Andrew's point that bureaucracies
> often categorize near-accidents quite liberally, but I'm not sure that's
> the case here. The Fifteen Minutes book I pointed to before does a good job
> of highlighting some of the more significant near-misses and explaining
> their significance. There are lots; you start losing count after a while.
> Even Wiki has a pretty decent breakdown of some highlights <
> en.wikipedia.org/wiki/List_of_military_nuclear_accidents >. The US
> really did lose a couple of H-bombs. (Lost, as in, they're still out there
> somewhere. One in a swamp in one of the southern states, one in the water
> off the coast of Greece, I think.). And let's remember that atomic bombs
> contain a lot of *very* poisonous material. Accidents with the bomb can be
> pretty catastrophic even if they don't detonate. And these are just the
> accidents we *know* about. It's difficult to overstate the amount of
> secrecy around the bomb. A large number of the old accidents we are now
> reading about in books like the one Peter highlighted would have been
> missing from books written even a decade ago. We just didn't know about
> them until recently. We can only guess how many more there were, especially
> as we get closer to the present. I've spoken to people who are in a
> position to know, and they say there is stuff that isn't in the books.
>
>
> Plus, if we think of 'the bomb' in a wider sense, as the entire system of
> nuclear deterrence (early warning infrastructure, etc), as Sagan does, for
> instance, then we find even more accidents with even more catastrophic
> potential. There have been a bunch of occasions where institutional or
> technical mishaps and misunderstandings have (very) almost led to nuclear
> war! The crash in Alaska was one. If it had taken out the early warning
> station there, then the US would have assumed an attack was underway and
> launched its own (read Fifteen Minutes). Sagan talks about more (glitches
> in US early warning systems during the Cuban missile crisis, for instance).
> I briefly commented on a couple of others in an old blog post: <
> http://blog.nuclearphilosophy.org/?p=138 >. Nuclear history is
> existential.
>
>
>
> *On the HRO / NAT debate:* I'm not sure why the discussion went in this
> direction, but since it did, and it's interesting, I'll add my ten cents
> (or is it two? I can never remember my American colloquialisms). Basically,
> I don't think the two have to be antithetical (even if they are often
> construed that way).
>
>
> The essence of *HRO*, as far as I can see, is the argument that: (1)
> There are organizational dimensions to making complex
> socio/technical systems function safely. (2) Some systems are better at
> these tricks than others, and have invested a lot of time in figuring them
> out. And so (3): Sociologists should investigate and document the tricks of
> organizations that do safety well, to see if there are any universal (or at
> least transferable) principles that other organizations can benefit from.
>
>
> It strikes me that this is a worthwhile endeavor, even if I'm sometimes
> skeptical of its findings and the wider implications drawn from them.
>
>
> I'm a bit more wary about caricaturing *NAT* as I know Chick is on this
> list, but I'll have a stab and he can correct me if I'm way off
> base. Fundamentally I think NAT wants to say a couple of different things:
>
>
> The first is that there are more fundamental structural factors (ie:
> financial incentives) underlying any organizational practices, and it is
> misleading to think we can attend to the latter without also recognizing
> the former. I buy this, and I think a lot of HRO people wouldn't
> necessarily disagree.
>
>
> The second, and not necessarily related, point NAT wants to make is that
> there are fundamental organizational reasons to believe that *no matter
> how perfectly we design (or operate) complex, tightly-coupled systems, they
> will always be prone to some level of failure* (ie: the normal accident).
> (This has been explicitly conceded by the more prominent HRO people.) And,
> further, that there are some systems that we would think very differently
> about (and perhaps wouldn't have built at all) if we recognized this.
>
>
> I buy this argument as well. In fact I've tried to make the same point in
> my own work, albeit in a different way. And I think it is this
> argument that is most pertinent to the bomb discussion. The question it
> raises, I guess, is how safe is safe enough. I think the NAT response to
> Nancy's points would be that the bomb could be *simultaneously*: (a) an
> engineering marvel, and (b) a *really* bad idea. The infrastructure of
> deterrence almost caused thermonuclear war (ie: the end of the world) on
> several occasions. Our pursuit of nuclear energy very nearly led to the
> loss of Tokyo, and still might (ie: if the spent fuel pools go down in
> Fukushima). Personally, I'm not too comfortable with technologies that only
> "almost" devastated the world, and a bit reluctant to marvel at the
> technical brilliance that kept them (only just, and with a lot of luck)
> from doing so. I'd be more impressed if we'd declined to build them at all.
>
>
> I'm quite attached to the world. It's where I keep all my stuff.
>
>
> J.
>
>
>
> ---------
>
> Dr John Downer
>
> SPAIS; University of Bristol.
>
>
>
> On Sep 22, 2013, at 10:33 AM, Andrew Rae <andrew.rae at york.ac.uk> wrote:
>
>
>
> I don't think the article suggests that the North Carolina incident was
> unknown - the "new" information appears to be the specific quotes about **
> **
>
> the safety switches. From past revelations of this sort, I expect the 700
> incidents will turn out to be seven hundred records in a recording system
> which includes numerous handling errors, stubbed toes, incorrectly
> filled-out forms, and a few already widely discussed items of general
> safety concern.
>
> I'm cautious about reading too much into the nuclear weapons safety
> record. The big weakness of HROs as a theory is that it selects
> organisations based on how bad
>  their safety record "would have been". This sort of counter-factual
> reasoning ties my brain in knots - "we should look at what HROs are doing
> because they are safer than we would expect them to be based on what they
> are doing ... I mean based on what they are doing except for those bits
> that make them safe ... you know, they are doing some dangerous things
> which we shouldn't copy but some things that make it safe anyway that we
> should copy, and we know which is which because it fits with our
> preconceived notions of what people should do to be safe".
>
> I think that the nuclear weapons engineering community has likely done a
> lot of things right, and a lot of things poorly. Unfortunately we don't
> have enough data to use empirical methods to determine which is which. I've
> got my fingers crossed we never get that data ...
>
>
> [No, I'm not a fan of HROs or Normal Accidents. I don't think it's a
> sociologist thing though - I have a lot of time for the work of Barry
> Turner, Nick Pidgeon and John Downer. Maybe the trick is to spend just
> enough time around engineers to understand how we think, without spending
> so much time that you start to think in exactly the same ways.]
>
>
>
> My system safety podcast: http://disastercast.co.uk
> My phone number: +44 (0) 7783 446 814
> University of York disclaimer:
> http://www.york.ac.uk/docs/disclaimer/email.htm
>
>
> On 22 September 2013 14:22, Nancy Leveson <leveson.nancy8 at gmail.com>
> wrote:
>
> Sorry, I didn't read the Guardian article because I'd heard about the
> North Carolina incident 20 years ago and thought it was public knowledge as
> it is widely talked about in public forums. I'm not sure who it is secret
> from as everyone I know in the nuclear safety world knows about it. I went
> back and read the Guardian article about some "700" incidents. It will be
> interesting to find out what the author of the book is referring to. It is
> hard for me to believe there have been 700 incidents that nobody knows
> about, but perhaps the DoD is better at keeping these things quiet than
> they are about other supposedly secret incidents.
>
>
> Nancy
>
>
> On Sun, Sep 22, 2013 at 9:02 AM, Dick Selwood <dick at ntcom.co.uk> wrote:
>
> Nancy said "The fact that there was one near miss (and note that it was a
> miss) with nuclear weapons safety in the past 60+ years is an astounding
> achievement."
>
> The article in the Guardian that Peter cites makes it clear that there
> were several near-misses
>
> d
>
>
> On 22/09/2013 10:53, Peter Bernard Ladkin wrote:
>
>  While we're indulging in second thoughts....
>
> On 9/21/13 8:10 PM, Nancy Leveson wrote:
>
>
> I'm not really sure why people are using an incident that happened 54
> years ago when engineering was
> very different in order to make points about engineered systems today.
>
>
> John Downer pointed out on the ProcEng list yesterday evening that
> Schlosser also wrote an article for the Guardian a week ago in which he
> pointed out the relevance of his historical discoveries for the present,
> namely concerning the UK Trident deterrent.
>
>
> http://www.theguardian.com/world/2013/sep/14/nuclear-weapons-accident-waiting-to-happen
>
> So he seems to think it is currently relevant.
>
> For those who don't know, Trident is a US nuclear multiple-warhead missile
> carried on British-built and UK MoD-operated submarines, one of which is
> always at sea. The maintenance and docking base is in Scotland, at Faslane
> on the West Coast. Scotland is to vote on independence from GB (which will
> become LB if so) next year, and the putative government has said it will
> close the base at Faslane. Further, the Trident "so-called British
> so-called independent so-called deterrent" (Harold Wilson) replacement will
> cost untold amounts of money (we have been told, but no one quite believes
> what we have been told :-) ). Many senior politicians and a large
> proportion of the concerned public think that money would not be so well
> spent.
>
> It is obviously relevant to all these deliberations to assess how
> dangerous the old kit really is. Given recent events, which have cast US
> and UK national-security agencies in a light that has cost them the trust
> of many citizens, I would think a technical assessment such as this, made
> independently of government agencies, of matters relevant to renewing or
> revoking Trident is a welcome contribution to the debate.
>
> PBL
>
> Prof. Peter Bernard Ladkin, Faculty of Technology, University of
> Bielefeld, 33594 Bielefeld, Germany
> Tel+msg +49 (0)521 880 7319  www.rvs.uni-bielefeld.de
>
>
>
>
>
>
>
>
>
>
>
>
>
> --
> Prof. Nancy Leveson
> Aeronautics and Astronautics and Engineering Systems
> MIT, Room 33-334
> 77 Massachusetts Ave.
> Cambridge, MA 02142
>
> Telephone: 617-258-0505
> Email: leveson at mit.edu
> URL: http://sunnyday.mit.edu
>
>
>
>
>
>
> _______________________________________________
> The System Safety Mailing List
> systemsafety at TechFak.Uni-Bielefeld.DE
>
>

