[SystemSafety] The bomb again

Matthew Squair mattsquair at gmail.com
Tue Oct 8 07:43:14 CEST 2013


Isn't the question of whether you trust their efforts really a variant of
the agency dilemma? And isn't that exactly what 'design' of the
socio-technical system should address, and what a methodology such as
STAMP can help you do?

As far as 'safety of the bomb' goes, the design labs have actually been
quite open about what they're doing. There are some good technical papers
published, and I believe Nancy references some of them in her book
Safeware. Having reread one of these papers after the latest 'revelations',
you can see the labs' considered response to the lessons learned from
Palomares and Goldsboro. That's what the strong link/weak link S&A and 3I
(isolation, incompatibility, inoperability) design principles are all about.
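
To make the idea concrete, here's a toy sketch of the strong link/weak link
logic. It's mine, not the labs': the signal name and temperature threshold
are invented, and the real mechanisms are electro-mechanical, not software.

    # Toy model of the strong link / weak link principle (illustrative only).
    # The firing path completes only if the strong link has received the
    # unique arming signal AND the weak link is still intact. The weak link
    # is a component chosen to fail open, predictably and early, in any
    # abnormal environment such as fire.

    UNIQUE_ARMING_SIGNAL = "unique-intent-pattern"   # hypothetical stand-in

    def strong_link_closed(received_signal):
        # Closes only on the exact, deliberate intent signal.
        return received_signal == UNIQUE_ARMING_SIGNAL

    def weak_link_intact(temperature_c):
        # Deterministic physical behaviour: above a known temperature the
        # component always fails open (cf. the capacitor example below).
        return temperature_c < 150   # illustrative threshold only

    def firing_path_complete(received_signal, temperature_c):
        return strong_link_closed(received_signal) and weak_link_intact(temperature_c)

    # Deliberate use in a normal environment: the path completes.
    assert firing_path_complete("unique-intent-pattern", 20)
    # In a fire the weak link has already failed open, so even a spuriously
    # 'correct' signal can no longer complete the path.
    assert not firing_path_complete("unique-intent-pattern", 600)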

Looking at this from an HRO perspective, I'm inclined to conclude that the
only relevant difference between an HRO and any other organisation is that
an HRO 'pays attention' to warnings and doesn't push its luck. Yes, it's
absolutely true that at Goldsboro it came down to a single line of defence
using a low-reliability component. But conversely there was a
defence-in-depth strategy in place (even if flawed) that worked, and the
organisation learned from what was, as nuclear accidents go, a near miss. A
messy, chaotic and contingent organisational response, but still a response
before something worse happened.

Another point is that the designers recognised that detailed probabilistic
(i.e. risk-based) arguments for safety weren't particularly persuasive, and
deliberately chose to base the performance of their safety devices on
physical properties, because those are highly deterministic. For example,
using a capacitor as a 'weak link' component in an electrical circuit,
because the capacitor will always go open-circuit in a fire.

I do find it interesting that they thought it necessary to adopt a
determinedly deterministic design approach to achieve a probabilistic
safety target of 10^-6. That stands in stark contrast to the belief in
other high-consequence industries that one can objectively assess such
risks and have 'faith' in the numbers. Maybe there's a lesson in that?
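
A back-of-envelope calculation (my numbers, not the labs') shows why you
can't simply test your way to that kind of target, which is presumably part
of the motivation for the deterministic approach:

    # Using the standard 'rule of three': after n failure-free trials the
    # ~95% upper confidence bound on the per-trial failure probability is
    # roughly 3/n. So to demonstrate 10^-6 by testing alone you would need
    # on the order of three million failure-free, representative trials.
    target = 1e-6
    trials_needed = 3 / target
    print(f"{trials_needed:,.0f}")   # roughly 3,000,000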

To answer why the early accidents came so close to disaster in the first
place, you need to delve into conventional weapon safety design and how it
was applied to the early nuclear weapons. The key challenge, not fully
recognised at the time, was that the safety devices of a dumb bomb, such as
safety pins and keeping the firing channel 'out of line', did not
necessarily address the hazards of a weapon that needed to be powered up to
complete the firing sequence. That's not solely a problem for nuclear
safety; conventional 'smart' fuses have similar issues. Luckily it was near
misses (as nuclear accidents go) that drove a paradigm change, rather than
a nuclear catastrophe...
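
The mismatch is easy to state as a toy model (again mine, and grossly
simplified): the mechanical safeties gate one energy path, while the hazard
that mattered ran along another.

    # A dumb bomb's safeties (pins, out-of-line firing channel) physically
    # block the mechanical/explosive path:
    def dumb_bomb_can_initiate(pins_removed, channel_in_line, impact):
        return pins_removed and channel_in_line and impact

    # A weapon that must be powered up to fire adds an electrical path to
    # the firing set in which those mechanical safeties don't appear at all:
    def powered_weapon_can_initiate(firing_set_energised, fire_signal):
        return firing_set_energised and fire_signal

    # Pins in and channel out of line, yet stray power plus a spurious
    # signal (fire, shorts, crash damage) is unaffected by either safety.
    assert powered_weapon_can_initiate(True, True)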

There is, though, still the issue of the physical stockpile and whether we
have confidence that the actual weapons are built (and maintained) to the
design. You'd think that with this sort of thing QC would be very tight,
but low production numbers limit the use of techniques such as SPC, there
are problems in maintaining an industrial capability to do the work, and
there's the further problem of an ageing workforce. As test firings are
out, you're left testing components, which gives you much less confidence;
as a result, I believe Sandia has declined to certify the latest generation
of safety-device designs without a full-up weapon test.
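
For what it's worth, a rough sketch (invented data, deliberately crude) of
why small runs undercut something like SPC: with only a handful of units,
the estimated control limits move around from sampling noise alone, so the
chart tells you little about whether the process has actually drifted.

    import random, statistics

    random.seed(1)

    def measure():
        # A hypothetical, perfectly stable process.
        return random.gauss(100.0, 2.0)

    for run in range(3):
        sample = [measure() for _ in range(5)]   # a tiny production run
        xbar = statistics.mean(sample)
        s = statistics.stdev(sample)
        # Shewhart-style limits at roughly +/- 3 estimated sigma.
        print(f"run {run}: {xbar - 3*s:.1f} .. {xbar + 3*s:.1f}")
    # The same stable process typically yields quite different limits from
    # one run to the next, purely from estimating sigma on five units.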


On Tuesday, 8 October 2013, Peter Bernard Ladkin wrote:

> You know, I just have to say this. It is all very well to talk about
> engineering assessments and the dedicated people etc., etc.
>
> But I worked in the nuclear industry once. When I was in college, and
> before, as an intern. I am ultimately responsible for certain computer
> calculations that assess the structural strength of certain UK pressure
> vessels (they are more critical in the UK with the gas-cooled designs).
> That they would withstand X and X and X, airplanes flying into them and so
> forth.
>
> They were performed with some of the first finite-element code based on
> some math of Tymoshenko. Compact enough to run on IBM 360 machines with
> ...gee... a few kilobytes of main memory. Huge!
>
> My boss, obviously, checked the results to make sure they agreed with what
> he had previously intuited. And he was very hard working! And his boss knew
> that, and that no one worked harder, and took his recommendation. And his
> boss. And his boss. (This was the early 70's - all males.)
>
> Maybe that code was OK, more or less. It certainly was thought to be OK in
> so far as it agreed with the pre-calculus intuition of my boss. And his
> boss who thought he was the bee's knees. And... and so on. But since then I
> got a degree or two, and even taught numerics at UC Berkeley as a TA, to
> undergrads and grads, so I know what can go wrong and how. And I don't
> necessarily regard code which coheres with the boss's prejudice to be
> confirmation that that prejudice is accurate. I regard it more as an
> example of how people choose evidence that suits.
>
> But, as I said to John, if anyone gets up and says to some parliamentary
> committee that such-and-such reactor has a pressure vessel demonstrated
> resistant to an aircraft impact, and that pressure vessel was one I worked
> on, then expect to see a contribution from yours truly, explaining how that
> certainty was manifestly socially generated.
>
> PBL
>
> Prof. Peter Bernard Ladkin, University of Bielefeld and Causalis Limited
>
>

-- 
*Matthew Squair*
MIEAust CPEng

Mob: +61 488770655
Email: MattSquair at gmail.com
Website: www.criticaluncertainties.com <http://criticaluncertainties.com/>