[SystemSafety] FW: How safe is safe?

Peter Bernard Ladkin ladkin at rvs.uni-bielefeld.de
Tue May 7 19:00:58 CEST 2013


Andy,

On 7 May 2013, at 17:24, "Loebl, Andy" <loeblas at ornl.gov> wrote:
> I ran across this article today and while it discusses generalities, I thought it might be useful to some of us. 

It should be read in conjunction with what appeared to be a deliberately provocative previous article, which described the Fukushima Dai-ichi accident as a "success". Dick Selwood sent me the link, and Ken Hoyme, a co-designer of the Boeing 777 AIMS databus, sent it to the ProcEng list which we run. No one who said anything was particularly impressed by the arguments.

One engineering safety issue is as follows. The so-called "design basis" did not fit the environment in which the plant was placed. The operators of the plant had been repeatedly informed of this over the course of at least the last decade, as had the regulator. Instead of revisiting the arguments for the operational safety of the plant, and seeing what could be done (for example, moving electronics and emergency electrical equipment out of the basement to somewhere where it was unlikely to be flooded if the defences were overwhelmed), the operator and regulator appear to have all but ignored this information.

It's really hard to beat having a sociologist (Charles Perrow) say explicitly, in a well-read book in 2007, that the design is susceptible to accident scenarios through flooding, and then having it happen four years later, exactly as he said.

> I am particularly interested on the statement about probabilistic risk assessment.  I do not much believe in the method because, to me, it seem like it merely reaffirms qualitative judgment and masks that with some assignment of numbers so it can look like mathematics or statistics. 

In this case I would agree that a PRA doesn't necessarily exhibit the reasoning you want to perform, which is to recognise the hazards (rather than ignore them when they are pointed out to you) and mitigate them if possible (such as moving the electronics and emergency generation capability somewhere higher).

> Again, I post this not for deep debate but for interest and to get feedback, perhaps again, on alternatives to a PRA approach.  The various agencies of the U.S. government seem to have faith in PRA and have methods for its employment. 

PRA is a general term, covering all sorts of sins, of which the way the USG uses it may be one (or not :-) ).

There has to be some way of assessing the likelihood of hazards. Suppose I cross the road without looking. What are the chances, broadly speaking, that a vehicle hits me? I worry about it, and look. Rather than me, what about a child who may not focus on the danger? We talk about it with our children, as do the police safety people in their schools. Do I worry that, when I get in my car and turn the steering wheel, all the atoms in the steering mechanism will jump the other way at that moment and steer me instead into the roadside wall? Not really. The difference is the frequency with which these events happen, related to the likelihood with which they may happen. Looking at numbers helps you decide. I don't think we would want to move away from that.

What we want to move away from are misleading assessments. I would be wrong (I say with some confidence, given the current state of knowledge) to worry about the atoms in my steering all manifesting in the "wrong" direction at once. I would not be wrong to worry about children running into the street in my neighborhood, and how to reduce the speed of transient motorised traffic to mitigate the consequences. 

> I think it has gotten such recognition because it seems a rather simple method and because it is not expensive to undertake. 

There are many methods which fit within the rubric of PRA. I don't think many of them are simple. 
Let me take the commercial-air-transport AMC variety. Talk to people who have had to do the 10^(-9) (or 10^(-8), or even 10^(-7)) dance for the FAA and EASA and ask them whether they would consider it "simple"! One might have worries about ETOPS qualification, but no one has yet gone down due to dual engine failure on a twin over the oceans (Air Transat had a go at "no engines over the ocean" in 2001, a month before 9/11, but that was a much more complicated set of circumstances; there was no "engine failure" in the ETOPS sense). On the plus side, no one has had a wing break on a commercial transport in many, many decades, a testimony surely to some success of this approach in the Acceptable Means of Compliance. I am sure the Farrells have volumes more on this than I do, and they are here too.
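For readers unfamiliar with the 10^(-9) figure, a back-of-envelope sketch may help show why it is taken as "acceptable" for catastrophic failure conditions. The fleet-exposure number below is an assumption for illustration only, not a figure from this post or the AMC material:

```python
# Back-of-envelope arithmetic behind a 10^-9 per-flight-hour requirement.
# The fleet exposure figure is an assumed round number for illustration.

p_per_hour = 1e-9            # max probability of a given catastrophic
                             # failure condition, per flight hour
fleet_hours_per_year = 5e7   # assumed worldwide fleet exposure (flight hours/year)

expected_events_per_year = p_per_hour * fleet_hours_per_year
print(expected_events_per_year)   # 0.05 -> roughly one expected event in 20 years
```

The point is simply that the requirement is calibrated against total fleet exposure, so even "astronomically small" per-hour probabilities must be argued carefully.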

The material on Fukushima continues to be provocative, likely because of the many special interests involved. When one judges the A380 engine failure to be a "success" (for the airframe, not for the engine), there is a wealth of material on design and procedures, on the amount of attention paid to hazards and their mitigation, and on alternative control paths and control mechanisms, to justify that judgement. When some academic tells a journal that the Fukushima accident was a "success", a cursory check on the amount of attention paid to hazards and their mitigation leads to quite a different conclusion, in my view.


>  
> How Safe Is Safe Enough?
>  
> Charles Murray, Senior Technical Editor, Electronics & Test
> 5/6/2013
> Indeed, there’s something cold about it. When we pointed out that the Fukushima Daiichi nuclear powerplant was originally designed for an 8.2-level earthquake a couple of weeks ago, some readers were incensed. Japan, they said, has a long history of earthquakes and its utilities should have been prepared for a 9.0. “Any designer who fails to look at the 100-year environment is failing to meet the canon of ethics,” noted one commenter on our website...........

This comment is surely selected because it is easy to criticise. If someone had said "Any operator who ignores for years, or decades, the information given to them by renowned seismologists and tsunami experts pointing out that the design basis is flawed, is failing.......", the journal could not have left it uncommented, especially since they had proposed that the accident was a "success".

> ”Maybe you’re asking the wrong question,” Muller told us, when we asked how much utilities should have been willing to spend to beef up the Fukushima plant for a 9.0 earthquake. “Instead of asking how much you’re willing to spend, maybe you should ask what to spend it on.”

Neither Professor Muller nor the journal gives the obvious answer. They should have spent it on ensuring that the emergency equipment was located where it could not be flooded, as suggested in the 1990s by Dave Lochbaum, and as implicit in Charles Perrow's 2007 observation that the design was susceptible to flooding.

PBL

Prof. Peter Bernard Ladkin, University of Bielefeld and Causalis Limited