[SystemSafety] NTSB report on Boeing 787 APU battery fire at Boston Logan

Mike Ellims michael.ellims at tesco.net
Fri Dec 5 11:52:45 CET 2014


Matthew Squair wrote:

 

“The fruitful question IMO is what were the organisational, regulatory and cognitive factors at work to make them disregard it?”

 

I suspect that this observation may be key in a number of ways, and it is central to several high-profile hazards/accidents we have seen this year besides this particular battery issue. To be specific, I’m thinking of the following:

 

1. The issue with the ignition key at GM.

2. The issue with the emergency lubrication system on Super Puma helicopters.

3. The problems with the Toyota engine control system.

 

At some point in all of these chains of reasoning, one or more errors crept in and weren’t caught. At one level I suspect that either the institutional procedures were unable to catch and/or deal with them, or institutional inertia made it too difficult to do so. As the organization involved gets larger, these factors possibly get worse. For instance, at one point in my career I was nominally a “minion” (think cute and yellow) of the Ford Motor Company. I even had a ford DOT com email address – it never worked, I could never use it to access Ford’s SAE library, nothing I did seemed able to change that, and I stopped trying.

 

Laid on top of that are all the issues arising when developments cross a number of company and cultural boundaries, where there is certain to be some confusion as to who is responsible for what. For example, contractually the battery supplier may have had the responsibility to do the safety analysis at the cell level, and that analysis may only ever have been cursorily checked at the next level up (note I haven’t read the full report yet).

 

Laid on top of this are all the messy issues associated with cognitive factors and human nature, as pointed out by Peter Bishop’s link to the paper by Rae, McDermid, Alexander and Nicholson. For example, I suspect that in general people tend to trust each other, and as a self-selected group engineers probably tend to be more trustful/helpful than the norm, if for no other reason than that they have to work in teams and thus get along with each other (OK, well, mostly). Thus it is possible that in reviewing the FMEA for the battery there was an unintended mindset of “they have done batteries for X years, they have made zillions of them, they know what they are talking about”, which in turn led to a less rigorous review process – but it’s hard to know for certain.

 

Another issue is that as a group engineers are primed for success; that is, we want to build something. For example, there definitely exists a mindset in some engineers I’ve encountered of “we tested it so we have proved it’s safe”. People in general don’t seem to be very good at considering failure.

 

Thus I strongly suspect that our analysis, control and QA systems may be far more fragile than we expect. One idea I took from “The Limits of Expertise”, which examines a number of aviation accidents in detail, is that in many cases there is actually a lot less redundancy in a system than we expect.

 

These are major issues that as a group we don’t tend to think about very much, but they are possibly the dominant cause of accidents. Currently they are an active area of research in medicine, where we see many similar issues. Interestingly, research suggests that surgeons who expect things to go wrong and plan for failure have much higher success rates.

 

As an aside: this list is of course a self-selected group, representative of the sort of people who wouldn’t trust the change given to them by an ice-cream vendor. As such, I suspect that we may not make the same mistakes, but I wouldn’t guarantee it. I’ve been monitoring myself treating my son for over a year now and have recorded every minor glitch – disturbingly there are quite a lot of them, and I only seem to be able to minimize their number, not eliminate them. So far none have led to an incident, but I shall be quite relieved when we can ditch the aseptic procedures.

 

Cheers.

 

 

From: Matthew Squair [mailto:mattsquair at gmail.com] 
Sent: 04 December 2014 20:56
To: Peter Bernard Ladkin
Cc: Mike Ellims; The System Safety List
Subject: Re: [SystemSafety] NTSB report on Boeing 787 APU battery fire at Boston Logan

 

As you both point out there's plenty of information out there. The fruitful question IMO is what were the organisational, regulatory and cognitive factors at work to make them disregard it? Plus they (the 787 team) and the FAA completely misunderstood the value and limitations of testing in their specific context.

 

I wrote a bit on where I thought they'd lost the plot when the interim report came out. In fact I wrote quite a bit, as it's an excellent case study of experimenter's regress amongst other things, so perhaps we should really thank the players*?

 

http://criticaluncertainties.com/2013/02/12/boeings-lithium-woes-pt-ii/#more-6333. 

 

*As no one was hurt.



Matthew Squair

 

MIEAust, CPEng

Mob: +61 488770655

Email: Mattsquair at gmail.com

Web: http://criticaluncertainties.com


On 5 Dec 2014, at 3:11 am, Peter Bernard Ladkin <ladkin at rvs.uni-bielefeld.de> wrote:



On 2014-12-04 16:44 , Mike Ellims wrote:



So could the assumption have been validated as it should have been?

 

Simplest way to test this is to do a literature search.


You didn't mention Linden's Handbook of Batteries, ed. Reddy, McGraw-Hill, 4th edition 2011, 1st
edition 1984. Then there's Ley and Bro, Battery Hazards and Accident Prevention, Springer, 1994.
Then there is Daniel and Besenhard, Handbook of Battery Materials, Wiley VCH, 2nd edition 2011,
first edition 1999. As far as I can tell, they are required items on the bookshelf of any battery
person and they are on ours too. All of them deal with the issue of thermal runaway of lithium
batteries, and the first two with how this is affected by design. Then there are the many annual
conferences on batteries.

Or you can simply ask people who know. Like, any one of those authors of the thousands of papers,
some of whom are very distinguished scientists indeed, as well as being well-known.

PBL

Prof. Peter Bernard Ladkin, Faculty of Technology, University of Bielefeld, 33594 Bielefeld, Germany
Tel+msg +49 (0)521 880 7319  www.rvs.uni-bielefeld.de








