[SystemSafety] Boeing 737 Max Problems with Faulty AoA sensing
Mumaw, Randall J. (ARC-TH)[SAN JOSE STATE UNIVERSITY]
randall.j.mumaw at nasa.gov
Mon Apr 8 18:03:25 CEST 2019
I have also long been pushing the idea that the interface needs to ensure flight crew awareness when there are
conflicts across air data sensors. The 777 uses voting to select the "good" data when sensors disagree, but voting does
not guarantee that the accurate value is the one that gets through.
We have seen more than a few cases of the flight crew chasing bad airspeed data into an upset.
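The limitation of voting described above can be shown with a minimal sketch. The triplex mid-value select and the threshold below are illustrative assumptions about the general technique, not 777-specific logic:

```python
def mid_value_select(a: float, b: float, c: float) -> float:
    """Triplex voter: return the middle of three sensor readings."""
    return sorted([a, b, c])[1]

def disagreement(a: float, b: float, c: float, threshold: float) -> bool:
    """Flag when the spread across the three readings exceeds threshold."""
    readings = [a, b, c]
    return max(readings) - min(readings) > threshold

# One sensor fails high: the voter correctly passes a good value.
print(mid_value_select(5.1, 5.0, 22.0))   # 5.1
# Two sensors fail the same way: the voter passes the bad value.
print(mid_value_select(22.0, 5.0, 22.0))  # 22.0
```

The second call is the point: voting selects the majority value, not the accurate one, which is why a separate disagreement annunciation to the crew still matters.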
More generally, there seems to have been little consideration here of how this failure would be presented to the flight
crew and how they were supposed to respond to it; for example, nothing even ensures that the flight crew would be aware of sensor disagreements.
These views are contingent on further presentation of the facts of the case, and what, if any, analysis was done.
Randy
Randall J. Mumaw, Ph.D.
NASA Ames Research Center
Mail Stop 262-4
Bldg. 262, Rm. 290-B
P.O. Box 1
Moffett Field, CA 94035-0001
randall.j.mumaw at nasa.gov
randall.mumaw at sjsu.edu
(650) 604-5368 (office)
(206) 852-7405 (mobile)
On 4/8/19, 12:38 AM, "systemsafety on behalf of Peter Bernard Ladkin" <systemsafety-bounces at lists.techfak.uni-bielefeld.de on behalf of ladkin at causalis.com> wrote:
For those not up to speed, so to speak, it turns out that the aircraft in the Ethiopian accident
flight ET 302 also had a faulty AoA sensor reading.
There has been comment about Boeing's internal processes, about the increasing amount of regulatory
design oversight tasked to the company itself rather than retained at the FAA, and about FAA design
oversight itself. One claim that has appeared repeatedly, in articles in the Seattle Times and
elsewhere, is that the FAA was under time pressure to complete the certification. This was
explicitly addressed by John Hemmerdinger in an article in Flight International, p9, edition of 26
March-1 April:
> The FAA describes the Max's certification as a thorough, five-year process. "We have no reports
> from whistleblowers or any other sources pertaining to FAA technical personnel being pressured to
> speed up certification of the Boeing 737 Max," the agency says.
Five years seems to me an adequate amount of time to recertify a modified airframe; of course,
not if the project is inadequately staffed. I am not familiar with the effort required. Does it
take 50 person-years or 500?
There is a deeper puzzle here for system safety engineering. As well as a deeper worry or two.
The conclusion first: Someone screwed up the FMEA. But it is hard to understand how that might have
happened, as follows.
(1). Pitch control is obviously a flight-critical subsystem, so it needs, and will have received, an FMEA.
(2). How can you perform an FMEA on pitch control, and not consider faulty sensor input? Faulty
sensor input is the obvious fault class with which to start any FMEA. You can't just miss it out.
(3). If faulty sensor input was considered, then the ETA/Consequence analysis missed an
almost-deterministic consequence of faulty-high AoA. How on earth did that consequence get missed?
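What points (2) and (3) are asking for can be made concrete with a worksheet-style fragment. The failure modes, effects, and detection means below are a hypothetical textbook sketch of an FMEA row for an AoA input, not Boeing's actual analysis:

```python
# Hypothetical FMEA fragment for an AoA sensor input (illustrative only;
# severities and detection means are textbook assumptions, not certified data).
aoa_sensor_fmea = [
    {"failure_mode": "output stuck high",
     "effect": "flight control function commands sustained nose-down trim",
     "severity": "catastrophic",
     "detection": "cross-compare left/right AoA sensors; annunciate disagreement to crew"},
    {"failure_mode": "output stuck low",
     "effect": "stall warning/protection unavailable when needed",
     "severity": "hazardous",
     "detection": "cross-compare AoA against inertial and air data sources"},
]

for row in aoa_sensor_fmea:
    print(row["failure_mode"], "->", row["severity"])
```

Even this toy version surfaces the faulty-high mode and its near-deterministic consequence on the first line, which is why its apparent omission is so hard to understand.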
Ad 3: A more subtle question posed now twice by Steve Tockey (on a different list). The AoA sensed
value goes into an ADIRU before it gets to the SW. ADIRUs perform some filtering on the data, but
not on all (see QF72). How come the erroneous reading wasn't caught and filtered?
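The kind of filtering an ADIRU might apply can be sketched as a simple rate-of-change (spike) monitor. The threshold and hold-last-good logic are assumptions for illustration, not any actual ADIRU algorithm:

```python
def reject_spikes(samples, max_step=5.0):
    """Pass samples through, holding the last good value whenever a
    sample jumps more than max_step from it (simple spike rejection)."""
    out = []
    last_good = None
    for s in samples:
        if last_good is None or abs(s - last_good) <= max_step:
            last_good = s
        # else: spike detected, hold last_good instead of s
        out.append(last_good)
    return out

# A transient 50-degree AoA spike is held out of the output stream.
print(reject_spikes([4.0, 4.2, 50.0, 4.1]))  # [4.0, 4.2, 4.2, 4.1]
```

Note that this style of monitor only catches transients: a sensor that fails to a persistently wrong value, or one that is wrong from power-up, never produces a large step and sails straight through, which may bear on why the erroneous reading was not caught.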
What lessons can be drawn? They are not pretty.
Putting air data into an ADIRU and passing it on to SW is standard stuff on almost any modern
airplane. If it is possible to screw up (2) and/or (3), then one wonders
A. How much of the rest of the FMEAs on this airplane are equally poor? And, thereby, what else lies
in store for the occupants?
B. What other aircraft types flying have undergone an equally-poor FMEA with the air data? Are
there unjustified assumptions being made concerning the processing of air data through ADIRUs?
Airbus, for example, was careful to make known that the QF72 air data spikes had indeed been
considered, but had been judged too improbable to warrant prophylaxis. (That has since changed.)
PBL
Prof. Peter Bernard Ladkin, Bielefeld, Germany
MoreInCommon
Je suis Charlie
Tel+msg +49 (0)521 880 7319 www.rvs-bi.de