[SystemSafety] Technical information on Airbus A320 recall?

Les Chambers les at chambers.com.au
Sun Nov 30 22:54:05 CET 2025


Brian 
Furious agreement.
While we're on this subject, it's worth noting that complex systems can be 
subject not only to bit flips but also to bit blasts. I had firsthand 
experience of this in a tunnel project on one of Australia's main East Coast 
freeways. From memory, the chain of events was as follows:
A faulty fibre optic network connection caused periodic failures that, in turn, 
caused a supervisory computer, not programmed for bulletproof seamless network 
recovery, to blast a programmable logic controller with a series of random 
bits, one of which was the freeway shut-down control. The result: a main 
traffic artery was shut down twice in two weeks. We were lucky we did not have 
any back-of-the-queue collisions. The programmable defence against this 
problem is well known in some domains, defence among them. Weapons systems 
never launch a missile on a single bit; they always require a bit pattern: 
16 bits is good, 64 bits is better.
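
For what it's worth, here is a minimal C sketch of that kind of command 
validation. The signature value, field names and sequence check are mine, 
invented for the example, and are not taken from any real weapons or tunnel 
control protocol:

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical 64-bit arming signature; the value is illustrative only. */
#define SHUTDOWN_SIGNATURE 0xA5C3F00D5EEDBEEFULL

typedef struct {
    uint64_t signature;   /* must match SHUTDOWN_SIGNATURE exactly      */
    uint32_t sequence;    /* monotonically increasing message counter   */
    uint32_t crc;         /* CRC over the frame (check not shown here)  */
} shutdown_cmd_t;

/* Accept a shutdown only when the full bit pattern matches and the frame
   is fresh; a burst of random bits cannot plausibly satisfy both tests. */
bool shutdown_command_valid(const shutdown_cmd_t *cmd, uint32_t last_sequence)
{
    if (cmd->signature != SHUTDOWN_SIGNATURE)
        return false;                   /* wrong pattern: reject          */
    if (cmd->sequence <= last_sequence)
        return false;                   /* stale or replayed frame: reject */
    return true;                        /* a real system would also check crc */
}

The point is simply that a safety-critical action should require a deliberate, 
structured message, not a single bit that noise can flip.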
The lessons learned from this incident led me to add a few questions to my 
safety audit list.

1. Command Validation: Do you implement Safety-Critical triggers using complex 
bit patterns (masks or unique signatures) rather than single boolean flags, to 
ensure random noise cannot simulate a valid command?

2. Input Plausibility: Since nature abhors step changes, do you implement 
"physical plausibility" logic? (i.e., filtering out sensor inputs that jump 
on microsecond timescales in ways that are physically impossible for 
variables like temperature or flow).

3. Output Sanity: How do you guard against "unnatural" step changes in 
controller outputs? Do you employ slew-rate limiting to ensure the controller 
doesn't demand physically damaging responses? (A rough sketch of the checks 
behind questions 2 and 3 follows.)
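
By way of illustration, here is a minimal C sketch of the kind of plausibility 
and slew-rate checks I have in mind. The function names and limits are 
invented for the example; they are not from any particular standard or product:

#include <math.h>

/* Hypothetical plausibility filter: reject a sample whose rate of change
   exceeds what the physical variable can actually do. max_rate is in
   engineering units per second, dt in seconds. */
double plausible_input(double previous, double candidate,
                       double max_rate, double dt)
{
    double rate = fabs(candidate - previous) / dt;
    if (rate > max_rate)
        return previous;   /* physically impossible jump: hold last good value */
    return candidate;
}

/* Hypothetical slew-rate limiter: whatever the controller demands, the
   actuator command moves by at most max_step per control cycle. */
double slew_limited_output(double last_output, double demand, double max_step)
{
    double delta = demand - last_output;
    if (delta >  max_step) delta =  max_step;
    if (delta < -max_step) delta = -max_step;
    return last_output + delta;
}

Holding the last good value on an implausible sample is only one policy; 
substituting a model-based estimate, or dropping to a defined safe state and 
alarming the operator, are equally defensible. The choice should itself be a 
documented safety decision.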

My understanding is that an A320 experienced an uncommanded, limited 
pitch‑down / altitude drop that required an emergency diversion, which 
indicates an "unnatural" step change in a controller output. I'm surprised 
that this could happen in aviation, which is typically the gold standard in 
Safety-Critical systems design. I guess that in the fullness of time, all 
secrets will be revealed.

Cheers
Les

> On 30/11/2025 14:07, Prof. Dr. Peter Bernard Ladkin wrote:
> > Which doesn't help answer David's question of how a pure *software* 
> > change can affect SEE susceptibility.
> 
> Software cannot prevent SEE, but it can defend against the effects of an 
> event.  I have used defensive programming extensively to provide 
> resilience in single-channel systems.  To me, the more interesting 
> question is why the flight control system is susceptible to a single 
> event.  Hardware redundancy is usually the protection against this type 
> of potential failure.
> 
> Brian Jepson



--
Les Chambers
les at chambers.com.au

https://www.chambers.com.au
https://www.systemsengineeringblog.com

+61 (0)412 648 992



