[SystemSafety] AI in self-driving cars? What are they thinking?

Les Chambers les at chambers.com.au
Thu Feb 3 02:11:53 CET 2022


Hi All
For a hair-raising update on the current ‘state of the art’ in self-driving cars, Google “YouTube CNN tests a ‘full self-driving’ Tesla” and hang onto your seat.
I’ll leave it to the list to draw its own conclusions on AI’s utility in driving a land vehicle.
Here are mine:
Dangerously immature. AI technology lacks the maturity to be used in any safety critical control loop. It must be banned immediately.
Dangerously unregulated. A road vehicle on a Brooklyn street, piloted by a journalist, fitted with a beta-test version of safety-critical software and exhibiting the dangerous failure rate demonstrated in the video, represents a regulatory failure reminiscent of the FAA and the 737 MAX.
Dangerously fanciful hazard reduction assumptions. The notion that ‘a human in the loop’ can compensate for these failures, rendering the system acceptably safe, is fanciful. It is well known that over-dependence on a human to diagnose and rectify complex problems in high-stress situations in real time ends in tears. Everything our profession has learned about avoiding cognitive overload has been ignored. “Keep your hands on the wheel”?? Spare me. In my work on intelligent traffic management systems I came to understand how seriously road authorities take the act of shutting down a freeway. They avoid it where possible because drivers lapse into a somnambulistic state at constant traffic flow rates in cruise control, with no episodes requiring evasive action. In the time it takes to wake up, detect stationary vehicles ahead and apply the brake, a vehicle can travel a substantial distance at freeway speeds (a back-of-envelope calculation follows these conclusions). Hence ‘back of queue’ collisions are common. Even in this environment, a best case for autopilots, I don’t relish the thought of driving a car that behaves like a teenage learner driver.
Dangerous ignorance of the functional safety engineering discipline. The motor vehicle industry seems to be holding the past 40 years’ progress in safety-critical systems engineering in utter contempt (the actors developing these systems may have been unaware of it in the first place). Computer science is being deployed in these vehicles directly, without the necessary detour through systems engineering (probably because no AI-specific safety engineering process exists as yet). The developers are dosed up on texts such as Russell & Norvig, Artificial Intelligence: A Modern Approach; all science and no engineering. For example, Stuart Russell gave last year’s BBC Reith Lectures, “Living With Artificial Intelligence” (available online and highly recommended; Stuart is a brilliant speaker). One of his stated problems is, “We don’t know how to specify an AI yet”. Spoken like a true computer scientist (which is what he is). An engineer would have taken the proposition a step further: “… therefore we are unable to validate an AI.”
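
A back-of-envelope illustration of the reaction-time point above. The speed, reaction time and deceleration figures in this little Python sketch are my own illustrative assumptions, not measured data; the point is only the order of magnitude.

# Rough sketch: distance covered while a drowsy driver wakes up, perceives
# the stationary queue ahead and brakes. All figures are illustrative assumptions.
speed_kmh = 110.0        # assumed freeway speed
reaction_time_s = 2.5    # assumed wake-up + perception + brake-application time
decel_ms2 = 7.0          # assumed hard-braking deceleration on a dry road

speed_ms = speed_kmh / 3.6
reaction_distance_m = speed_ms * reaction_time_s       # travelled before braking even starts
braking_distance_m = speed_ms ** 2 / (2 * decel_ms2)   # v^2 / (2a) once the brakes are on

print(f"Reaction distance: {reaction_distance_m:.0f} m")           # roughly 76 m
print(f"Braking distance: {braking_distance_m:.0f} m")             # roughly 67 m
print(f"Total: {reaction_distance_m + braking_distance_m:.0f} m")  # roughly 143 m

On those assumptions the vehicle covers the better part of 80 metres before the brakes are even applied, and well over 100 metres before it stops. Hence the back-of-queue collisions.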

Why is such a hard won and highly effective engineering discipline being disrespected so? I offer some root causes:
HUBRIS surrounding electric cars in general and the BISC factor in particular.
(BISC: but it’s sooo cool - usually uttered with forearms raised, fists clenched and moving laterally at high frequency). 
The systemic failure modes triggered by hubris are:
A. JUST DO IT! Under pressure from innovation-crazed governments, underfunded and technologically underpowered regulators look the other way. Transparently unsafe designs are visited on the public.
B. IGNORANCE! Developers don’t look at all - no one told them they had to. Well understood safety engineering processes routinely used in parallel universes (aviation, rail, chemical processing) are not applied.
Note that within the next 5 to 10 years the planet will move on its axis under the weight of the greatest volume of cash flow ever experienced as Apple announces its ‘autonomous self-driving vehicle’. You have a MacBook Pro, an iPad, an Apple Watch; you must need an Apple car. Kevin Lynch, Apple’s current software development manager, has no safety-critical software engineering credentials. He cut his teeth on Adobe Dreamweaver and the Apple Watch. Ask him for his hazard log. “Say what boy?”
THE TECHNOLOGY TAIL WAGS THE PROCESS DOG. Tesla is a case study of an attempt to fit a development process to a technology that is not fit for purpose. Immature neural network technology renders the vehicle’s environment only partially observable. Further, establishing truth in sensing is stochastic rather than deterministic. There are no deterministic switch contacts, temperatures, pressures or flows; only a probability that you are heading for a telephone pole or an oncoming UPS van (see video). Picture yourself in a train control system design review, a room full of hard-core players expecting failure stats in the 10^-6 to 10^-9 range, and you turn up with, “Aw shucks, we’re 80% sure our packed 20-carriage passenger train is NOT about to travel through a stop signal. Let’s deploy the software in everyday use and see if it works.” You’d be laughed out of the room (a rough sketch of the orders-of-magnitude gap follows these root causes). Yet the new-age auto industry players seem to think this is OK. Tesla offers a not fully validated version of self-driving as an expensive option. Have we got another “aw shucks” here? “Hey, it sorta, kinda works, but keep your hands on the wheel just in case.” Auto industry engineering standards seem to have backslid by orders of magnitude through rank ignorance. The whole concept of system validation has been ditched! The regulators seem to be asleep at the wheel.
NEURAL NET IS A BAD METAPHOR IN THE SAFETY CRITICAL CONTEXT. Attempting to model the human brain’s capability for situational awareness with a neural net is a bridge too far. It reminds me of the early stages of manned flight where we had to get rid of the notion that aircraft wings had to flap. The breakthrough came with the realisation that the aerofoil was the core area for study. We need to find the analog of the aerofoil in brain simulation. That is, find metaphors that inform embedded pattern recognition AI designs that are deterministic rather than  stochastic. This will take a while. Some say it’s in the same bucket as nuclear fusion; thirty years away and always will be.
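
A rough sketch of the orders-of-magnitude gap referred to above. The decision rate here is an assumption chosen purely to make the arithmetic visible, and of course not every wrong perception decision is a dangerous failure; this is a sketch, not a failure analysis.

# Rough sketch: an "80% sure" perception decision rate versus a 10^-6
# dangerous-failures-per-hour target. All figures are illustrative assumptions.
decisions_per_hour = 10 * 3600     # assume ~10 safety-relevant perception decisions per second
p_wrong_per_decision = 0.20        # "80% sure" means a 20% chance of being wrong
target_failures_per_hour = 1e-6    # the sort of figure a rail or avionics review expects

expected_wrong_per_hour = decisions_per_hour * p_wrong_per_decision
print(f"Expected wrong decisions per hour: {expected_wrong_per_hour:,.0f}")   # about 7,200
print(f"Target dangerous failures per hour: {target_failures_per_hour:.0e}")
print(f"Shortfall: roughly {expected_wrong_per_hour / target_failures_per_hour:.0e} times")

Even granting that only a tiny fraction of wrong perception decisions could ever lead to a hazard, the starting point is many orders of magnitude away from where a rail or avionics design review would expect the numbers to finish.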

SOLUTIONS
I know I’m barking at the moon here. AIs will continue to be deployed in control systems regardless of my objections UNLESS regulators wake up to the seriousness of this situation and:
DO THEIR JOB AND REGULATE!
PROHIBIT THE DEPLOYMENT OF ANY AI THAT CANNOT BE VALIDATED IN A SAFETY CRITICAL CONTROL SYSTEM
Right now this means all of them.

PS:
If you’ve detected a level of passion in the above you’d be RIGHT ON! Call me a relic of 20th century life-critical engineering, but I’ve seen this all before. In an accident of history, when I graduated from university I fell in with a bunch of American chemical engineers who had just begun to control chemical reaction kinetics directly with software. PDP-8s, assembler language, no code reviews, no hazard analysis, control programs tested on the plant (no off-line testing). All this in the midst of the most stringent and effective safety culture I have ever experienced in a 46-year engineering career. How did we get away with it?
BISC, plus the fact that senior management were computer illiterate; they had no concept of the risks we were taking. Frankly, neither did we. Functional safety engineering did not exist circa 1970. Personally I was having a fun time applying the advanced control algorithms that computers enabled. Our work was considered a black art. People pointed me out at company functions: “Oh, so that’s the wizard.” Above all IT WAS SO COOL!
We got results. Product quality was up. Plant down time was down.
There were no safety incidents traceable to computer control. A combination of good management and good luck. Some near misses were swept under the carpet.
THEN … one of our number went too far! Having lined out the plant-embedded direct digital control system we started networking a supervisory computer for production reporting. You could also implement a more user-friendly plant operator interface with these machines.
THEN … we attached modems to these computers so the plant engineer could monitor plant performance in real time from his living room.
BUT … in the best BISC tradition it was also possible to change controller set points from your living room.
SO pleased with himself was the actor who routinely did this that he presented his brilliance to the company’s crusty old corporate engineering manager
AND was told to “STOP THAT NOW BOY!”
It turns out that safe operation of a chemical plant requires on-site observability of plant status.
REGULATION HAD FINALLY ARRIVED to curb the excesses of me and my BISCy mates.

Abstracting this experience and projecting it onto the auto industry of 2022:
Auto industry regulators seem to have no visibility of the risks companies like Tesla are taking with human life. Risks that would be unacceptable in most other industry sectors.
Self-driving system developers are as ignorant of safe software engineering process as was my cohort circa 1970. Our excuse was: “aw shucks it was not codified then.” WELL IT CERTAINLY IS NOW!
Recent self-driving fatalities indicate the problem is worse than it was back then. In those days mere proximity to a death at work would end your career. Fast forward to the recent past and you witness Elon getting away with responses to Tesla crashes such as, “There are 1.2 million automotive deaths per year …”; essentially, “s**t happens. Suck it up.” The narrative people like Elon are attempting to promulgate is: “banning my software crushes innovation … some deaths are unavoidable (read: acceptable).” The professional engineer’s response (in case anyone has forgotten) is: THE ENGINEER’S DUTY IS TO APPLY SCIENCE FOR THE BENEFIT OF MANKIND. THE ONLY ACCEPTABLE DEATH RATE ATTRIBUTABLE TO A HUMAN-ENGINEERED SYSTEM IS ZERO. Tesla owners are not in your army, Elon. They did not sign up to die for Tesla Motors.

One characteristic of my 1970s control systems which does not project onto the present is visibility. Back in the day it was excellent. The promise of more precise control triggered a step change in spending on highly deterministic instrumentation and final control elements. Thirty percent of total plant capital cost was a good rule of thumb. 
In contrast, the visibility in so-called state-of-the-art autopilots is so bad they might as well be blind. Yet regulators continue to allow the unsuspecting public to drive these vehicles with software capable of killing them, in the doesn’t-quite-work-yet state common in beta test. WHAT!!

SO I ask you, like Diogenes the Cynic, who wandered the sunlit streets of Athens, lantern in hand, looking for an honest man: where can I find a regulator with the integrity, passion and courage to grab Musk and his ilk by the lapels and direct them to
STOP THAT NOW BOY!

Thank you 
Les

Managing Director
Chambers & Assoc. Pty Ltd.    