[SystemSafety] AI in self-driving cars? What are they thinking?

Phil Koopman koopman.cmu at gmail.com
Thu Feb 3 13:08:22 CET 2022


Les,

Thanks for weighing in on this topic. I too have many concerns in the 
areas you discuss. In many regards I have a front row seat to how this 
sausage is getting made. What I can say is that there are real issues to 
address, and that the situation is complicated -- as you'd expect with 
tens of billions of dollars being spent to chase a trillion-dollar 
market opportunity.

Tesla stands out from the other companies in terms of their road 
"testing" strategy and exploitation of the SAE J3016 Level 2 Loophole to 
evade regulation that I argue should apply to FSD beta. There is a broad 
range of how serious these issues might be with other companies, 
including a range of staffing capability with regard to functional 
safety. Transparency is severely lacking, so it is difficult to know 
what's really going on inside many of these companies unless you're an 
insider.

I recorded a video last week that goes into the current situation. It is 
more about the regulatory issues than the technical issues, but I have 
other materials that go over a broader view including technical issues 
for anyone who is interested in more details. I lack the time for 
extended discourse on this point right now, since I'm currently engaged 
in a number of areas trying to improve the situation, including trying 
to improve truly disappointing legislation in my home state.

Latest talk (slides, 33 minute video):
https://safeautonomy.blogspot.com/2022/01/trust-governance-for-autonomous-vehicle.html

For those who prefer a substantive read to a video:
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3969214

Multi-hour tutorial on the relevant issues including technology:
https://safeautonomy.blogspot.com/2021/06/software-safety-for-vehicle-automation.html

Many papers and talks over the last several years on this topic at my 
CMU home page:
https://users.ece.cmu.edu/~koopman/index.html

The recent UK Law Commission work on this topic looks promising:
https://www.lawcom.gov.uk/project/automated-vehicles/

For those who want to keep up on AV safety, my LinkedIn posts might be 
helpful.

Kind regards,
Phil


On 2/2/2022 8:11 PM, Les Chambers wrote:
>
> Hi All
>
> For a hair-raising update on the current ‘state of the art’ in 
> self-driving cars, Google “YouTube CNN tests a ‘full self-driving’ Tesla” 
> and hang onto your seat.
>
> I’ll leave it to the list to draw its own conclusions on AI’s utility 
> in driving a land vehicle.
>
> Here are mine:
>
>  1. Dangerously immature. AI technology lacks the maturity to be used
>     in any safety critical control loop. It must be banned immediately.
>  2. Dangerously unregulated. That the road vehicle in the video is on
>     a Brooklyn street, piloted by a journalist, fitted with a beta-test
>     version of safety-critical software, and exhibiting the demonstrated
>     dangerous failure rate represents a regulatory failure reminiscent
>     of the FAA and the 737 MAX.
>  3. Dangerously fanciful hazard reduction assumptions. The notion that
>     ‘a human in the loop’ can compensate for these failures,
>     rendering the system acceptably safe, is fanciful. It is well
>     known that over-dependence on a human to diagnose and rectify
>     complex problems in high-stress situations in real time ends in
>     tears. All our profession has learned about avoiding cognitive
>     overload has been ignored. “Keep your hands on the wheel”?? Spare
>     me. In my work on intelligent traffic management systems I came to
>     understand how seriously road authorities take the act of shutting
>     down a freeway. They avoid it where possible because drivers lapse
>     into a somnambulistic state at constant traffic flow rates in
>     cruise control with no episodes requiring evasive action. In the
>     time it takes to wake up, detect stationary vehicles ahead and
>     apply the brake, a vehicle can travel a substantial distance at
>     freeway speeds (see the back-of-the-envelope sketch after this
>     list). Hence ‘back of queue’ collisions are common. Even in this
>     environment, a best case for autopilots, I don’t relish the
>     thought of driving a car that behaves like a teenage learner driver.
>  4. Dangerous ignorance of the functional safety engineering
>     discipline. The motor vehicle industry seems to be holding all the
>     past 40 years’ progress in safety-critical systems engineering in
>     utter contempt (the actors developing these systems may have been
>     unaware of it in the first place). Computer science is being
>     deployed in these vehicles directly without the necessary detour
>     through systems engineering (probably because no AI specific
>     safety engineering process exists as yet). The developers are
>     dosed up on texts such as Russell & Norvig, Artificial
>     Intelligence - A Modern Approach; all science and no engineering.
>     For example, Stuart Russell gave last year’s BBC Reith Lecture
>     “Living With Artificial Intelligence” (available online - and
>     highly recommended. Stuart is a brilliant speaker.) One of his
>     stated problems is, “We don’t know how to specify an AI yet”.
>     Spoken like a true computer scientist (which is what he is). An
>     engineer would have taken the proposition a step further, “…
>     therefore we are unable to validate an AI.”
>
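> A back-of-the-envelope sketch (in Python) of the takeover-distance point
> in item 3 above. The speed, reaction time and deceleration figures are
> illustrative assumptions only, not measurements:
>
>     def takeover_distance(speed_kmh, reaction_s, decel_ms2):
>         """Metres travelled during driver reaction plus full braking."""
>         v = speed_kmh / 3.6                      # km/h to m/s
>         reaction_dist = v * reaction_s           # travelled before braking starts
>         braking_dist = v ** 2 / (2 * decel_ms2)  # constant-deceleration stop
>         return reaction_dist + braking_dist
>
>     # Assumed: 110 km/h freeway speed, 2.5 s for an inattentive supervisor
>     # to take over, 7 m/s^2 hard braking on dry pavement.
>     print(round(takeover_distance(110, 2.5, 7.0)))   # ~143 m to a standstill
>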
>
> Why is such a hard-won and highly effective engineering discipline 
> being disrespected so? I offer some root causes:
>
>  1. HUBRIS surrounding electric cars in general and the BISC factor in
>     particular.
>
> (BISC: but it’s sooo cool - usually uttered with forearms raised, 
> fists clenched and moving laterally at high frequency).
>
> The systemic failure modes triggered by hubris are:
>
> A. JUST DO IT! Under pressure from innovation-crazed governments, 
> underfunded and technologically underpowered regulators look the 
> other way. Transparently unsafe designs are visited on the public.
>
> B. IGNORANCE! Developers don’t look at all - no one told them they had 
> to. Well understood safety engineering processes routinely used in 
> parallel universes (aviation, rail, chemical processing) are not applied.
>
> Note that within the next 5 - 10 years the planet will move on its 
> axis under the weight of the greatest volume of cash flow ever 
> experienced as Apple announces its ‘autonomous self-driving vehicle.’ 
> You have a MacBook Pro, an iPad, an Apple Watch; you must need an 
> Apple car. Kevin Lynch, Apple’s current software development manager, 
> has no safety-critical software engineering credentials. He cut his 
> teeth on Adobe Dreamweaver and the Apple Watch. Ask him for his hazard 
> log. “Say what boy?”
>
>  2. THE TECHNOLOGY TAIL WAGS THE PROCESS DOG. Tesla is a case study of
>     an attempt to fit a development process to a technology that is
>     not fit for purpose. Immature neural network technology renders
>     the auto’s environment only partially observable. Further,
>     establishing truth in sensing is stochastic rather than
>     deterministic. There are no deterministic switch contacts,
>     temperatures, pressures or flows, only a probability you are
>     heading for a telephone pole or an oncoming UPS van (see video).
>     Picture yourself in a train control system design review redolent
>     with hard-core players expecting failure stats in the 10^-6 to
>     10^-9 range, and you turn up with an “Aw shucks, we’re 80% sure
>     our packed 20-carriage passenger train is NOT about to travel
>     through a stop signal. Let’s deploy the software in everyday use
>     and see if it works.” You’d be laughed out of the room (see the
>     sketch after this list for how those numbers compare). Yet the
>     new age auto industry players seem to think this is OK. Tesla
>     offers a not fully validated version of self-driving as an
>     expensive option. Have we got another “aw shucks” here? “Hey, it
>     sorta, kinda works, but keep your hands on the wheel just in
>     case.” Auto industry engineering standards seem to have backslid
>     by orders of magnitude through rank ignorance. The whole concept
>     of system validation has been ditched! The regulators seem to be
>     asleep at the wheel.
>  3. NEURAL NET IS A BAD METAPHOR IN THE SAFETY CRITICAL CONTEXT.
>     Attempting to model the human brain’s capability for situational
>     awareness with a neural net is a bridge too far. It reminds me of
>     the early stages of manned flight, when we had to get rid of the
>     notion that aircraft wings had to flap. The breakthrough came with
>     the realisation that the aerofoil was the core area for study. We
>     need to find the analog of the aerofoil in brain simulation. That
>     is, find metaphors that inform embedded pattern recognition AI
>     designs that are deterministic rather than stochastic. This will
>     take a while. Some say it’s in the same bucket as nuclear fusion:
>     thirty years away and always will be.
>
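> A quick sketch (Python, my own illustrative numbers) of how the “80% sure”
> figure in point 2 compares with rail-style 10^-6 to 10^-9 per-demand
> failure targets once decisions accumulate over a trip:
>
>     def p_at_least_one_failure(p_per_decision, n_decisions):
>         """Probability of at least one failure in n independent decisions."""
>         return 1.0 - (1.0 - p_per_decision) ** n_decisions
>
>     n = 1_000  # assumed number of safety-relevant decisions in one trip
>     print(p_at_least_one_failure(0.2, n))    # ~1.0: "80% sure" fails almost surely
>     print(p_at_least_one_failure(1e-9, n))   # ~1e-6: what a rail-style target compounds to
>
> The independence assumption is crude, but the gap of several orders of
> magnitude is the point.
>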
>
> SOLUTIONS
>
> I know I’m barking at the moon here. AIs will continue to be deployed 
> in control systems regardless of my objections UNLESS regulators wake 
> up to the seriousness of this situation and:
>
> DO THEIR JOB AND REGULATE!
>
> PROHIBIT THE DEPLOYMENT OF ANY AI THAT CANNOT BE VALIDATED IN A SAFETY 
> CRITICAL CONTROL SYSTEM
>
> Right now this means all of them.
>
>
> PS:
>
> If you’ve detected a level of passion in the above you’d be RIGHT ON! 
> Call me a relic of 20th century life critical engineering, but I’ve 
> seen this all before. In an accident of history when I graduated from 
> university I fell in with a bunch of American chemical engineers who 
> had just commenced to control chemical reaction kinetics directly with 
> software. PDP-8s, assembler language, no code reviews, no hazard 
> analysis, control programs tested on the plant (no off-line testing). 
> All this in the midst of the most stringent and effective safety 
> culture I have ever experienced in a 46 year engineering career. How 
> did we get away with it?
>
>  1. BISC + senior management were computer illiterate; they had no
>     concept of the risks we were taking. Frankly neither did we.
>     Functional safety engineering did not exist circa 1970. Personally
>     I was having a fun time applying the advanced control algorithms
>     that computers enabled. Our work was considered a black art.
>     People pointed me out at company functions. “Oh so that’s the
>     wizard.” Above all IT WAS SO COOL!
>  2. We got results. Product quality was up. Plant downtime was down.
>  3. There were no safety incidents traceable to computer control. A
>     combination of good management and good luck. Some near misses
>     were swept under the carpet.
>
> THEN … one of our number went too far! Having lined out the 
> plant-embedded direct digital control system we started networking a 
> supervisory computer for production reporting. You could also 
> implement a more user-friendly plant operator interface with these 
> machines.
>
> THEN … we attached modems to these computers so the plant engineer 
> could monitor plant performance in real time from his living room.
>
> BUT … in the best BISC tradition it was also possible to change 
> controller set points from your living room.
>
> SO pleased with himself was the actor who routinely did this that he 
> presented his brilliance to the company’s crusty old corporate 
> engineering manager
>
> AND was told to “STOP THAT NOW BOY!”
>
> It turns out that safe operation of a chemical plant requires on-site 
> observability of plant status.
>
> REGULATION HAD FINALLY ARRIVED to curb the excesses of me and my BISCy 
> mates.
>
>
> Abstracting this experience and projecting it onto the auto industry 
> of 2022:
>
>  1. Auto industry regulators seem to have no visibility of the risks
>     companies like Tesla are taking with human life. Risks that would
>     be unacceptable in most other industry sectors.
>  2. Self-driving system developers are as ignorant of safe software
>     engineering process as was my cohort circa 1970. Our excuse was:
>     “aw shucks it was not codified then.” WELL IT CERTAINLY IS NOW!
>  3. Recent self-driving fatalities indicate the problem is worse than
>     it was back then. In those days just proximity to a death at work
>     would end your career. Fast forward to the recent past and you
>     witness Elon getting away with responses to Tesla crashes such as,
>     “There are 1.2 million automotive deaths per year …” essentially,
>     “s**t happens. Suck it up.” The narrative people like Elon are
>     attempting to promulgate is “banning my software crushes
>     innovation … some deaths are unavoidable (read acceptable).” The
>     professional engineer’s response (in case anyone has forgotten)
>     is: THE ENGINEER’S DUTY IS TO APPLY SCIENCE FOR THE BENEFIT OF
>     MANKIND. THE ONLY ACCEPTABLE DEATH RATE ATTRIBUTABLE TO A HUMAN
>     ENGINEERED SYSTEM IS ZERO. Tesla owners are not in your army Elon.
>     They did not sign up to die for Tesla motors.
>
>
> One characteristic of my 1970s control systems which does not project 
> onto the present is visibility. Back in the day it was excellent. The 
> promise of more precise control triggered a step change in spending on 
> highly deterministic instrumentation and final control elements. 
> Thirty percent of total plant capital cost was a good rule of thumb.
>
> In contrast, the visibility in so-called state-of-the-art autopilots 
> is so bad they might as well be blind. Yet regulators continue to 
> allow the unsuspecting public to drive these vehicles with software 
> capable of killing them in the doesn’t-quite-work-yet state common in 
> beta test. WHAT!!
>
>
> SO I ask you, like Diogenes the Cynic, who went around the sunlit 
> streets of Athens, lantern in hand, looking for an honest man: where 
> can I find a regulator with the integrity, passion and courage to grab 
> Musk and his ilk by the lapels and direct them to
>
> STOP THAT NOW BOY!
>
>
> Thank you
>
> Les
>
>
> Managing Director
> Chambers & Assoc. Pty Ltd.
>
> _______________________________________________
> The System Safety Mailing List
> systemsafety at TechFak.Uni-Bielefeld.DE
> Manage your subscription: https://lists.techfak.uni-bielefeld.de/mailman/listinfo/systemsafety

-- 
Prof. Phil Koopman, koopman at cmu.edu
(he/him)  https://users.ece.cmu.edu/~koopman/
