[SystemSafety] AI in self-driving cars? What are they thinking?

Peter Bernard Ladkin ladkin at causalis.com
Thu Feb 3 11:11:04 CET 2022


Good to see you back and in full rant mode again, Les!

On 2022-02-03 02:11, Les Chambers wrote:
> 
> For a hair raising update on the current ‘state of the art’ in self driving cars, Google “You tube 
> CNN tests a ‘full self-driving’ Tesla” and hang onto your seat.

There are various YouTube videos matching "CNN", "full self-driving" and "Tesla". Is this the one you mean?
https://www.youtube.com/watch?v=2PMu7MD9GvI

> I’ll leave it to the list to draw its own conclusions on AI’s utility in driving a land vehicle.

You do need to assume the video is veridical. It may not be. Magicians are very good at getting you 
to focus on one thing so they can construct their tricks without you noticing. Video makers can do 
the same, and more easily.

You can pull up videos of Waymo's driverless taxi service in Arizona. That really means driverless: 
there is no one sitting at the controls; the passenger is in the back seat. There is a guy whose 
hobby is making videos of his trips. Bev brought attention to these privately in May 2021. I think 
https://youtu.be/zdKCQKBvH-A?t=757 is one.

What you don't see is what kind of remote control capability there is. There is clearly some - there 
is always a Waymo rep connected by audio to the vehicle, who responds to customer comments in real 
time and can see and do things with the display screens. They also have chase vehicles, which can 
turn up and give the vehicle a human driver. It is not shown in the videos whether there are chase 
cars in constant visual contact with emergency controls. For all we know, there might be.

Also, we don't actually know that he isn't being encouraged somehow to show everyone "independently" 
how boring and time-wasting all this carefully-implemented technology can be (as contrasted with 
running into lane barriers at speed, like some of its commercial rivals). That is surely a message 
which all commercial autonomous-vehicle outfits would really like to get across to Joe Public 
(whether it is true or not).

>  1. Dangerously immature. AI technology lacks the maturity to be used in any safety critical control
>     loop. It must be banned immediately.

Dear me! No algebraic constraint satisfaction? Fie on you! No SAT solving? Why on earth would you 
want to ban that? People have worked on SAT solvers for over 60 years now, and the best of them are 
now very, very good.

Why would you want to label these "dangerously immature"? Especially since they are not.

No road-sign-recognition or speed-limit-recognition/enforcement technology? They don't have to be 
perfect to make driving a lot safer. Gwyneth Dunwoody's Commons Transport Committee was broaching 
that 25 years ago, when it was just a dream in someone's eyes. Now it could be done. Mostly.
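
To make that "don't have to be perfect" point concrete, here is a minimal sketch of how an 
imperfect limit-recogniser could still be used conservatively. The names, threshold and fallback 
value are my own illustrative assumptions, not any vendor's actual interface:

from dataclasses import dataclass
from typing import Optional

@dataclass
class SignReading:
    limit_kmh: int      # recognised speed limit
    confidence: float   # recogniser confidence in [0, 1]

FALLBACK_LIMIT_KMH = 50       # assumed conservative urban default
CONFIDENCE_THRESHOLD = 0.9    # assumed: ignore low-confidence readings

def enforced_limit_kmh(reading: Optional[SignReading]) -> int:
    # A missing or low-confidence reading degrades towards caution,
    # so recognition errors make the car slower, not faster.
    if reading is None or reading.confidence < CONFIDENCE_THRESHOLD:
        return FALLBACK_LIMIT_KMH
    return reading.limit_kmh

def commanded_speed_kmh(driver_demand_kmh: float,
                        reading: Optional[SignReading]) -> float:
    # Enforcement is just a clamp on whatever the driver demands.
    return min(driver_demand_kmh, enforced_limit_kmh(reading))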

Ah, but then I think I remember you arguing against adherence to speed limits. On, of all things, 
safety grounds.

Digital CPUs can try to divide by zero, stop, and post an exception condition. Should we thereby ban 
them in commercial airplane control systems?
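
The analogy fits in a couple of lines. A minimal, hypothetical sketch: the known failure mode is 
trapped and handled, rather than the component being banned (the fallback value is an assumption 
for illustration):

def safe_ratio(numerator: float, denominator: float,
               fallback: float) -> float:
    # Division by zero is a known, well-characterised hazard; control
    # code guards it and degrades to a defined value rather than faulting.
    if denominator == 0.0:
        return fallback
    return numerator / denominator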

>  2. Dangerously unregulated. The reality of the road vehicle in the video being on a Brooklyn
>     street, piloted by a journalist, fitted with a beta test version of safety critical software,
>     with the demonstrated dangerous failure rate represents a regulatory failure reminiscent of the
>     FAA and the 737 Max.

Maybe. Or maybe there are safeguards not shown in the video. There could be a remote supervisor 
capable of emergency intervention. Could even be the regulators insisted on one. We don't know.

>  3. Dangerously fanciful hazard reduction assumptions. The notion that ‘a human in the loop’ can
>     compensate for these failures, rendering the system acceptably safe, is fanciful. It is well
>     known that over dependence on a human to diagnose and rectify complex problems in high stress
>     situations in real time ends in tears. 

It is called "supervisory control" and you are right that the HF aspects of supervisory control are 
well known (again, for well over half a century).

But ....

> “Keep your hands on the wheel” ?? Spare me. 

It is certainly not fanciful for a regulator to require that a driver keep her hands on the wheel. 
If she needs to take control, it saves the 1-2 seconds required to stop what she is otherwise 
doing with her hands and put them on the wheel.

It also ensures that you don't have to change from thinking about the next stitch of your crochet to 
thinking about being in a car on a road, because you're not crocheting - your hands are occupied on 
the wheel of the car already.

That, indeed, is one of the things known about supervisory control. There are ways to keep "enough 
engaged" that it enhances the chance of successful intervention. (Note the word "enhances"; not 
"renders perfect"). Keeping your hands on the wheel is one of them.

>  4. Dangerous ignorance of the functional safety engineering discipline. The motor vehicle industry
>     seems to be holding all the past 40 years' progress in safety critical systems engineering in
>     utter contempt 

Not exactly. Some decade and a half ago I gave a two-day course on IEC 61508 to a group at a large 
automotive component manufacturer. There were some very sharp people who listened patiently to all 
the trivia-mixed-with-key-stuff. And then at the end of each session asked very pointed questions. 
They were most interested in where conceptions from IEC 61508 (which was developed by 
process-industry specialists) would and would not carry over into road vehicle development and 
safety assessment and assurance. They went right to the spots where 61508 would not work for them. 
It was obvious they knew what they were talking about. A couple of years later, out comes ISO 26262, 
and it turns out that some of my interlocutors were prominent developers.

Functional safety and software systems based on machine learning are coming together. There is a 
tech report ISO/IEC TR 5469 which attempts to say how DLNNs and other machine-learning-based SW and 
functional safety may "fit together". It seems to have been on the cusp of being finalised for about 
a year now.
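
One architecture often discussed in this space (my sketch of a common pattern, not something 
prescribed by TR 5469) is to let the learned component propose and have a simple, conventionally 
verifiable monitor dispose. The bounds below are illustrative assumptions:

MAX_ACCEL_MPS2 = 2.0    # assumed comfort/safety bound
MAX_DECEL_MPS2 = -8.0   # assumed emergency-braking bound

def safe_command(ml_proposed_accel_mps2: float) -> float:
    # The DLNN may resist analysis, but this clamp does not: it is small
    # enough to verify with conventional functional-safety techniques,
    # whatever happens inside the learned component.
    return max(MAX_DECEL_MPS2, min(MAX_ACCEL_MPS2, ml_proposed_accel_mps2))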

But it is equally true that there are situational-awareness (SitAware) and decision (Dec) systems 
being built into road vehicle control which are designed and implemented by people who are not 
functional safety specialists. Management tries to separate concerns: you can spend your career 
designing and training NNs, or you can spend it doing FMEA and HazAn. Like any choice of 
speciality, it ends up being exclusive, although there is always hope that a short course in 
"the other" relevant discipline will increase sensitivity to it.

There is a similar situation with functional safety and cybersecurity. There are very few people who 
are adequately expert in both. There are lots more who are either/or.

To match your complaint that "AI-technology people" don't know much about functional safety, 
let me ask in turn how much *you* know about supervised and unsupervised machine learning, 
reinforcement learning, and DLNNs? Do you know about the experiments NASA did in the 1990s in the 
Propulsion-Controlled Aircraft and Intelligent Flight Control System projects? Pioneering ML+FS 
projects.

> For example, Stuart Russell
>     gave last year’s BBC Reith Lecture “Living With Artificial Intelligence” (available on line -
>     and highly recommended. Stuart is a brilliant speaker.) 

Let me second your recommendation.

Stuart signed my thesis. The first one he signed in his then-new job at Berkeley in his late 
twenties. Since then, he's signed a few more :-)

> Why is such a hard won and highly effective engineering discipline being disrespected so? 

I think there is a general problem here of integrating SitAware+Dec technology, practised by people 
largely coming out of a highly competitive computer-science move-fast-and-break-things background, 
with the conservative engineering move-slow-and-don't-break-anything discipline of functional 
safety. I don't expect this contrast to be resolved quickly.

Notice it has been resolved in the past in particular cases. Uber went on to SF streets with some 
questionable technology and the California DMV reacted, since Uber hadn't gone through the 
regulatory steps which DMV wanted to impose. Uber tried to put two fingers up at the DMV and lost. 
Uber is now out of self-driving vehicle technology. You could say that traditional road safety won 
that particular contretemps.

Waymo is going very carefully with its driverless taxi service. You notice it isn't in the news at 
all, whereas Uber hitting Elaine Herzberg in Tempe was all over the world in minutes to hours.

Also, do you ever hear about Aurrigo's pods in Milton Keynes? I don't think they hurt anyone either.

Tesla, as you point out, does seem to have people being hurt, and even dying, while its cars are 
under semi-automatic control. There is a general question of how the company manages to do that 
without incurring the wrath of regulators. I have little to no insight.

>  1. HUBRIS surrounding electric cars in general 

Well, yes. But then, the UK Autodrive project seems to have been as successful as it was designed to 
be. No hubris there.

> Note that within the next 5 - 10 years the planet will move on its axis under the weight of the 
> greatest volume of cash flow ever experienced as Apple announces its 'autonomous self-driving 
> vehicle.’ 

Now there's another one. Apple have been working on autonomous vehicle technology for a long time 
(well, what counts as a long time? The DARPA Grand Challenge was only 18 years ago, and none of the 
vehicles completed the course. The next year, lots of them did, and Thrun and SAIL won it with 
Stanley). Yet you speak of it in the future tense. The point being that Apple's development of this 
technology is kept well out of the news. It is not going around running into lane separators or 
driving its occupants under trucks.

>  1. THE TECHNOLOGY TAIL WAGS THE PROCESS DOG. 

Well, yes. There are certainly some "players" who give that impression. But contrast Waymo, Apple 
and Aurrigo.

>  2. NEURAL NET IS A BAD METAPHOR IN THE SAFETY CRITICAL CONTEXT. 

I think you are way off base here. Nobody is trying to mimic human brain capability and then put 
that in cars. Computer chess programs first started to beat grandmasters when they stopped trying to 
mimic human decision processes and devised their own. That process is well-described in Hsu's book 
Behind Deep Blue (Princeton U.P. 2002).

Similarly, the SitAware+Dec SW modules in wannabe-autonomous vehicles are, to my knowledge, not 
trying to mimic human capabilities.


> SOLUTIONS
> 
> I know I’m barking at the moon here. AIs will continue to be deployed in control systems regardless 
> of my objections UNLESS regulators wake up to the seriousness of this situation and:
> 
> DO THEIR JOB AND REGULATE!

Well, they do. Witness Uber in California.

> PROHIBIT THE DEPLOYMENT OF ANY AI THAT CANNOT BE VALIDATED IN A SAFETY CRITICAL CONTROL SYSTEM
> 
> Right now this means all of them.

Nope, not by a long way. If you want to put a SAT solver in a critical system, you can, without 
worries. It is trivial to validate the output of a SAT solver. An obvious candidate for run-time 
verification.
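
A minimal sketch of that check, for the "satisfiable" answer: substitute the returned assignment 
into every clause. (Validating an "unsatisfiable" answer is harder and needs a checkable proof, 
e.g. in DRAT format, but modern solvers can emit those too.)

def check_model(cnf_clauses, model):
    # cnf_clauses: list of clauses, each a list of non-zero ints in
    # DIMACS style (positive = variable true, negative = its negation).
    # model: set of literals the solver claims are true, e.g. {1, -2, 3}.
    return all(any(lit in model for lit in clause)
               for clause in cnf_clauses)

clauses = [[1, -2], [2, 3]]                   # (x1 or not x2) and (x2 or x3)
assert check_model(clauses, {1, -2, 3})       # solver's model checks out
assert not check_model(clauses, {-1, 2, -3})  # a bad "model" is caught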

In conclusion, let me recommend to functional safety specialists to learn about AI technology, just 
as Les is suggesting AI technologists would do well to learn about FS :-)

PBL

Prof. i.R. Dr. Peter Bernard Ladkin, Bielefeld, Germany
Tel+msg +49 (0)521 880 7319  www.rvs-bi.de



