[SystemSafety] Nature article on driverless cars

Peter Bernard Ladkin ladkin at causalis.com
Mon Apr 16 10:46:40 CEST 2018


On 2018-04-16 09:19 , Michael Jackson wrote:
> The article argues that AV cannot succeed except in very limited contexts. Can a car that has been honing its skills on California highways be shipped to Manhattan and put to work as a taxi there? Is what has been learned in California compatible with what would be needed in Manhattan? 

I don't read them as arguing that AV cannot succeed except in a limited way. I do read that
the technology and its social context (liability) are nowhere near as mature as is assumed in
current regulatory guidance for testing, which is a weaker point.

However, I did overlook an answer to my implicit question about the argument. It occurs in the
section on liability, but then leads into a couple of points which seem to me to be peripheral.

[begin quote]

Worse, deep-learning algorithms are inherently unpredictable. They are built on an opaque
decision-making process that is shaped by previous experiences. Each car will be trained
differently. No one — not even an algorithm’s designer — can know precisely how an autonomous car
will behave under every circumstance.

No law specifies how much training is needed before a deep-learning car can be deemed safe, nor what
that training should be. Cars from different manufacturers could react in contrasting ways in an
emergency. One might swerve around an obstacle; another might slam on the brakes. Rare traffic
events, such as a truck tipping over in the wind, are of particular concern and, at best, make it
difficult to train driverless cars.

Advanced interfaces are needed that inform users why an autonomous vehicle is behaving as it does.....

[end quote]

The key point for me is the first sentence, that DLNNs are inherently unpredictable. But they don't
go on to say "so we cannot tell if they are working well or not."

They go on to say that "no law specifies how much training is needed...." and that "Advanced
interfaces are needed...." Both points are correct, but weak. The issue is much more than a gap in
the law, or a lack of development of interfaces. It is that engineering cannot say how much training
is needed, in what contexts, nor provide reliable criteria for assessment.
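The unpredictability point is easy to demonstrate even at toy scale. The sketch below (plain numpy, nothing to do with any vendor's actual training pipeline; the architecture, seeds, and the off-distribution test input are all invented for illustration) trains two identically structured networks on identical data, differing only in random initialisation. Both fit the training data, yet on an input far from that data they can behave differently -- the same effect, writ small, as two manufacturers' cars reacting in contrasting ways to a rare traffic event.

```python
import numpy as np

def train_mlp(seed, X, y, hidden=8, epochs=2000, lr=0.5):
    """Train a tiny one-hidden-layer MLP by full-batch gradient descent.
    Architecture, data and hyperparameters are identical across calls;
    only the random seed for weight initialisation differs."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 1.0, (X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 1.0, (hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)          # hidden activations
        out = h @ W2 + b2                 # linear output
        err = out - y                     # gradient of squared error
        gW2 = h.T @ err / len(X)
        gb2 = err.mean(axis=0)
        gh = (err @ W2.T) * (1.0 - h ** 2)
        gW1 = X.T @ gh / len(X)
        gb1 = gh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda x: np.tanh(x @ W1 + b1) @ W2 + b2

# Same training data (XOR), same architecture -- different seeds.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
f1 = train_mlp(seed=1, X=X, y=y)
f2 = train_mlp(seed=2, X=X, y=y)

# Both models fit the training data; their behaviour there is alike.
print("training MSE:", float(np.mean((f1(X) - y) ** 2)),
      float(np.mean((f2(X) - y) ** 2)))

# On an input far outside the training distribution (a "rare event"),
# the two models were never constrained to agree.
x_rare = np.array([[3.0, -2.0]])
print("off-distribution outputs:", float(f1(x_rare)), float(f2(x_rare)))
```

The training set says nothing about the behaviour at `x_rare`, so no amount of inspecting the training regime tells an assessor what either model will do there. That, scaled up by many orders of magnitude, is the assessment problem.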

Assessment is an open technical problem and we have every reason to think it won't be solved soon.
For every other field of SW-based safecrit systems in which the public is a stakeholder, there are
some rigorous assessment criteria (even if many of us worry that they are almost always inadequate).
For example, IEC 61508 requires almost sixty different documents for safety-related SW. Most of them
could not plausibly be written, or would be useless, for DLNN-driven autonomous vehicles running on
public roads.

To repeat what I have written earlier, aerospace has been working on assessment of DLNNs for at
least two decades, and has not got very far, in an operating environment I would claim is far
simpler. The counter to that would be that the aerospace work is resource-limited, and that
substantially more effort is being put into AVs because the potential market is far larger. The
answer to which is to ask what the results have been and what the assessment criteria are.
Unfortunately, they are proprietary.

About all we know, publicly, is that very few of the assessment techniques we use for other safecrit
systems are applicable to DLNN-based SW.

> Is the nuclear reactor context rich enough to offer a convincing parallel to the AV context? 

No, not by any means. I brought in this example to emphasise the issue concerning what we know about
system assessment.


Prof. Peter Bernard Ladkin, Bielefeld, Germany
Je suis Charlie
Tel+msg +49 (0)521 880 7319  www.rvs-bi.de

