[SystemSafety] Nature article on driverless cars

Michael Jackson maj at jacksonma.myzen.co.uk
Mon Apr 16 11:38:22 CEST 2018


Peter: 

The title of the article is ‘People must retain control of autonomous vehicles’. The point is made explicitly:

“In our view, some form of human intervention will always be required. Driverless cars should be treated much like aircraft, in which the involvement of people is required despite such systems being highly automated. Current testing of autonomous vehicles abides by this principle. Safety drivers are present, even though developers and regulators talk of full automation.”

As we have observed earlier, the comparison with human control and oversight of aircraft autopilots suggests that reliable human oversight of cars is not possible. Critical circumstances arise much faster in cars, and the constant awareness of context and readiness to act that such oversight demands is beyond human capacities. I can’t see that remote oversight of multiple vehicles is feasible; if anything, it would be an even harder job. 
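To make the timing point concrete, here is a rough back-of-envelope sketch in Python. It is purely illustrative and mine, not the article's: the speeds and takeover times are assumptions, but the arithmetic shows how much road disappears while a disengaged human re-acquires context and acts.

# Distance a car covers while a disengaged human takes over.
# Speeds and takeover times are illustrative assumptions, not data
# from the Nature article.

def distance_covered_m(speed_kmh: float, takeover_s: float) -> float:
    """Metres travelled at constant speed during the takeover interval."""
    return speed_kmh / 3.6 * takeover_s

for speed_kmh in (50, 100, 130):            # urban, motorway, fast motorway
    for takeover_s in (2.0, 5.0, 10.0):     # assumed human takeover times
        d = distance_covered_m(speed_kmh, takeover_s)
        print(f"{speed_kmh:>3} km/h, {takeover_s:>4.1f} s takeover: "
              f"{d:6.1f} m travelled before the human is acting")

Even on the optimistic assumptions, a hand-back at motorway speed consumes tens to hundreds of metres of road before the human is actually in control.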

— Michael

> On 16 Apr 2018, at 09:46, Peter Bernard Ladkin <ladkin at causalis.com> wrote:
> 
> Michael,
> 
> On 2018-04-16 09:19 , Michael Jackson wrote:
>> The article argues that AVs cannot succeed except in very limited contexts. Can a car that has been honing its skills on California highways be shipped to Manhattan and put to work as a taxi there? Is what has been learned in California compatible with what would be needed in Manhattan? 
> 
> I don't read them as arguing that AVs cannot succeed except in a limited way. I do read that the
> technology and its social context (liability) are nowhere near as mature as is assumed in current
> regulatory guidance for testing, which is a weaker point.
> 
> However, I did overlook an answer to my implicit question about the argument. It occurs in the
> section on liability, but then leads into a couple of points which seem to me to be peripheral.
> 
> [begin quote]
> 
> Worse, deep-learning algorithms are inherently unpredictable. They are built on an opaque
> decision-making process that is shaped by previous experiences. Each car will be trained
> differently. No one — not even an algorithm’s designer — can know precisely how an autonomous car
> will behave under every circumstance.
> 
> No law specifies how much training is needed before a deep-learning car can be deemed safe, nor what
> that training should be. Cars from different manufacturers could react in contrasting ways in an
> emergency. One might swerve around an obstacle; another might slam on the brakes. Rare traffic
> events, such as a truck tipping over in the wind, are of particular concern and, at best, make it
> difficult to train driverless cars.
> 
> Advanced interfaces are needed that inform users why an autonomous vehicle is behaving as it does.....
> 
> [end quote]
> 
> The key point for me is the first sentence, that DLNNs are inherently unpredictable. But they don't
> go on to say "so we cannot tell if they are working well or not."
> 
> They go on to say that "no law specifies how much training is needed...." and that "Advanced
> interfaces are needed..." Both points are correct, but weak. The issue is far more than a gap in the
> law or a lack of development of interfaces: it is that engineering cannot say how much training
> is needed, in what contexts, nor provide reliable criteria for assessment.
> 
> Assessment is an open technical problem and we have every reason to think it won't be solved soon.
> For every other field of SW-based safecrit systems in which the public is a stakeholder, there are
> some rigorous assessment criteria (even if many of us worry that they are almost always inadequate).
> For example, IEC 61508 requires almost sixty different documents for safety-related SW. Most of them
> could not plausibly be written, or would be useless, for DLNN-driven autonomous vehicles running on
> public roads.
> 
> To repeat what I have written earlier, aerospace has been working on assessment of DLNNs for at
> least two decades, and has not got very far, in an operating environment I would claim is far
> simpler. The counter to that would be that the aerospace work is resource-limited and there is
> substantially more effort being put into AVs because the potential market is far larger. The answer
> to which is to ask what the results have been and what the assessment criteria are. Unfortunately,
> they are proprietary.
> 
> About all we know, publicly, is that very few of the assessment techniques we use for other safecrit
> systems are applicable to DLNN-based SW.
> 
>> Is the nuclear reactor context rich enough to offer a convincing parallel to the AV context? 
> 
> No, not by any means. I brought in this example to emphasise the issue concerning what we know about
> system assessment.
> 
> PBL
> 
> Prof. Peter Bernard Ladkin, Bielefeld, Germany
> MoreInCommon
> Je suis Charlie
> Tel+msg +49 (0)521 880 7319  www.rvs-bi.de
> 
> 
> 
> 
> 
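P.S. As a concrete illustration of the "each car will be trained differently" point in the article passage Peter quotes above, here is a minimal sketch (mine, using scikit-learn; nothing to do with any real AV stack): two small networks fitted to the same data, identical in every respect except the random seed, asked about an input far outside their training range.

# Two identically configured neural nets, same training data, different
# random seeds; they can disagree on an input unlike anything seen in
# training. Purely illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))     # "ordinary" situations
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2        # some smooth target behaviour

edge_case = np.array([[4.0, -4.0]])           # far outside the training range

for seed in (0, 1):
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                       random_state=seed)
    net.fit(X, y)
    print(f"seed {seed}: prediction on the edge case = "
          f"{net.predict(edge_case)[0]:+.3f}")

Inside the training range the two nets typically agree closely; on the extrapolated input they are free to differ, and nothing in either fitted model tells you which answer, if either, is right. Rare traffic events live in exactly that regime, which is why assessment is so hard.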


