[SystemSafety] Nature article on driverless cars

Tom Ferrell tom at faaconsulting.com
Mon Apr 16 13:49:21 CEST 2018

I have a lot of heartburn with this article, but I will pick on just one aspect.  A key part of the argument being made is that the automation needs to make its decision-making process transparent to the user 'in an understandable form.'  Sensor fusion on these vehicles is happening at a rate far beyond what a human operator could hope to understand and process, even in a highly aggregated form.  Trying to take in this data, even if presented in a HUD format, would distract the operator, and it would also pull them away from the equivalent of the aviation 'see and avoid' scanning that every human driver is taught to perform.

-----Original Message-----
From: systemsafety [mailto:systemsafety-bounces at lists.techfak.uni-bielefeld.de] On Behalf Of Peter Bernard Ladkin
Sent: Monday, April 16, 2018 7:35 AM
To: jacksonma at acm.org
Cc: systemsafety at lists.techfak.uni-bielefeld.de
Subject: Re: [SystemSafety] Nature article on driverless cars


I guess I am not making my points very well. But I would like to continue trying to do so.

On 2018-04-16 11:38 , Michael Jackson wrote:
> Peter: 
> The title of the article is 'People must retain control of autonomous vehicles'. The point is made explicitly:
Yes. But I still don't find the argument as they give it persuasive. They also compare AVs with aerospace automation, as I suggested might be done, but here they are comparing apples with oranges.

I missed their argumentation about inherent unpredictability because I often read such articles not linearly, but by looking at the conclusions, then seeing how the reasoning is put together to establish them (based on what I think arguments are - e.g., http://philosophy.ucla.edu/wp-content/uploads/2016/10/WhatIsArg.pdf ). Since (what I thought were) the conclusions were weak, I missed the stronger points. Mea culpa.

Their aerospace comparison is apples and oranges because AVs essentially rely on DLNNs, while commercial-aerospace systems avoid them. It is open to an AV-DLNN advocate to argue that...

[begin putting-words-into-mouth]

...commercial aeronautics needs pilot supervision because (a) it is bound up in heavy, conservative regulation, and (b) it doesn't and won't use DLNNs. Further, DLNNs have been tried in aeronautics research contexts and worked pretty well. So maybe aeronautics should have figured out how to certify them, as we are doing. And of course we have far more resources in play. And accidents due to mistaken technology cost us only a person or two at a time, not a few hundred each time, as they do in aeronautics.

[end putting-words-into-mouth]

The authors of the Nature opinion are advocating human supervisory control (in their title, as you point out) but are not addressing any of the known and unsolved issues with it. It is open to an AV advocate to argue that "people ... retain[ing] control" brings exactly those problems with it, and leads to a worse outcome than full autonomy without supervisory control. (Again, I am not advocating this point of view. I am merely expressing it so it can be debated.)


Prof. Peter Bernard Ladkin, Bielefeld, Germany MoreInCommon Je suis Charlie
Tel+msg +49 (0)521 880 7319  www.rvs-bi.de