[SystemSafety] Nature article on driverless cars

Michael Jackson maj at jacksonma.myzen.co.uk
Mon Apr 16 09:19:26 CEST 2018


Peter: 

The article argues that AVs cannot succeed except in very limited contexts. Can a car that has been honing its skills on California highways be shipped to Manhattan and put to work as a taxi there? Is what has been learned in California compatible with what would be needed in Manhattan? 

Is the nuclear reactor context rich enough to offer a convincing parallel to the AV context? 

— Michael

> On 16 Apr 2018, at 07:55, Peter Bernard Ladkin <ladkin at causalis.com> wrote:
> 
> 
> 
> On 2018-04-16 00:41 , Matthew Squair wrote:
>> I liked this comment.
>> 
>> “No one — not even an algorithm’s designer — can know precisely how an autonomous car will behave
>> under every circumstance.”
> I am in two minds about the article. It is not that I don't agree with much of what they say, for I
> do. It is an opinion piece, and that is what it does: give opinions. But I find that it underplays
> the arguments. Your quote is a good example.
> 
> It is true. But it is also true of most software-based systems. It doesn't say what distinguishes
> autonomous road vehicles from, say, scram systems in nuclear reactors. But (according to our
> distinguished colleagues here) we can have high confidence in scram systems. Why can't we have
> equally high confidence in autonomous-vehicle algorithms? Or can we? I am missing the key point.
> 
> PBL
> 
> Prof. Peter Bernard Ladkin, Bielefeld, Germany
> MoreInCommon
> Je suis Charlie
> Tel+msg +49 (0)521 880 7319  www.rvs-bi.de
> 
> 
> _______________________________________________
> The System Safety Mailing List
> systemsafety at TechFak.Uni-Bielefeld.DE
