[SystemSafety] How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability?

Mike Ellims michael.ellims at tesco.net
Mon Jun 13 00:43:37 CEST 2016


This is in two parts because a few interesting things have been happening with “Honest Elon’s Autos” (i.e. Tesla) and it’s simpler to break the replies to Martyn’s questions out.

 

Part 1.

There have been three crashes reported in various EV blogs and forums where a Tesla has hit something it shouldn’t have.

The first was a Tesla that drove itself into the back of a truck it had been parked behind. Tesla say it’s the driver’s fault, as the vehicle logs state he used the Summon feature, so for the moment that looks like a standoff. However, there appear to be several oddities in the way the feature is (or was; it’s since been updated) implemented that would seem to allow this to happen in error.

 

The other two appear to be cases where the vehicle requested the driver take control and, before they could do so effectively, the car drove into the back of a stationary vehicle. There is an animated GIF of one of the accidents here:

 

http://giphy.com/gifs/3o6EhCnV9fgZQgw6xG?utm_source=iframe&utm_medium=embed&utm_campaign=tag_click

 

Actually there may be three such accidents, but I can’t really separate out all the different sources.

 

Tesla’s current take on this appears to be that Autopilot isn’t an autonomous system and the driver should always be ready to take control. In addition, Autopilot should only be used on highways where pedestrians and cyclists are not present, as it can’t always detect them.

 

Tesla owners have taken this advice (it’s in the manual) to heart as demonstrated in the video below…

 

https://www.youtube.com/watch?v=sXls4cdEv7c

 

However, the most interesting thing to come out in the last week or so, and the one that relates to the starting point of this thread, is that Tesla are able to run software on the vehicles in parallel with the software in control, as an “inert” feature; some details at:

 

http://spectrum.ieee.org/cars-that-think/transportation/self-driving/tesla-reveals-its-crowdsourced-autopilot-data/?utm_source=CarsThatThink&utm_medium=Newsletter&utm_campaign=CTT06082016
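
To make the idea concrete, here is a minimal sketch of what such a “shadow mode” arrangement might look like, assuming the candidate software computes commands in parallel but only the active software ever reaches the actuators, with disagreements logged for later fleet-wide analysis. All names and thresholds below are hypothetical stand-ins, not Tesla’s actual implementation.

# Hypothetical sketch of an "inert" (shadow-mode) arrangement: a candidate
# controller runs on the same sensor data as the active controller, but only
# the active controller's output is ever applied. Disagreements are logged
# so the vendor can judge the candidate offline.

from dataclasses import dataclass

@dataclass
class Command:
    steering: float   # steering angle, radians
    braking: float    # fraction of full braking, 0.0 to 1.0

def shadow_step(sensor_frame, active_ctrl, candidate_ctrl, log):
    """Run both controllers on one sensor frame; apply only the active one."""
    active_cmd = active_ctrl(sensor_frame)
    candidate_cmd = candidate_ctrl(sensor_frame)   # computed but never applied

    # Record any material disagreement for later analysis (thresholds invented).
    if (abs(active_cmd.steering - candidate_cmd.steering) > 0.05 or
            abs(active_cmd.braking - candidate_cmd.braking) > 0.1):
        log.append((sensor_frame, active_cmd, candidate_cmd))

    return active_cmd   # only this command goes to the actuators

if __name__ == "__main__":
    # Dummy controllers standing in for the two software versions.
    active = lambda frame: Command(steering=0.0, braking=0.0)
    candidate = lambda frame: Command(steering=0.0, braking=0.2)
    disagreements = []
    shadow_step({"frame": 1}, active, candidate, disagreements)
    print(f"{len(disagreements)} disagreement(s) logged")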

 

I can’t decide if this is insanely brilliant, insanely stupid or insanely evil. Possibly all three.

 

Part 2.

 

Martyn wrote: “The AVs depend on software that is occasionally updated. They depend on data that is occasionally updated. They depend on sensors that could be jammed, flooded or spoofed…. (see below)”

 

My comment on simulation was directed only at the stochastic algorithms used in driving, e.g. object detection. My view is that many of the other problems have to be dealt with by design, e.g. ensuring that the attack surface is minimized by separating systems (control separate from entertainment: hardware, networks, etc.), good hazard and risk analysis, and so on.
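
To illustrate the sort of thing I have in mind for the stochastic parts, a toy Monte Carlo sketch: estimate a detector’s miss rate by running it over large numbers of generated scenes. The scene generator and detector below are invented stand-ins, not any real system, and the probabilities are made up purely for illustration.

# Toy Monte Carlo illustration: estimate an object detector's miss rate over
# randomly generated scenes. Both components are invented stand-ins.

import random

def generate_scene(rng):
    """Stand-in scene: one pedestrian at a random distance, possibly occluded."""
    return {"distance_m": rng.uniform(5, 80), "occluded": rng.random() < 0.2}

def toy_detector(scene, rng):
    """Stand-in detector whose miss probability grows with distance and occlusion."""
    p_miss = min(0.9, scene["distance_m"] / 200.0 + (0.3 if scene["occluded"] else 0.0))
    return rng.random() >= p_miss   # True means the pedestrian was detected

def estimate_miss_rate(trials=100_000, seed=1):
    rng = random.Random(seed)
    misses = sum(not toy_detector(generate_scene(rng), rng) for _ in range(trials))
    return misses / trials

print(f"estimated miss rate: {estimate_miss_rate():.3%}")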

 

Martyn wrote “You misunderstand me - probably because I was not clear enough. I meant to ask whether anyone is currently studying the impact that AVs are having (and will have) on the overall safety of the total road transport system.”

 

In the few papers I could track down, the expectation seems to be that automated systems should reduce the number of accidents, simply given the volume that are caused by human error, but that new classes of accident will come into existence. Only limited studies (I found two) have been performed to date on the small numbers of completely autonomous vehicles (fleet sizes are tiny), and experience from Tesla is probably mostly not relevant as Autopilot is only meant to be a level 3 system for use on highways.

 

I think the current answer is “who knows”; for now it’s a research topic, but all the points you made have been discussed in what literature I’ve seen.

 

Martyn wrote “Should there be a safety argument that the introduction of AVs will not reduce the safety of the road transport system, rather than a safety argument that AVs are as safe or safer than cars driven by humans?”.

 

Possibly, but the safety of the system in total is dominated by the number of errors that human drivers make. If AVs are in general significantly safer than human drivers (they cause fewer accidents and/or accident severity is lower), then is the case to be made that people shouldn’t be allowed to drive?
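
Demonstrating “significantly safer” statistically is itself a tall order, which is what the thread title is getting at. As a rough back-of-the-envelope, assuming accidents follow a Poisson process and taking the commonly quoted US figure of roughly one fatality per 100 million vehicle-miles as the human baseline (both are assumptions; the numbers are only indicative):

# Back-of-the-envelope: miles of failure-free driving needed to claim, with a
# given confidence, that an AV's fatal-accident rate is no worse than a human
# baseline. Assumes a Poisson model; the baseline figure is approximate.

import math

def miles_to_demonstrate(baseline_rate_per_mile, confidence):
    """Miles with zero events so that P(zero events | rate = baseline) <= 1 - confidence."""
    # P(0 events in n miles) = exp(-rate * n); require this <= 1 - confidence.
    return -math.log(1.0 - confidence) / baseline_rate_per_mile

human_fatality_rate = 1.0 / 100_000_000   # ~1 fatality per 100 million miles
print(f"{miles_to_demonstrate(human_fatality_rate, 0.95):,.0f} miles")
# -> roughly 300 million failure-free miles for 95% confidence

That is on the order of 300 million failure-free miles just for 95% confidence that the fatality rate is no worse than the human baseline, and a software update arguably resets the count, which ties back to the recertification question below.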

 

Cheers.

 

From: systemsafety [mailto:systemsafety-bounces at lists.techfak.uni-bielefeld.de] On Behalf Of Martyn Thomas
Sent: 23 April 2016 16:44
Cc: 'Bielefield Safety List'
Subject: Re: [SystemSafety] How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability?

 

On 22/04/2016 12:10, Mike Ellims wrote:


... ...



And Hi Martyn

 

> Recertification after software change.  Or do we just accept the huge attack surface that a fleet of AVs presents?

 

For “recertification”, Google’s approach to date seems to be to rerun all the driving done so far via simulation… I’m not sure what you’re implying with the comment on attack surfaces. So far, as far as I can tell, aside from updates there are no vehicle-to-vehicle communications. GPS is probably vulnerable to spoofing and jamming, which could be an issue, but one would hope that had been accounted for as it would count as a sensor failure…


The AVs depend on software that is occasionally updated. They depend on data that is occasionally updated. They depend on sensors that could be jammed, flooded or spoofed. Then (as has already been mentioned) car manufacturers connect other networked systems (bluetooth, phone, radio, TV ...) to internal networks that are also connected to safety-related subsystems. Everything that I have mentioned is a possible channel for cyberattack. When we have a fleet of AVs, that's a huge set of possible vectors for cyberattack (which I referred to as the "attack surface"). 

Now, let's imagine that Google has carried out exhaustive penetration testing (I know this is impossible - which makes the following argument even stronger) and that we agree that their AV is secure against all possible attacks. Then they release a software change. Re-running all the driving, through simulation, isn't enough. They have to rerun exhaustive pen testing too (which could involve all possible attacks under all possible driving conditions). Recertification feels to me like an important issue and I haven't heard anything that gives me confidence that anyone yet has a feasible approach to a solution.





 

> The way in which AVs could change the safety of the total road transport system. Is anyone studying total accidents rather than AV accidents?

 

Yes, lots and lots of people, mostly the government bodies that collect the accident data in the first place; they tend to commission detailed studies from outside organizations (which don’t quite answer the question you’re interested in). In addition, there are a few manufacturer/academic partnerships that study major road accidents in forensic detail alongside the police (I know of one in Germany and one in the UK), which is intended to address many of the limitations of police investigations. Some of the big auto manufacturers have their own departments, e.g. VW have their own statistics department looking at this, and there is a large academic community concerned with examining traffic accidents.


You misunderstand me - probably because I was not clear enough. I meant to ask whether anyone is currently studying the impact that AVs are having (and will have) on the overall safety of the total road transport system. For example, will the knowledge (by drivers, cyclists, pedestrians ...) that many vehicles are AVs change the behaviour of these other road users in a way that changes the frequency of accidents in which an AV is not deemed to have been at fault (and in which it may not even have been involved)? 

To illustrate what I mean with just one, very small, example, cyclists might get used to AVs passing them with a wider clearance than is the normal behaviour of human drivers. (This should happen because the code of acceptable driving - called the Highway Code in the UK, for instance - sets a standard that many drivers currently forget or ignore). This could change cyclists' behaviour, after some time, in a way that leads them to have more accidents with cars that have human drivers. It's possible even that the overall rate of accidents between cars and cyclists would rise as a consequence of introducing AVs, even though the AVs had many fewer accidents with cyclists than the average for non-AVs before their introduction.

Should there be a safety argument that the introduction of AVs will not reduce the safety of the road transport system, rather than a safety argument that AVs are as safe or safer than cars driven by humans?

Martyn





 

 


