[SystemSafety] Fwd: Re: How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability?

Philip Koopman Phil.Koopman at HushMail.com
Sat Apr 23 00:58:17 CEST 2016


I presented a paper on exactly this set of related problems at the SAE
World Congress last week.  Validating machine learning is for sure a
tough problem. So is deciding how ISO 26262 fits in.  And the quality of
the training data.  And some other problems besides. Below are the
abstract and a pointer to the paper and presentation slides. Constructive
feedback is welcome for the follow-on work we are doing, although I will
likely reply individually rather than to the list. (Note that this paper
was camera-ready before the RAND report was public.  Several folks have
been thinking about this topic for quite a while, and just now are the
results becoming public.)

http://betterembsw.blogspot.com/2016/04/challenges-in-autonomous-vehicle.html

Challenges in Autonomous Vehicle Testing and Validation
        Philip Koopman & Michael Wagner
        Carnegie Mellon University; Edge Case Research LLC
        SAE World Congress, April 14, 2016

Abstract:
Software testing is all too often simply a bug hunt rather than a well
considered exercise in ensuring quality. A more methodical approach than
a simple cycle of system-level test-fail-patch-test will be required to
deploy safe autonomous vehicles at scale. The ISO 26262 development V
process sets up a framework that ties each type of testing to a
corresponding design or requirement document, but presents challenges
when adapted to deal with the sorts of novel testing problems that face
autonomous vehicles. This paper identifies five major challenge areas in
testing according to the V model for autonomous vehicles: driver out of
the loop, complex requirements, non-deterministic algorithms, inductive
learning algorithms, and fail operational systems. General solution
approaches that seem promising across these different challenge areas
include: phased deployment using successively relaxed operational
scenarios, use of a monitor/actuator pair architecture to separate the
most complex autonomy functions from simpler safety functions, and fault
injection as a way to perform more efficient edge case testing. While
significant challenges remain in safety-certifying the type of
algorithms that provide high-level autonomy themselves, it seems within
reach to instead architect the system and its accompanying design
process to be able to employ existing software safety approaches.
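The monitor/actuator pair architecture the abstract mentions can be caricatured in a few lines. This is a hypothetical sketch, not code from the paper: the function names, the speed values, and the envelope check are all invented for illustration. The point is only that the monitor's safety property is simple enough to verify with conventional means, regardless of what the complex autonomy function does.

```python
def complex_planner(speed_mps):
    # Stand-in for a hard-to-verify autonomy function (e.g. an ML
    # planner); here it just proposes accelerating toward cruise speed.
    return 2.0 if speed_mps < 30.0 else 0.0

def safety_monitor(speed_mps, proposed_accel, speed_limit_mps=25.0):
    # Simple, independently testable checker: veto any command that
    # would push the vehicle outside the safety envelope.
    if speed_mps + proposed_accel > speed_limit_mps:
        return 0.0  # fail safe: suppress the acceleration command
    return proposed_accel

speed = 24.5
cmd = safety_monitor(speed, complex_planner(speed))
# The planner proposes +2.0 m/s^2, but the monitor vetoes it because
# 24.5 + 2.0 would exceed the 25.0 m/s limit, so cmd is 0.0.
```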


Cheers,
-- Phil

-- 
Phil Koopman -- koopman at cmu.edu -- www.ece.cmu.edu/~koopman


-------- Forwarded Message --------
Subject:     Re: [SystemSafety] How Many Miles of Driving Would It Take
to Demonstrate Autonomous Vehicle Reliability?
Date:     Fri, 22 Apr 2016 12:10:56 +0100
From:     Mike Ellims <michael.ellims at tesco.net>
To:     'Matthew Squair' <mattsquair at gmail.com>, 'Martyn Thomas'
<martyn at 72f.org>
CC:     'Bielefield Safety List' <systemsafety at techfak.uni-bielefeld.de>


Hi Matthew,



   > Really if ever there was a solid economic argument for deploying
industrial scale formal method and proofs this would be it.



To a machine learning system? How would you provide a formal proof that
such a system had learnt the right response for all possible
circumstances? I can conceive that it could be applied to the learning
algorithms, but not to the learning itself. That is, you could show
that the learning system does what it was specified to do, assuming that
the specification is correct; but not that it was taught correctly or
completely. For that I suspect you will need some sort of statistical
approach. How to do that is of course a major problem.
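As an illustration of the kind of statistical approach alluded to here (and of the calculation behind the thread's subject line), one can ask how many fault-free miles would be needed to demonstrate a failure rate with a given confidence. This sketch is not from the paper or the RAND report; it assumes failures follow a Poisson process and that zero failures are observed during the demonstration:

```python
import math

def miles_to_demonstrate(target_rate_per_mile, confidence=0.95):
    """Fault-free miles needed to show, at the given confidence, that
    the true failure rate is below target_rate_per_mile, assuming a
    Poisson failure process with zero observed failures."""
    return -math.log(1.0 - confidence) / target_rate_per_mile

# Human-driver fatality rates are on the order of 1 per 100 million
# miles; demonstrating parity at 95% confidence needs roughly 3x that.
rate = 1.0 / 100e6
millions_of_miles = miles_to_demonstrate(rate) / 1e6
print(round(millions_of_miles))  # ~300 million miles, failure-free
```

This is why simulation and fault injection come up: accumulating hundreds of millions of real failure-free miles after every software change is impractical.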



And Hi Martyn



   > Recertification after software change.  Or do we just accept the
huge attack surface that a fleet of AVs presents?



For “recertification”, Google’s approach to date seems to be to
rerun all the driving done so far in simulation… I’m not sure what
you’re implying with the comment on attack surfaces. So far, as far as
I can tell, aside from updates there is no vehicle-to-vehicle
communication. GPS is probably vulnerable to spoofing and jamming, which
could be an issue, but one would hope that has been accounted for, as it
would count as a sensor failure…



   > The way in which AVs could change the safety of the total road
transport system. Is anyone studying total accidents rather than AV
accidents?



Yes, lots and lots of people, mostly the government bodies that collect
the accident data in the first place; they tend to commission
detailed studies from outside organizations (which don’t quite answer
the question you’re interested in). In addition there are a few
manufacturer/academic partnerships that study major road accidents in
forensic detail alongside the police (I know of one in Germany and one
in the UK), which is intended to address many of the limitations of
police investigations. Some of the big auto manufacturers also have
their own departments, e.g. VW has its own statistics department looking
at this. And there is a large academic community concerned with
examining traffic accidents.



As an aside, some time ago we were discussing wheels falling off cars. I
attempted to track down an answer to this from the online traffic stats,
as there is a field for it in the STATS19 form (filled out by police).
However, after some digging via email and a couple of phone calls to the
Department for Transport, it stopped dead with no answer, because it’s
a write-in field on the form and the data isn’t transferred to any of
the computer systems. If it’s not on the computer they don’t want to
know.



Cheers.
