[SystemSafety] a public beta phase ???

Mike Ellims michael.ellims at tesco.net
Tue Jul 19 17:09:01 CEST 2016


Hi Les,

First, a correction: it's been pointed out that I used the word "connivance"
rather than "convenience". Oh poop.

A couple of inconsistencies here:

1. The Daily Mail article states that "Brown narrowly avoided a very
similar smash earlier this year, when his car did not notice a white truck
turning in on him on the freeway. Pictured, the truck cutting him up in
April". However, Brown posted this video on Facebook and crowed about how the
Tesla did avoid the accident; he didn't have to intervene.

2. The photo at
http://www.autospies.com/images/users/Agent009/main/1naias-17.jpg proves
nothing; a Google image search takes you to sites such as
http://windsorstar.com/business/electric-tesla-model-x-to-compete-with-premium-suvs-minivans?__lsa=4182-2da6
where the car is stationary, and it is clearly a publicity shot taken on a
showroom floor. The question is whether the car allows you to do this while
moving. Tesla say no, and in the US I believe it would be illegal for it to
do so; Tesla should have known that, as their former head of regulatory
compliance came from an established player.

3. The timeline given doesn't match others I've seen, which state that the
accident occurred on 7 May and that Tesla provided NHTSA with data logs on
the 8th. I'm not sure who is right; the NHTSA letter doesn't provide the
necessary information, but it does confirm that Tesla instigated the
investigation, i.e. "The May 7, 2016 fatal crash that was reported to ODI by
Tesla".

However, trying to track that down I came across the following:
http://www.digitaltrends.com/cars/tesla-autopilot-examined-dutch-regulators/

This suggests that Tesla obtained type approval in the EU, which would
usually imply conformance to ISO 26262. If that is the case then there
should be a set of associated documents as you suggested, i.e. requirements,
hazard analysis, test plans, validation reviews etc. Note, however, that
just as with the FAA there will be (or should be) too much material to audit
completely, and what the regulator (or any auditor) will often do is check
the process and do deep dives where they think necessary.

I'm not sure about Mary Cummings' comments - we don't have any visibility of
what Tesla, or anyone else, is up to, and in the US neither do the
regulators: American automotive companies self-certify against regulations,
and the regulators only get involved if there is a problem. I'm sure Prof.
Cummings has no more visibility than we do. As a general set of concerns she
is, however, correct. But as stated above, the Dutch certification authority
should have had that visibility.


-----Original Message-----
From: Les Chambers [mailto:les at chambers.com.au]
Sent: 19 July 2016 13:24
To: 'Les Chambers'; 'Mike Ellims'; 'Peter Bernard Ladkin';
systemsafety at lists.techfak.uni-bielefeld.de
Subject: RE: [SystemSafety] a public beta phase ???

More reading on the Tesla issue. It speaks for itself.

http://www.teslaosterone.com/uncategorized/teslas-fatal-crash-6-unanswered-questions/

Duke University robotics professor Mary Cummings testified at a U.S. Senate
hearing in March that the self-driving car community is “woefully
deficient” in its testing programs, “at least in the dissemination of
their test plans and data.”

In particular, she said she’s concerned about the lack of “principled,
evidenced-based tests and evaluations.”
-----------------
http://www.dailymail.co.uk/news/article-3671265/PICTURED-Ex-Navy-SEAL-died-self-driving-car-crashed-truck-seen-showing-no-hands-autopilot.html

Joshua Brown, 40, was behind the wheel of a self-driving Tesla Model S car.
He was killed when the vehicle collided with a trailer truck in Florida in
May. The trucker claims Brown was watching a Harry Potter movie when he
crashed, but Tesla says it is impossible to watch movies on its touchscreen.
My comment: Is it really? Refer:
http://www.autospies.com/images/users/Agent009/main/1naias-17.jpg

Elon Musk, the billionaire CEO of Tesla Motors, tweeted: 'Our condolences
for the tragic loss.' [of Joshua Brown]
My comment: Is that all Brown's death rated? A tweet? I hope not.

-----------------

http://www.eetimes.com/document.asp?doc_id=1329257
Why Autonomous Cars Are ‘Absolutely Not Ready’
7 limitations in tech, tests & evaluations

The crux of the issue, according to Cummings, is twofold: There is a lack of
“principled, evidenced-based tests and evaluations” for autonomous cars,
and little leadership from federal regulators to create “a clear
certification process” for self-driving cars.

To make matters worse, little information about automakers’ test plans and
data is available today for experts to measure the performance of
self-driving cars. Put more bluntly, today’s self-driving community is
substituting demonstrations for rigorous testing, in Cummings’ opinion.

In her testimony, she laid out seven “limitations” in current self-driving
car technologies:
1. Lack of evidence-based tests
2. Bad weather conditions
3. Vulnerability to malevolent or prankster intent
4. Gaming the self-driving car (For example, “A $60 laser device can trick
self-driving cars into seeing objects that aren’t there,” she said.)
5. What about privacy and control of personal data?
6. Regulatory agencies’ roles (Cummings pointed out that “The U.S.
government cannot and has not maintained sufficient staffing in the number
of people it needs who can understand, much less manage, complex systems
such as self-driving cars.”)
7. Accumulating miles is no assurance for safety

What matters isn’t exactly the number of hours and miles the designed
system has logged in in-house testing. It’s how the tests are designed, who
validates the results, whether procedures are peer-reviewed, and whether a
clear certification process exists.

My comment:
My sense is that Mary Cummings' heart is in the right place, but she is
missing quite a bit here (I'm assuming this is all she said; I may be
wrong).
Testing is definitely important, but if you've got a hacked-up system no
amount of testing will make it safe.
I would add:
1. Who wrote the system requirements, and were they as complete and correct
as they could be (did anyone write a requirements spec???)
2. Were the requirements validated (were they validatable?)
3. What formalisms were used to design and build these systems? What
architectural patterns were used, what formal models were used?
4. How observable was the software behaviour for the purposes of testing?
5. Is the code instrumented? How thorough is the error logging? (A sketch of
what I mean follows this list.)
6. Who wrote the code? How experienced were they? How much code review
occurred? How experienced were the code reviewers?
7. What was the defect density in first-round unit, integration and system
testing?
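
On point 5, by instrumentation I mean something like the following minimal
sketch (hypothetical names and thresholds; we have no visibility of what
Tesla's code actually looks like). The idea is that every safety-relevant
decision is logged with the evidence behind it, so testers can observe
internal behaviour and investigators can reconstruct what the software
believed after an incident:

    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("perception")

    BRAKE_THRESHOLD = 0.9  # hypothetical confidence threshold

    def classify_obstacle(frame_id: int, radar_score: float,
                          camera_score: float) -> str:
        """Fuse two hypothetical sensor scores into a braking decision."""
        fused = 0.5 * (radar_score + camera_score)
        decision = "BRAKE" if fused >= BRAKE_THRESHOLD else "IGNORE"
        # The instrumentation: log inputs, fused value and decision for
        # every frame, making the behaviour observable and replayable.
        log.info("frame=%d radar=%.2f camera=%.2f fused=%.2f decision=%s",
                 frame_id, radar_score, camera_score, fused, decision)
        return decision

    classify_obstacle(42, radar_score=0.2, camera_score=0.95)  # -> IGNORE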

Elon Musk is doing a great job of making artificial intelligence open. He
has donated $10,000,000 to the OpenAI foundation. He is also opening his
Tesla designs to the public. The code is not open, for very good security
reasons. So the gaming starts. Refer:
http://www.digitaltrends.com/cars/syscan-announces-10000-prize-hacking-tesla/
$10,000 bounty on Model S hacks entices tinkerers, aggravates Tesla

Interesting times!

Les




-----Original Message-----
From: systemsafety [mailto:systemsafety-bounces at lists.techfak.uni-bielefeld.
de] On Behalf Of Les Chambers
Sent: Tuesday, July 19, 2016 12:51 PM
To: 'Mike Ellims'; 'Peter Bernard Ladkin';
systemsafety at lists.techfak.uni-bielefeld.de
Subject: Re: [SystemSafety] a public beta phase ???

Mike

>Do you have any evidence to back up that claim?
There are many videos on the web depicting drivers grabbing the wheel at the
last moment when the lane-marker sensors fail. There was a video depicting a
man sitting in the back seat of his car; I can't find it though.
My point is that all automation systems have a mission. If they can't
perform that mission there must be a smooth hand-off to manual control. Any
automation that requires a split-second hand-off (or you die) is a dangerous
object and should not be in the hands of untrained operators. Formula One
drivers, maybe.

> But we'd never know because a) the data hasn't been collected on a
> routine
basis

There will never be enough data. If we wait around for it to appear, more
people will die. What we need to do is apply judgement - the judgement the
engineering profession has accumulated over the last 50 years of computer
automation design. What we need is lead indicators. Accident data is a lag
indicator. Dead bodies are lag indicators. They are bad indicators.
Judgement is a lead indicator. When we see an organisation violating the
fundamental rules of safe control system design, we speak out VERY LOUDLY.

We can't predict the future, so because we're human we make up narratives to
convince ourselves everything will be okay. That's what this journalist has
done. He says it's okay, 33,000 people are killed every year; that's normal,
don't worry. He is contributing to what has become a popular narrative, a
community belief, pathological groupthink.
My narrative is: it's NOT okay if one single death is the result of bad
design. It's NOT okay if someone scratches their finger due to bad design.
This is the engineering discipline we admire. This is what sets us apart
from the un-engineer. Tesla's operational concept of "be careful but ... Oh
quick grab the wheel" is bad design. We've known for years it's bad design.
It needs to stop. OK!

Re: software upgrades. I don't care if it's one line or 10,000 lines. You
can introduce a bug with one line that can bring a system down. One of my
engineers once nearly destroyed a $10,000,000 chemical reactor with a
single bad configuration word in memory. Proven-in-use arguments are not
relevant to software; it is just too fluid. We should stop propagating this
BS. Another bad, unsafe narrative.
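
To make that concrete, here is a purely hypothetical sketch (invented bit
layout and values, nothing to do with the actual reactor system): where
configuration is packed into a single word, one flipped bit can silently
disable a protection function while everything else still looks normal.

    # Hypothetical packed configuration word for a reactor controller:
    # bit 15 enables the high-pressure interlock, bits 0-11 hold the
    # trip setpoint in kPa. Layout and values invented for illustration.
    INTERLOCK_ENABLE = 1 << 15

    def decode_config(word: int):
        interlock_on = bool(word & INTERLOCK_ENABLE)
        trip_setpoint_kpa = word & 0x0FFF
        return interlock_on, trip_setpoint_kpa

    good = 0x81F4        # interlock enabled, trip at 500 kPa
    bad = good ^ 0x8000  # one flipped bit: interlock silently disabled

    print(decode_config(good))  # (True, 500)
    print(decode_config(bad))   # (False, 500) - setpoint intact, protection gone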

And as for maintaining the rage, I'm sorry, that's the only way human beings
will change course - through some emotional connection to a better idea,
often triggered by horrendous events like the unnecessary death of a human
being.

Blow, winds, and crack your cheeks! rage! blow!
You cataracts and hurricanoes, spout
Till you have drench'd our steeples, drown'd the cocks!
You sulphurous and thought-executing fires,
Vaunt-couriers to oak-cleaving thunderbolts,
Singe my white head! And thou, all-shaking thunder,
Smite flat the thick rotundity o' the world!
Crack nature's moulds, all germens spill at once,
That make ingrateful man!

It took rage over the deaths of 80 people, many of them children, in the
Armagh rail disaster to mobilise a government to enforce safety rules in the
rail industry. We should know better these days - how to manage technology
churn, that is. What we don't need is engineers contributing to the wrong
narrative.

ONE DEATH IS NOT OKAY. DO YOU HEAR ME SON!

Please adjust your narrative accordingly.

Les

-----Original Message-----
From: Mike Ellims [mailto:michael.ellims at tesco.net]
Sent: Tuesday, July 19, 2016 12:32 AM
To: 'Les Chambers'; 'Peter Bernard Ladkin';
systemsafety at lists.techfak.uni-bielefeld.de
Subject: RE: [SystemSafety] a public beta phase ???

Good afternoon Les,

> The argument that 33,000 people are killed in accidents every year, so
> why
should we care, is also
> drivel. None of these fatalities occurred because a driver trusted a
system that couldn't be trusted.

Do you have any evidence to back up that claim? For example, can you show
that no one was harmed because a lane-keep feature failed to keep a vehicle
in lane, or that no one was killed because they became over-reliant on
features such as emergency brake assist or any one of the dozen or so
driver-assist systems that the current crop of vehicles have (see video
below)? I know it's nearly impossible to prove a negative, but I'm also
pretty sure that if we had access to all the relevant data we'd probably
find someone, somewhere, who rolled their vehicle because of overconfidence
in their ESC system. But we'd never know, because a) the data hasn't been
collected on a routine basis and b) vehicle manufacturers don't in general
have a mechanism to collect such data. At least in this regard Tesla seems
to be ahead of the game, and I suspect that if Tesla hadn't asked the NHTSA
to investigate we might never have found out about this either.

The problem here is that absence of evidence isn't the same as evidence of
absence. Current systems may have issues that neither we nor anyone else
know about, and I know of at least one failure mechanism for ABS that would
fool most implementations from 10 years back (it may not now - I don't
know). But ABS (a driver aid) is mandatory in Europe, as is ESC, because
statistically it's believed to save lives. Neither provides a guarantee of
safety, or that the system will work all the time; but these days I wouldn't
buy a car without them.

The following video demonstrates some of the benefits of standard systems
(ABS, ESC, TC): https://www.youtube.com/watch?v=wR1SSxpKitE

And I found this, which looks at AEB (autonomous emergency braking), with
some successes and some failures:
https://www.youtube.com/watch?v=E_ZNG8cmnlw

Note that the driver involved in the tests was a British Touring Car
champion and has held an F1 super licence, so he actually does know how to
drive.


> The media needs to maintain the rage and keep reporting self driving
> car
fatalities.

Rage is the wrong concept. Rather, the media needs to track the issue and
report it in some sort of sane, balanced manner - or at least as sane and
balanced as the media seem able to muster these days. We should do
likewise.


> Every time you receive a software upgrade in your garage the safety
> claims
made on your current version
> minus one are null and void. The game starts over again. You drive out
> the
gate. You roll the dice.
> Thousands of lines of code have been changed, the potential for
> screwups
is high, exacerbated by the massive
> complexity of these AI fuelled applications.

Only the first and last sentences here are strictly correct; the rest is to
some extent exaggerated.

First, for the driver it is not necessarily the case that the game starts
completely from zero. Tesla have stated that before release they have a lab
test program and that they test the software in their own test fleet. I have
no idea what this comprises in detail or how much testing is done, but at
the level at which we have visible information it is reasonably comparable
with normal automotive practice.

Likewise, to state that thousands of lines have been changed is hyperbole -
we have no visibility of what was changed. It might be thousands of lines,
it might be one (same effect), or it may be zero; for example, if it's an
update to a neural network it could be zero lines of code, with only the
trained weights changed (see the sketch below).
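
As an illustration (a hedged sketch with invented file and class names; we
know nothing about Tesla's actual update mechanism), an over-the-air update
could consist of nothing but a new weights file, with the code identical
before and after:

    import numpy as np

    class LaneScorer:
        """Toy stand-in for a learned component; the code never changes."""
        def __init__(self, weights_path: str):
            data = np.load(weights_path)   # weights shipped by the update
            self.w, self.b = data["w"], data["b"]

        def score(self, features: np.ndarray) -> float:
            # Identical logic in v1 and v2; behaviour differs only
            # because the loaded w and b differ.
            return float(features @ self.w + self.b)

    # The "software update" is just a different .npz file:
    # scorer = LaneScorer("weights_v2.npz")   # zero code lines changed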


> exacerbated by the massive complexity of these AI fuelled applications.

To my mind that is the one point that is really on the money - how do you
cope with the complexity of AI-based systems? You could potentially formally
prove the inference engines, and perhaps much of the rest of the system,
but as has been noted elsewhere, the specification of, say, a neural network
or other stochastically based system is in effect a combination of the size
and completeness of the training AND test sets.

Questions that need to be addressed are:
- how big is big enough for either set?
- how do you quantify diversity in either set?
- can you improve on a simple division into training/test sets, or are there
ways to use one set in both roles, e.g. develop the test set from the
training set by changing vehicle colours/backgrounds etc.? (A sketch of this
idea follows below.)
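
For the last question, a minimal sketch of the idea (transforms and names
invented for illustration; a real perception pipeline would of course work
on labelled camera images, not random arrays):

    import numpy as np

    rng = np.random.default_rng(0)

    def shift_colour(image: np.ndarray, delta: float) -> np.ndarray:
        """Crude stand-in for 'change the vehicle colour'."""
        return np.clip(image + delta, 0.0, 1.0)

    def derive_test_set(train_images):
        # Each training image yields a perturbed variant. Labels carry
        # over unchanged: recolouring a truck shouldn't change its class,
        # so a classifier that fails here has learned the wrong cue.
        return [shift_colour(img, rng.uniform(-0.2, 0.2))
                for img in train_images]

    train = [rng.random((64, 64, 3)) for _ in range(100)]  # toy "images"
    test = derive_test_set(train)  # same scenes, perturbed appearance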

> And furthermore, it takes years for an organisation to develop an
effective safety culture, matter cannot move
> faster than the speed of light nor can Tesla develop a culture that
> would
rival that of NASA or the aircraft
> industries in the short time they've been in business.

Neither of the two examples has an unblemished safety record: NASA has
overseen the loss of several man-carrying spacecraft, i.e. Apollo 1 and two
shuttles, and recently the aviation industry has given us battery fires in
the 787 and what could only be described as an imaginary safety case on
behalf of the Nimrod. Experience does not necessarily equate to safety; it's
something that has to be continually worked on. Whether or not Tesla have
it, I'm not sure we can say at this point in time.

Some of the things that do annoy me about reporting and discussion of this
event are:

1. The Tesla in question appears not to have noticed it was involved in a
crash, and to have continued down the road without its roof until it ran off
into a field. That doesn't seem an appropriate reaction to an accident, and
potentially it's worse than the original accident.

2. As is usual in the USA (but not Europe), the truck was not fitted with
side under-run bars, so a) to the Tesla's radar the road appeared clear (it
apparently classified the side of the vehicle as a road sign) and b) if the
vehicle had run into bars the impact would possibly have been less severe.
It has been commented elsewhere that the fitting of under-run bars would
prevent 250 deaths a year in the US, but after decades of trying it's still
not mandatory.



-----Original Message-----
From: systemsafety
[mailto:systemsafety-bounces at lists.techfak.uni-bielefeld.de] On Behalf Of
Les Chambers
Sent: 18 July 2016 12:15
To: 'Peter Bernard Ladkin'; systemsafety at lists.techfak.uni-bielefeld.de
Subject: Re: [SystemSafety] a public beta phase ???

PBL
John Naughton's article in the Guardian is not sensible; it is uninformed,
illogical and flat-out wrong. It was obviously written by a person who has
never had the experience of writing code that could kill someone, never had
to take a community of untrained operators and put a highly automated system
in their hands from a starting position of total ignorance.
Telling a driver to behave responsibly and keep their hands on the wheel is
a bit like telling a gambler to gamble responsibly. If a car can drive
itself, untrained drivers will take advantage of this feature and put too
much trust in what is currently an untrustworthy system. Drivers put through
focused training will take these warnings seriously. Your average bunny who
can afford a Tesla but has had no training will not.
The argument that 33,000 people are killed in accidents every year, so why
should we care, is also drivel. None of these fatalities occurred because a
driver trusted a system that couldn't be trusted.
And lastly, re: Naughton's comment that "... mainstream media will have to
change the way they report self driving cars. Every time a Tesla or a Google
car is involved in a crash, by all means report it. But also report all the
human error crashes that occurred on the same day." Not so. The media needs
to maintain the rage and keep reporting self-driving car fatalities. This is
probably the only way we will get the message through to the general public
that if you buy one of these cars you are taking a substantial risk. Every
time you receive a software upgrade in your garage, the safety claims made
on your current version minus one are null and void. The game starts over
again. You drive out the gate. You roll the dice. Thousands of lines of code
have been changed, the potential for screwups is high, exacerbated by the
massive complexity of these AI-fuelled applications. This is the new normal:
we are now beta testing safety-critical systems on the public. PBL, you
might as well put a clause in 61508 to okay this behaviour.
And furthermore, it takes years for an organisation to develop an effective
safety culture; matter cannot move faster than the speed of light, nor can
Tesla develop a culture that would rival that of NASA or the aircraft
industries in the short time they've been in business. System safety has one
source - motivated people - and it takes years to develop that motivation.
They and we will get there eventually, but in the meantime the public has a
right to be made aware of the risks they are taking with these vehicles.

Les

-----Original Message-----
From: systemsafety
[mailto:systemsafety-bounces at lists.techfak.uni-bielefeld.de] On Behalf Of
Peter Bernard Ladkin
Sent: Sunday, July 17, 2016 7:29 PM
To: systemsafety at lists.techfak.uni-bielefeld.de
Subject: Re: [SystemSafety] a public beta phase ???

A very sensible comment from John Naughton today in The Observer

https://www.theguardian.com/commentisfree/2016/jul/17/self-driving-car-crash-proves-nothing-tesla-autopilot

PBL

Prof. Peter Bernard Ladkin, Bielefeld, Germany MoreInCommon Je suis Charlie
Tel+msg +49 (0)521 880 7319  www.rvs-bi.de







_______________________________________________
The System Safety Mailing List
systemsafety at TechFak.Uni-Bielefeld.DE




