[SystemSafety] a public beta phase ???

Matthew Squair mattsquair at gmail.com
Wed Jul 20 12:30:50 CEST 2016


Wow, so many issues and perspectives :)

So, let's see...

What seems to be emerging is a conflict between the personal/professional
perspective on risk (see Les's comments) and the societal view of risk (see
Mike's comments). From an individual perspective one can certainly be
concerned about whether Tesla are being duly diligent, but from a societal
perspective the implementation of these sorts of technologies actually
reduces the overall death rate, and is therefore a (qualified) 'good thing'.
This is not really uncommon; no safety system I know of is ever perfect. But
just because, say, smoke detectors fail from time to time doesn't mean we
banish them - or if we do, then more fool us for falling victim to the
prosecutor's fallacy.

However, that doesn't mean I think we can or should let Tesla off the hook
easily; after all, we have been down this route before. Aviation stands as a
classic example of how automating traditional operator tasks and radically
changing the role of the operator can have profound consequences, so perhaps
Tesla should be reading the history books? Likewise, in implementing
vigilance systems the rail industry has a lot of experience (most of it
unhappy) of how naive vigilance system design doesn't actually achieve what
you want. Again, lessons learned but not transferred.

Final thought: given it's (apparently) a small cadre of users who are
flat-hatting around, perhaps the simplest solution is for Tesla to find them
and rescind their privileges via software update? Disciplinary action is,
after all, a traditional response to extreme violations of procedure.
Basically, ground the %^$*9ers.


https://criticaluncertainties.com/2016/07/02/teslas-autopilot-and-automation-surprises/

On Tue, Jul 19, 2016 at 10:27 PM, Mike Ellims <michael.ellims at tesco.net>
wrote:

> Les,
>
> > There will never be enough data. If we wait around for it to appear more
> > people will die. What we need to do is apply judgement. The judgement the
> > engineering profession has accumulated over the last 50 years of computer
> > automation design. What we need is lead indicators. Accident data is a
> > lag indicator. Dead bodies are lag indicators. They are bad indicators.
> > Judgement is a lead indicator. When we see an organisation violating the
> > fundamental rules of safe control system design we speak out VERY LOUDLY.
>
> I have two arguments here. The first, which I pointed out yesterday, is
> that judgement can only go so far; Boeing's little epic deploying lithium
> batteries in the Dreamliner demonstrates this, I believe. To some degree
> judgement is based on experience, and when we are doing something that has
> not been done before (which sometimes is the whole point) judgement alone
> can only get you so far.
>
> The second argument is that we are entering an area where large-scale AI
> systems are being developed and deployed. As this hasn't been done on
> anywhere near this scale to date, not all the lessons that have been hard
> learnt are going to be directly applicable. To some degree I would argue
> that what we usually consider to be tried and trusted development methods
> may no longer be up to the job. If you consider the number of possible
> different situations an automated cruise control system as deployed by
> Mercedes, BMW, Tesla etc. can encounter in terms of object identification,
> then for human reasoning the situation quickly becomes intractable. Perhaps
> data collection on a massive scale, as per Google and Tesla, is the only
> way forward. Quite frankly, at this juncture I'm not sure we can make that
> call.
>
> > We can't predict the future, so because we're human we make up
> > narratives to convince ourselves everything will be okay. That's what
> > this journalist has done. He says it's okay that 33,000 people are
> > killed every year. That's normal, don't worry. He is contributing to
> > what has become a popular narrative, community belief, pathological
> > groupthink.
>
> I don't think that is a narrative at all. The point being made was that in
> 2014 there were 32,675 deaths due to motor accidents in the USA alone (I
> looked it up), which is down from a peak of 43,510 in 2005 despite there
> being more vehicles on the road. To put that in context, total American
> combat deaths for the whole Vietnam war were 58,300.
>
> The point is that we (as a society) seem to accept the slaughter of a
> large number of people year in, year out, and accept this as an unavoidable
> consequence of the convenience of having a car. Society has been
> conditioned (to some extent) to accept this because that is just the way
> things are - how did it get this bad? One crash at a time. The argument
> being made was that we are making a huge fuss over one person in one car,
> and what we should actually be doing is making the same fuss over each and
> every single person killed on the road and asking ourselves why we think
> this is acceptable.
>
> Yes, you are correct that any unnecessary death is unacceptable; however,
> how much more unacceptable is 32,000 deaths per year that we don't really
> seem to notice? In 2014, I would hazard a guess, none of the drivers
> involved in a fatal accident was in a self-driving vehicle. Despite what we
> all generally think (70% of drivers think they are above average), when it
> comes down to it most of us are pretty rubbish at times. The Institute of
> Advanced Motorists analysed accident data from 2005-2009 to come up with
> the following reasons for road accidents.
>
> 65% driver error or reaction  - 34% loss of control
>                               - 20% failing to look properly
>                               - 12% poor turn or manoeuvre
>                               - 11% failed to judge other person's path or speed
> 31% injudicious action        - 15% travelled too fast for the conditions
>                               - 13% exceeded speed limit
>                               -  2% disobeyed give-way or stop sign
>
> Automated vehicles *should* be able to avoid a large number of those
> errors, e.g. failing to look properly. So if automated cars only reduced
> the fatal accident rate by 50%, that would still save 16,000 lives per year
> - 160,000 per decade just in the USA. If they were in turn responsible for
> an additional 10% of fatal accidents, that would still be a net reduction
> of 40%, or 12,800 lives (see the sketch below).
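
A quick back-of-the-envelope check of those figures - just a sketch in
Python, using the rounded 32,000 baseline and the stated assumptions of 50%
of fatal accidents avoided and an extra 10% caused:

baseline_deaths = 32_000           # approx. US road deaths per year (rounded from 32,675)
avoided = 0.50 * baseline_deaths   # assumption: automation prevents half of fatal accidents
caused = 0.10 * baseline_deaths    # assumption: automation itself causes an extra 10%

net_saved = avoided - caused                  # 12,800 lives per year
net_reduction = net_saved / baseline_deaths   # 0.40, i.e. a 40% net reduction
per_decade = 10 * avoided                     # 160,000 per decade at the gross 50% figure
print(net_saved, net_reduction, per_decade)   # 12800.0 0.4 160000.0

Note the 160,000-per-decade figure uses the gross 50% saving; the net figure
would be 128,000 per decade.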
>
> The point here is that a system doesn't have to be perfect to be useful
> or, more importantly, a huge improvement on the current situation.
>
> > Re: software upgrades. I don't care if it's one line or 10,000 lines.
> > You can introduce a bug with one line that can bring a system down. One
> > of my engineers once nearly destroyed a 10,000,000 dollar chemical
> > reactor with a single bad configuration word in memory. Proven-in-use
> > arguments are not relevant to software. It is just too fluid. We should
> > stop propagating this BS. Another bad, unsafe narrative.
>
> For an AI system (or partial AI system) interacting with an almost
> infinitely complex environment, is there a tractable alternative? If so,
> what is it?
>
> Please note that I am not endorsing a free-for-all. I expect, as you do
> and as everyone should, that whichever companies are introducing these
> systems apply best conventional practice to the design of those systems
> WHERE THEY CAN, i.e.
> - that they perform appropriate, rigorous hazard and failure analysis
> - that they follow best practice for the design of the system architecture
> and base operating software
>
> For the parts of the system that require learning, whether it be object
> detection or whatever, I expect them to follow a well-reasoned and defined
> process for developing those parts of the system, and likewise a
> well-reasoned and defined process for approving those systems for release.
>
> > And as for maintaining the rage, I'm sorry that's the only way human
> > beings will change course - through some emotional connection to a
> > better idea, often triggered by horrendous events like the unnecessary
> > death of a human being.
>
> I would agree with you that an emotional connection is necessary (and
> good), but I would argue that rage leads to solutions such as Brexit, Trump
> (or Drumph if you prefer) and Pauline Hanson, none of which may be a good
> thing.
>
> I would also argue that getting angry at a single accident like this is
> not helpful, as there is far more to get angry about, starting with the
> huge number of people who die each year in accidents and moving on to the
> number of excess deaths due to air pollution in London, which at 9,500 per
> year exceeds the number of people killed in road accidents in the UK by a
> factor of 4. Don't get me started about climate change...
>
> However, it's very hard to solve a problem like air pollution with rage
> alone - it requires careful thought and long-term commitment (which is
> where the emotional aspect comes in).
>
>
>
> -----Original Message-----
> From: Les Chambers [mailto:les at chambers.com.au]
> Sent: 19 July 2016 03:51
> To: 'Mike Ellims'; 'Peter Bernard Ladkin';
> systemsafety at lists.techfak.uni-bielefeld.de
> Subject: RE: [SystemSafety] a public beta phase ???
>
> Mike
>
> > Do you have any evidence to back up that claim?
> There are many videos on the web depicting drivers grabbing the wheel at
> the
> last moment when the lane marker sensors fail. There was a video depicting
> a
> man sitting in the back seat of his car. I can't find it though.
> My point is that all automation systems have a mission. If they can't
> perform that mission there must be a smooth hand-off to manual control. Any
> automation that requires a split-second hand-off (or you die) is a
> dangerous object and should not be in the hands of untrained operators.
> Formula One drivers, maybe.
>
> > But we'd never know because a) the data hasn't been collected on a
> > routine
> basis
>
> There will never be enough data. If we wait around for it to appear more
> people will die. What we need to do is apply judgement. The judgement the
> engineering profession has accumulated over the last 50 years of computer
> automation design. What we need is lead indicators. Accident data is a lag
> indicator. Dead bodies are lag indicators. They are bad indicators.
> Judgement is a lead indicator. When we see an organisation violating the
> fundamental rules of safe control system design we speak out VERY LOUDLY.
>
> We can't predict the future, so because we're human we make up narratives
> to convince ourselves everything will be okay. That's what this journalist
> has done. He says it's okay that 33,000 people are killed every year.
> That's normal, don't worry. He is contributing to what has become a popular
> narrative, community belief, pathological groupthink.
> My narrative is: It's NOT okay if one single death is a result of bad
> design. It's NOT okay if someone scratches their finger due to bad design.
> This is the engineering discipline we admire. This is what sets us apart
> from the un-engineer. Tesla's operational concept of "be careful but ... Oh
> quick grab the wheel" is bad design. We've known for years it's bad design.
> It needs to stop. OK!
>
> Re: software upgrades. I don't care if it's one line or 10,000 lines. You
> can introduce a bug with one line that can bring a system down. One of my
> engineers once nearly destroyed a 10,000,000 dollar chemical reactor with a
> single bad configuration word in memory. Proven-in-use arguments are not
> relevant to software. It is just too fluid. We should stop propagating this
> BS. Another bad, unsafe narrative.
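
A purely hypothetical sketch of that point (invented names and numbers,
nothing to do with the actual reactor incident): the code below is unchanged
and arguably 'proven in use', yet one wrong configuration word silently
removes the protection it is supposed to provide.

# Hypothetical example only - the control logic is identical in both cases.
CONFIG_GOOD = {"max_temp_c": 350}    # trip above 350 degC
CONFIG_BAD  = {"max_temp_c": 3500}   # one mistyped word: the trip never fires

def over_temperature_trip(reading_c, config):
    # Same "proven" one-line check; the safety behaviour depends entirely
    # on the configuration data it is given.
    return reading_c > config["max_temp_c"]

assert over_temperature_trip(400, CONFIG_GOOD) is True
assert over_temperature_trip(400, CONFIG_BAD) is False   # protection silently lost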
>
> And as for maintaining the rage, I'm sorry that's the only way human beings
> will change course - through some emotional connection to a better idea,
> often triggered by horrendous events like the unnecessary death of a human
> being.
>
> Blow, winds, and crack your cheeks! rage! blow!
> You cataracts and hurricanoes, spout
> Till you have drench'd our steeples, drown'd the cocks!
> You sulphurous and thought-executing fires,
> Vaunt-couriers to oak-cleaving thunderbolts,
> Singe my white head! And thou, all-shaking thunder,
> Smite flat the thick rotundity o' the world!
> Crack nature's moulds, all germens spill at once,
> That make ingrateful man!
>
> It took rage over the death of 30 children in the Armagh rail disaster to
> mobilise a government to enforce safety rules in the rail industry. We
> should know better these days - how to manage technology churn, that is.
> What we don't need is engineers contributing to the wrong narrative.
>
> ONE DEATH IS NOT OKAY. DO YOU HEAR ME SON!
>
> Please adjust your narrative accordingly.
>
> Les
>
> -----Original Message-----
> From: Mike Ellims [mailto:michael.ellims at tesco.net]
> Sent: Tuesday, July 19, 2016 12:32 AM
> To: 'Les Chambers'; 'Peter Bernard Ladkin';
> systemsafety at lists.techfak.uni-bielefeld.de
> Subject: RE: [SystemSafety] a public beta phase ???
>
> Good afternoon Les,
>
> > The argument that 33,000 people are killed in accidents every year, so
> > why should we care, is also drivel. None of these fatalities occurred
> > because a driver trusted a system that couldn't be trusted.
>
> Do you have any evidence to back up that claim? For example, can you show
> that no one was harmed because a lane-keep feature failed to keep a vehicle
> in lane, or that no one was killed because they became over-reliant on
> features such as emergency brake assist or any one of the dozen or so
> driver-assist systems that the current crop of vehicles have (see video
> below)? I know it's nearly impossible to prove a negative, but I'm also
> pretty sure that if we had access to all the relevant data we'd probably
> find someone, somewhere, who rolled their vehicle because of overconfidence
> in their ESC system. But we'd never know, because a) the data hasn't been
> collected on a routine basis and b) vehicle manufacturers don't in general
> have a mechanism to collect such data. At least in this regard Tesla seems
> to be ahead of the game, and I suspect that if Tesla hadn't asked the NHTSA
> to investigate it's possible we may never have found out about this either.
>
> The problem here is that absence of evidence isn't the same as evidence of
> absence. Current systems may have issues we (or anyone else) don't know
> about, and I know of at least one failure mechanism for ABS that would fool
> most implementations from 10 years back (it may not now - I don't know),
> but ABS (a driver aid) is mandatory in Europe, as is ESC, because
> statistically it's believed to save lives. Neither provides a guarantee of
> safety or that the system will work all the time, but these days I wouldn't
> buy a car without them.
>
> The following video demonstrates some of the benefits of standard systems
> (ABS, ESC,TC) https://www.youtube.com/watch?v=wR1SSxpKitE
>
> And I found this, which looks at AEB (autonomous emergency braking), with
> some successes and some failures.
> https://www.youtube.com/watch?v=E_ZNG8cmnlw
>
> Note the driver involved in the tests was a British Touring Car champion
> and has held an F1 super licence, so he actually does know how to drive.
>
>
> > The media needs to maintain the rage and keep reporting self driving car
> > fatalities.
>
> Rage is the wrong concept. Rather the media needs to track the issue and
> report it in some sort of sane balanced manner, or at least as sane and
> balanced as the media seem to be able to muster these days. We should do
> likewise.
>
>
> > Every time you receive a software upgrade in your garage the safety
> > claims made on your current version minus one are null and void. The
> > game starts over again. You drive out the gate. You roll the dice.
> > Thousands of lines of code have been changed, the potential for screwups
> > is high, exacerbated by the massive complexity of these AI fuelled
> > applications.
>
> Only the first and last sentences here are strictly correct; the rest is
> to some extent exaggerated.
>
> First, for the driver it is not necessarily the case that the game starts
> completely from zero. Tesla have stated that before release they have a lab
> test program and that they test the software in their own test fleet. I
> have no idea what this comprises in detail or how much testing is done, but
> at the level at which we have visible information it is reasonably
> comparable with normal automotive practice.
>
> Likewise, to state that 1000s of lines have been changed is hyperbole - we
> have no visibility of what was changed. It might be 1000s of lines, it
> might be one (same effect), or it may be zero; for example, if it's an
> update to a neural network then it could be zero code lines.
>
>
> > exacerbated by the massive complexity of these AI fuelled applications.
>
> To my mind that is the one point really on the money - how do you cope
> with the complexity of AI-based systems? You could potentially formally
> prove the inference engines and perhaps much of the rest of the system,
> but as has been noted elsewhere, the specification of, say, a neural
> network or other stochastically based system is a combination of the size
> and completeness of the training AND test sets.
>
> Questions that need to be addressed are:
> - how big is big enough for either set?
> - how do you quantify diversity in either set?
> - can you improve on a simple division into training/test sets, or are
> there ways to use one set in both roles, e.g. developing the test set from
> the training set by changing vehicle colours/backgrounds etc.? (A sketch of
> that idea follows below.)
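
Purely as an illustration of that colour/background idea - a minimal sketch
assuming Python with NumPy, not something Tesla or anyone else is known to
do, and with function and parameter names invented for the example:

import numpy as np

def colour_jitter(image, rng, max_brightness=0.2, max_channel_gain=0.15):
    # Derive a perturbed copy of an RGB image (H x W x 3, floats in [0, 1]).
    # A random brightness shift plus per-channel gain means one training
    # image can yield many slightly different "new" test images.
    brightness = rng.uniform(-max_brightness, max_brightness)
    gains = 1.0 + rng.uniform(-max_channel_gain, max_channel_gain, size=3)
    return np.clip(image * gains + brightness, 0.0, 1.0)

# Example: three derived test variants per training image (stand-in data).
rng = np.random.default_rng(0)
training_images = [rng.random((64, 64, 3)) for _ in range(5)]
derived_test_set = [colour_jitter(img, rng)
                    for img in training_images for _ in range(3)]

Whether variants produced this way count as genuinely independent test
evidence, or are just more of the training distribution, is exactly the open
question the list above is asking.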
>
> > And furthermore, it takes years for an organisation to develop an
> > effective safety culture; matter cannot move faster than the speed of
> > light, nor can Tesla develop a culture that would rival that of NASA or
> > the aircraft industries in the short time they've been in business.
>
> Neither of the two examples has an unblemished record as regards safety:
> NASA have overseen the loss of several man-carrying spacecraft, i.e. Apollo
> 1 and two shuttles. Recently the aviation industry has given us battery
> fires in the 787 and what could only be described as an imaginary safety
> case on behalf of the Nimrod. Experience does not necessarily equate to
> safety; it's something that has to be continually worked on. Whether or not
> Tesla have it, I'm not sure we can say at this point in time.
>
> Some of the things that do annoy me about reporting and discussion of this
> event are:
>
> 1. The Tesla in question appears not to have noticed it was involved in a
> crash and to have continued down the road without its roof until it ran off
> into a field. That doesn't seem to be an appropriate reaction to an
> accident, and potentially it's worse than the original accident.
>
> 2. As is usual in the USA (but not Europe), the truck was not fitted with
> side impact bars, so a) to the Tesla's radar the road appeared clear (it
> apparently classified the side of the vehicle as a road sign) and b) if the
> vehicle had run into the bars the impact would possibly have been less
> severe. It has been commented elsewhere that the fitting of under-run bars
> would prevent 250 deaths a year in the US, but after decades of trying it's
> still not mandatory.
>
>
>
> -----Original Message-----
> From: systemsafety
> [mailto:systemsafety-bounces at lists.techfak.uni-bielefeld.de] On Behalf Of
> Les Chambers
> Sent: 18 July 2016 12:15
> To: 'Peter Bernard Ladkin'; systemsafety at lists.techfak.uni-bielefeld.de
> Subject: Re: [SystemSafety] a public beta phase ???
>
> PBL
> John Naughton's article in the Guardian is not sensible; it is uninformed,
> illogical and flat out wrong. It was obviously written by a person who has
> never had the experience of writing code that could kill someone; never had
> to take a community of untrained operators and put a highly automated
> system in their hands from a starting position of total ignorance.
> Telling a driver to behave responsibly and keep their hands on the wheel is
> a bit like telling a gambler to gamble responsibly. If a car can drive
> itself, untrained drivers will take advantage of this feature and put too
> much trust in what is currently an untrustworthy system. Drivers put
> through focused training will take these warnings seriously. Your average
> bunny who can afford a Tesla but has had no training will not.
> The argument that 33,000 people are killed in accidents every year, so why
> should we care, is also drivel. None of these fatalities occurred because a
> driver trusted a system that couldn't be trusted.
> And lastly, RE: Naughton's comment that "... mainstream media will have to
> change the way they report self driving cars. Every time a Tesla or a
> Google car is involved in a crash, by all means report it. But also report
> all the human error crashes that occurred on the same day." Not so. The
> media needs to maintain the rage and keep reporting self driving car
> fatalities. This is probably the only way we will get the message through
> to the general public that if you buy one of these cars you are taking a
> substantial risk. Every time you receive a software upgrade in your garage
> the safety claims made on your current version minus one are null and void.
> The game starts over again. You drive out the gate. You roll the dice.
> Thousands of lines of code have been changed, the potential for screwups is
> high, exacerbated by the massive complexity of these AI fuelled
> applications. This is the new normal: we are now beta testing safety
> critical systems on the public. PBL, you might as well put a clause in
> 61508 to okay this behaviour.
> And furthermore, it takes years for an organisation to develop an
> effective safety culture; matter cannot move faster than the speed of
> light, nor can Tesla develop a culture that would rival that of NASA or the
> aircraft industries in the short time they've been in business. System
> safety has one source: motivated people, and it takes years to develop that
> motivation. They and we will get there eventually, but in the meantime the
> public has a right to be made aware of the risks they are taking with these
> vehicles.
>
> Les
>
> -----Original Message-----
> From: systemsafety
> [mailto:systemsafety-bounces at lists.techfak.uni-bielefeld.de] On Behalf Of
> Peter Bernard Ladkin
> Sent: Sunday, July 17, 2016 7:29 PM
> To: systemsafety at lists.techfak.uni-bielefeld.de
> Subject: Re: [SystemSafety] a public beta phase ???
>
> A very sensible comment from John Naughton today in The Observer
>
>
> https://www.theguardian.com/commentisfree/2016/jul/17/self-driving-car-crash-proves-nothing-tesla-autopilot
>
> PBL
>
> Prof. Peter Bernard Ladkin, Bielefeld, Germany MoreInCommon Je suis Charlie
> Tel+msg +49 (0)521 880 7319  www.rvs-bi.de
>
>



-- 
*Matthew Squair*
BEng (Mech) MSysEng
MIEAust CPEng

Mob: +61 488770655
Email: MattSquair at gmail.com
Website: www.criticaluncertainties.com

