[SystemSafety] Interesting new publication about safety for autonomous vehicles

Bruce Hunter brucer.hunter at gmail.com
Thu Jul 11 00:35:26 CEST 2019


Hi Eric,

Just a general comment on human reliability. The popular heuristic for
human dependability is about 1 error in 10 actions for non-disciplined
activity (Human Error Probability, IEC 62508), which seems to line up with
the 10% low-demand failure rate figure. Lower error rates can be claimed
with intensive training, checklists and well-designed HMI.
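As a back-of-envelope illustration (my own sketch, not from any standard
text; the band boundaries below are the IEC 61508-1 low-demand targets, and
the 0.1 figure is the heuristic above):

```python
# Sketch: where the ~0.1 human error probability (HEP) heuristic sits
# relative to the IEC 61508 low-demand SIL bands (average probability
# of failure on demand, PFDavg). Band boundaries from IEC 61508-1.

SIL_PFD_BANDS = {
    1: (1e-2, 1e-1),  # SIL 1: 10^-2 <= PFDavg < 10^-1
    2: (1e-3, 1e-2),
    3: (1e-4, 1e-3),
    4: (1e-5, 1e-4),
}

def sil_for_pfd(pfd):
    """Return the SIL whose band contains pfd, or 0 if none."""
    for sil in sorted(SIL_PFD_BANDS, reverse=True):
        lo, hi = SIL_PFD_BANDS[sil]
        if lo <= pfd < hi:
            return sil
    return 0

# The heuristic human, at 1 error in 10 actions, sits right at the
# SIL 1 boundary and does not quite make the band:
print(sil_for_pfd(0.1))   # just outside SIL 1
print(sil_for_pfd(0.05))  # training/checklists/HMI might get you here
```

Which is the point: an unaided human is at best borderline SIL 1, and the
training, checklist and HMI improvements are what push a claim inside the
band.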

Comparing a technological failure rate to a biological one is fraught
with difficulty. Although we are fallible in many ways, humans have an
evolutionary advantage of individuality, which complicates averaging
dependability: unlike systematic failures, people will not always react
the same way to the same stimulus.

I think governments and the public are being driven (no pun intended) by
the attractiveness of an apparent improvement in vehicle safety from
replacing the driver with cognitive technology.

Technology can also have unintended consequences when it replaces some
human safety responsibilities. Greg Ip's book, Foolproof, has some
interesting lessons learnt on this subject.

Autonomous vehicle safety does seem to lack the rigour applied to
driverless trains, despite lacking the advantage of being confined to
tracks.

We will see whether autonomous vehicles eventually deliver the safety
improvements promised by the removal of human drivers. I guess we did not
end up with people walking in front of horseless carriages waving lamps...
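As a numeric aside (my sketch, not from the thread): Eric's high demand /
continuous mode comparison below can be checked against the IEC 61508-1
per-hour bands, where SIL 1 requires fewer than 10^-5 dangerous failures
per hour, i.e. roughly one per 11 years of continuous operation.

```python
# Sketch: IEC 61508 high-demand / continuous-mode SIL bands, expressed
# as dangerous failures per hour (PFH), converted to mean years between
# dangerous failures for intuition. Band boundaries from IEC 61508-1.

HOURS_PER_YEAR = 8760

# Upper bound of each band: SIL n requires PFH < 10^-(n+4)
SIL_PFH_UPPER = {1: 1e-5, 2: 1e-6, 3: 1e-7, 4: 1e-8}

def years_between_failures(pfh):
    """Mean time to dangerous failure at rate pfh per hour, in years."""
    return 1.0 / (pfh * HOURS_PER_YEAR)

for sil, pfh in SIL_PFH_UPPER.items():
    print(f"SIL {sil}: PFH < {pfh:.0e} "
          f"(> {years_between_failures(pfh):.1f} years between failures)")
```

At the SIL 1 boundary this works out to about 11.4 years of continuous
operation per dangerous failure, which matches the ~11 year figure quoted
below.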

Best regards,
Bruce Hunter

On Wed, 10 Jul 2019 at 11:22, Eric Scharpf <escharpf at exida.com> wrote:

> Hello Thierry and Paul,
>
> Although the analogy to the IEC 61508/61511/62061 SIL terminology can be
> helpful, the case of humans and cars may be better served by comparison to
> high demand or continuous mode failure. With a SIL 1 performance
> threshold of < 1 dangerous failure per 100,000 hours (~11 years), for
> safety instrumented functions acting more than once per year (or
> mathematically more than once per two proof tests), we should have a better
> correlation with the varying levels of human driving skill than with the
> idea of a failure probability of < 10% for a low demand emergency action.
> Then, when you consider all of the other potential independent (and not so
> independent) protection layers to prevent any accident, injury causing
> accidents or fatal accidents, you can make a more effective analysis to
> compare the human and automated capabilities.
>
> I do not begin to have the experience with human or autonomous driving to
> conduct this analysis at the level needed, but the initial assumptions
> which dictate the risk model should be carefully considered.
>
> Best regards,
> Eric Scharpf,
> Partner, exida Asia Pacific
>
>
> -----Original Message-----
> From: systemsafety [mailto:
> systemsafety-bounces at lists.techfak.uni-bielefeld.de] On Behalf Of paul
> cleary
> Sent: Wednesday, 10 July 2019 6:45 AM
> To: Coq, Thierry <Thierry.Coq at dnvgl.com>
> Cc: systemsafety at lists.techfak.uni-bielefeld.de
> Subject: Re: [SystemSafety] Interesting new publication about safety for
> autonomous vehicles
>
> Hi Thierry
>
> What is meant by 0.1 risk reduction, do you mean E-01?
>
>  Could you kindly explain?
>
> Regards
> Paul Cleary
>
> Sent from my iPhone
>
> > On 9 Jul 2019, at 19:20, Coq, Thierry <Thierry.Coq at dnvgl.com> wrote:
> >
> > Hi all,
> >
> > Reading the paper, I wonder if the requirement "as safe as the average
> > driver" is good enough.
> > In usual functional safety systems, humans may claim a 0.1 reduction of
> > risk where safety-related actions are needed. On the other hand, SIL 1
> > starts at this 0.1 reduction and SIL 4 is at a 0.0001 reduction of risk.
> >
> > In other words, I would expect the community to develop automated
> > systems that are much safer than the average driver: safer than 99.99%
> > of drivers, myself included, and not just safer than 50% of drivers. Or
> > it could be that the system of systems (human + automated system) could
> > achieve that rate of risk reduction, but not the automated systems on
> > their own. On the same topic, there is the question of "giving back
> > control to the human driver with enough time to take action". What would
> > that time be? Some research seems to indicate that it should be many
> > seconds long, especially if the human driver needs to acquire situational
> > awareness (i.e. taking his/her eyes off the movie being played on the
> > screen)...
> >
> > What do you think?
> > Best regards,
> >
> > Thierry
> > -----Original Message-----
> > From: systemsafety
> > <systemsafety-bounces at lists.techfak.uni-bielefeld.de> On Behalf Of
> > Paul Sherwood
> > Sent: vendredi 5 juillet 2019 11:35
> > To: systemsafety at lists.techfak.uni-bielefeld.de
> > Subject: [SystemSafety] Interesting new publication about safety for
> > autonomous vehicles
> >
> > A friend mentioned this Intel press release [1] to me earlier this week,
> and then others pointed me at BMW's equivalent [2]. I believe the
> referenced paper [3] will be of interest to some folks on this list.
> >
> > br
> > Paul
> >
> > [1]
> > https://www.businesswire.com/news/home/20190702005370/en/Automotive-Mobility-Industry-Leaders-Publish-First-of-its-Kind-Framework
> > [2]
> > https://www.press.bmwgroup.com/global/article/detail/T0298103EN/automotive-and-mobility-industry-leaders-publish-first-of-its-kind-framework-for-safe-automated-driving-systems
> > [3]
> > https://newsroom.intel.com/wp-content/uploads/sites/11/2019/07/Intel-Safety-First-for-Automated-Driving.pdf
> > _______________________________________________
> > The System Safety Mailing List
> > systemsafety at TechFak.Uni-Bielefeld.DE
> > Manage your subscription:
> > https://lists.techfak.uni-bielefeld.de/mailman/listinfo/systemsafety
> >
>