[SystemSafety] a public beta phase ???

José Miguel Faria jmf at safeperspective.com
Wed Jul 20 13:15:09 CEST 2016


The discussion has several times touched on what risk acceptance
level/principle we are willing to adopt:

On Sun, Jul 17, 2016 at 5:38 PM, Peter Bernard Ladkin <
ladkin at rvs.uni-bielefeld.de> wrote:

> On 2016-07-17 15:45 , Martyn Thomas wrote:
> > ..... Isn't it necessary to have adequate confidence (for some agreed
> meaning of 'adequate') that the new technology, with the "lessons learned"
> will have fewer accidents than the technology it replaces?
>
> That would be an MGS/GAMAB criterion. I could see others, for example a
> [...]


On Mon, Jul 18, 2016 at 12:15 PM, Les Chambers <les at chambers.com.au> wrote:

> The argument that 33,000 people are killed in accidents every year, so
> why should we care, is also drivel. None of these fatalities occurred
> because a driver trusted a system that couldn't be trusted.
>

On Tue, Jul 19, 2016 at 1:27 PM, Mike Ellims <michael.ellims at tesco.net> wrote:

> So if automated cars only reduce the fatal accident rate by 50% then that
> would still save 16,000 lives per year [...]
> The point here is that a system doesn't have to be perfect to be useful
> or, more importantly, a huge improvement on the current situation.


Quoting from [1]:
"Many papers on risk acceptance have stressed the difference between the
personal responsibility for one’s own safety and third-party
responsibility. The public accepts very high levels of risk in the pursuit
of individual hobbies, high levels of risk for activities with
predominately individual responsibility (like road traffic), low levels of
risk for predominately third-party responsibility (like public transport),
and very low levels for exclusively third-party responsibility (like living
near chemical or nuclear power plants). Some risk acceptance proposals are
ignorant of this and suggest that the individual risk due to public
transport should equal the individual risk due to any other hazardous
technology, say 1/20 * minimum endogenous mortality. This neglects the
modal differences in responsibility and the fact that rail traffic is 75
times safer than road traffic (in Germany, 1981, based on passenger
fatalities per travel distance [...]"
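
As a back-of-the-envelope check on the proposal criticized in that quote
(a sketch; the figure of roughly 2e-4 fatalities per person-year for the
minimum endogenous mortality is an assumption on my part, though it is the
one commonly cited in the railway safety literature):

    # MEM-style criterion from the quote: a single technical system may
    # add at most 1/20 of the minimum endogenous mortality.
    MEM = 2e-4  # fatalities per person-year (assumed, commonly cited figure)

    risk_target = MEM / 20
    print(f"{risk_target:.0e} fatalities per person-year")
    # -> 1e-05: at most one fatality per 100,000 exposed persons per year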


Personally, it doesn't *feel* right to me that we content ourselves with
setting a target of 'at least as good as'/'globalement au moins aussi bon'
(GAMAB, 'globally at least as good'). Just as the railway and aviation
industries have grown to reach levels of safety that are orders of magnitude
above (human) road traffic, so should automated road traffic.

How would society, and you and I, react if the rail industry suddenly came
up with something like: 'Currently we are X times safer than road traffic,
so it's more than enough to set the next projects a target of only 0.6 X,
lower than our usual one; we'll still be significantly safer than road
traffic'? It can't be done. That's why railway projects are SIL 4. If that
level has been achieved and can be achieved, but you targeted your new rail
transportation system at, say, SIL 2 and then had a major accident, how
would you defend against an accusation of negligence?
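
To put numbers on the size of that concession (a sketch using the IEC 61508
bands for high-demand/continuous mode, i.e. the upper bounds on the tolerable
probability of a dangerous failure per hour, PFH):

    # Upper bound of the tolerable dangerous-failure rate per hour, by SIL
    PFH_UPPER = {1: 1e-5, 2: 1e-6, 3: 1e-7, 4: 1e-8}

    # Retargeting a rail project from SIL 4 to SIL 2 tolerates a
    # dangerous-failure rate up to 100 times higher.
    print(PFH_UPPER[2] / PFH_UPPER[4])  # -> 100.0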

If, on entering an automated car, the "guarantee" given is that the drive
will be statistically as safe as an average human being on an average day,
then all I need in order to feel safer is to tell myself: 'OK, never mind,
I'll drive myself and be twice as attentive to the road as usual; that way
I'm already twice as safe as with this machine'.
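
In toy-model terms (a sketch, under the strong and obviously simplistic
assumption that attentiveness scales my risk linearly):

    average_human_risk = 1.0               # normalized average-driver risk
    autopilot_risk = average_human_risk    # the "as safe as average" guarantee
    attentive_me = average_human_risk / 2  # twice as attentive -> half the risk

    print(autopilot_risk / attentive_me)   # -> 2.0
    # The machine carries twice the risk of the attentive human it replaces.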

So, from the very start, autopilots should target a risk level that is
orders of magnitude safer than a human driver. Otherwise, we are just
heading towards engineering negligence.

All that said, how to achieve it is a whole different story. The road
environment is orders of magnitude more complex and less constrained than
the railway environment. And how will autopilots go through the learning
process before incorporating all the lessons learned? It could be that this
ends up excusing cars' autopilots, both legally and in people's minds, so
that the quote from [1] would not apply to automated road traffic. It will
be interesting to see what the forthcoming years bring, but I'd definitely
prefer to have been able to finish this email at the previous paragraph. :)

José

[1] H. H. Kron, "On the Evaluation of Risk Acceptance Principles".


On Wed, Jul 20, 2016 at 12:10 PM, Peter Bernard Ladkin <
ladkin at rvs.uni-bielefeld.de> wrote:

>
>
> On 2016-07-20 12:30 , Matthew Squair wrote:
> > .... Aviation stands as a classic example of how automating traditional
> > operator tasks and radically changing the role of the operator can have
> profound consequences,
>
> What "profound consequences" are you thinking of here?
>
> > ... Likewise in implementing vigilance systems, the
> > rail industry has a lot of experience (most of it unhappy) in how naive
> vigilance system design
> > doesn't actually achieve what you want.
>
> What unhappy experience are you thinking of here? Whose rail industry?
>
> PBL