[SystemSafety] Open Autonomous Safety concept = hope or idealism?

Martin, BJ BJ.Martin at novasystems.com
Thu May 3 01:27:19 CEST 2018


My apologies for not seeing and acknowledging John Howard's first kick on this reference from last week.

My take on all this is that it's following a challenge path similar to 'whether/when/how to regulate UAS'.  Jurisdictional authorities are being pressed by governments to 'put up or shut up' on regulating, so that the potential societal and safety benefits (reduced human error or risk exposure) of these technologies can be realised.

Without industry consensus or authority consensus on benchmarks for acceptable safety, every developer is inventing their own safety wheel and no single authority is equipped (in resources, methodology or competency) to assess its adequacy. Even a 'due diligence' approach is difficult to define and grasp. All regulatory authorities understand their traditional rule sets and processes, but in order to function as a government body they need a tractable process that they can staff to - not research projects. Quite the conundrum: such promise, clearly attainable technology, and no defensible basis on which to judge 'what is safe enough' and approve on the basis of compliance.

The closest we've come is what I would term a balance-of-risk approach being devised by JARUS <http://jarus-rpas.org/content/jar-doc-06-sora-package> for Specific Operations Risk Assessments (SORA) on UAS: holistic bow-tie risk models of standardised operational risk scenarios, plus logical methods for classifying graduated assurance and integrity requirements for the safety barriers, drawn from a mix of design and operational measures. Not yet tested in anger in toto (to my knowledge), but it's certainly a hell of a lot more robust than what's been used thus far in each one-off case.
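To make the flavour of that 'graduated assurance and integrity' classification concrete, here is a deliberately toy sketch in Python. It is not the actual SORA tables or terminology - the class values and lookup entries are invented - it only illustrates the general shape of a scheme in which a ground-risk class and an air-risk class for a standardised scenario combine to select the level of assurance demanded of the safety barriers (a mix of design and operational mitigations).

# Toy illustration only -- NOT the JARUS SORA tables or terminology.
# The class values and lookup entries below are invented, purely to show
# the shape of a graduated classification scheme.

from enum import IntEnum


class AssuranceLevel(IntEnum):
    """Graduated assurance/integrity requirement (illustrative scale only)."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3


# Hypothetical lookup: (ground_risk_class, air_risk_class) -> required level.
_REQUIRED_LEVEL = {
    (1, 1): AssuranceLevel.LOW,
    (1, 2): AssuranceLevel.MEDIUM,
    (2, 1): AssuranceLevel.MEDIUM,
    (2, 2): AssuranceLevel.HIGH,
}


def required_assurance(ground_risk_class: int, air_risk_class: int) -> AssuranceLevel:
    """Return the assurance level the safety barriers must meet for a
    standardised operational scenario, or raise if the scenario falls
    outside the standardised set and needs a bespoke assessment."""
    try:
        return _REQUIRED_LEVEL[(ground_risk_class, air_risk_class)]
    except KeyError:
        raise ValueError(
            "Scenario outside the standardised set; bespoke assessment needed"
        )


if __name__ == "__main__":
    # e.g. a mid-risk ground environment in low-risk airspace
    print(required_assurance(2, 1))  # AssuranceLevel.MEDIUM

The point of the sketch is only that a regulator can staff to a lookup of this kind, whereas a bespoke risk argument per applicant is effectively a research project each time.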

Other views or experience?


--
BJ Martin
Nova Systems
Safety and Certification Capability Lead

-----Original Message-----
From: Peter Bernard Ladkin <ladkin at causalis.com>
Sent: Wednesday, 2 May 2018 6:53 PM
To: Martin, BJ <BJ.Martin at novasystems.com>; The System Safety List <systemsafety at techfak.uni-bielefeld.de>
Subject: Re: [SystemSafety] Open Autonomous Safety concept = hope or idealism?



On 2018-05-02 08:42 , Martin, BJ wrote:
>
> ..... Disturbingly enough there are no leading views from the folk who
> are in the industry or regulatory spaces. ISO26262 is only part of the
> answer and it's not mandatory or universally used.

Industrial companies rarely want to be externally constrained, even though they know they will be.

I have the impression that regulatory structures are being improvised at the moment in the US. The UK has some well-defined test areas with specific and substantial constraints on testing, for example Milton Keynes. Germany is about to come up with testing principles and areas, as far as I understand.

ISO 26262 is de facto mandatory in Europe. If your vehicle had an accident involving (partial) automation and you couldn't show development in conformity with ISO 26262, then you'd really be in the soup in most countries of which I am aware. AFAIK all German vehicle manufacturers develop in conformity with ISO 26262.

You are right that it doesn't solve the big problems, because all the big problems involve DLNNs, and ISO 26262, like other safety-critical development standards, has nothing much to say about using DLNNs.

> I am sympathetic to the plight of the various jurisdictional
> authorities, and concerned that left to going it alone many unwise
> mis-steps will be taken along the way. They are being pressed all over
> the world by governments to 'put up or shut up' on regulating so that the potential societal and safety (reduced human error or risk exposure) benefits of these technologies can be accessed.

The German government convened an Ethics Commission, which reported in June 2017 https://www.bmvi.de/SharedDocs/DE/Publikationen/DG/bericht-der-ethik-kommission.pdf (in German).
They came up with 20 principles for autonomous and "networked" driving. It is a relatively short read, but unfortunately there is no translation into English. I didn't know about it until last week, when I attended the safe.tech conference put on by TÜV Süd in Munich, at which a plenary talk was given by a member, Auxiliary Bishop Anton Losinger of Augsburg.

It is worth looking at. Bishop Losinger did make one odd comment. He thinks the trolley problem has been "solved" in Germany by Kantian ethics and the German Basic Law. He claims that inaction is the solution: you may not intervene (unless it concerns saving yourself). I find it odd that he thinks that follows from Kantian ethics (I don't see how). But even if such a "solution" does follow from Kantian ethics, that equally brings Kantian ethics into question. It is also worth noting that the German Basic Law prohibits action, even damage/injury/death-minimising action, if thereby someone dies who would not otherwise have done so (I don't know at the moment whether this is self-applicable, but that would in any case be a moot point).

Whether or not one agrees with the principles, there is a set of them with solid-enough provenance and they can be expected to guide German law on autonomous and highly-automated road vehicle decision behaviour.

It was interesting to note at safe.tech that a number of speakers claimed that Germany was way ahead of the US on both analysing and regulating autonomous-vehicle behaviour. One may expect regulation to proceed via the principles of the Ethics Commission. Werner Damm talked on Traffic Sequence Charts, which he claimed are a formal way of describing traffic situations to be resolved. They are like highly-annotated (bloated?) Message Sequence Charts, where the annotations are differential equations. One can certainly believe they are a formal way of describing such situations (unfortunately, we didn't get to see many of the details), but the suggestion of resolving them seemed to be the usual logical pie-in-the-sky (logical consequence in many of the logics devised for computer science and engineering situations is not often a very practical property to determine).

PBL

Prof. Peter Bernard Ladkin, Bielefeld, Germany MoreInCommon Je suis Charlie
Tel+msg +49 (0)521 880 7319  www.rvs-bi.de







