[SystemSafety] Elephants, dinosaurs and integrating the VLA model

Les Chambers les at chambers.com.au
Thu Aug 3 13:48:01 CEST 2023


Peter
Your comment:
"Martyn already observed on 2023-06-27 that there are legal requirements which 
constrain deployment of safety-related systems. That legal requirement in the 
UK and Australia is 77 years old. Your question seems to be suggesting that you 
somehow think it, and other constraints, might no longer apply. Well, they do. 
As Martyn said "AI doesn't change that.
In the UK or Australia, developer and deployer must reduce risks ALARP.”


is righteous; that is, if decreed by a king ruling by fiat (Latin for “let 
it be done”).

Legal requirements are one thing, usually coming into play to the right of 
"bang"; keeping the public safe in the first place is another, more important 
issue.
The interesting question is, how does one PROVE (to an auditor or a judge) 
that one has reduced risks ALARP if one’s delivered system’s behaviour is 
initiated from a neural network, a dataset that cannot be thoroughly 
interpreted, verified or validated in process, and that changes after 
delivery? AI aficionados admit they don't understand why NNs can work so well 
or fail so unpredictably.
Witness: https://dawnproject.com/
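
To make that concrete, here is a toy sketch (my own illustration, nothing to 
do with any real ADS stack) of why point-wise testing of a learned function 
is a weak basis for an ALARP argument: a tiny randomly weighted network whose 
output class can flip under an input nudge far smaller than typical sensor 
noise. The issue is coverage of an input space riddled with such decision 
boundaries, not the outcome of any single test.

    import numpy as np

    # Toy two-layer network with fixed random weights (illustration only).
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(16, 4)), rng.normal(size=16)
    W2, b2 = rng.normal(size=(2, 16)), rng.normal(size=2)

    def classify(x):
        h = np.maximum(W1 @ x + b1, 0.0)      # ReLU hidden layer
        return int(np.argmax(W2 @ h + b2))    # class 0 or 1

    x = rng.normal(size=4)                    # an arbitrary "sensor reading"
    base = classify(x)
    for eps in np.linspace(0.0, 0.5, 501):
        if classify(x + eps) != base:         # uniform nudge on every input
            print(f"class flips from {base} under a nudge of {eps:.3f}")
            break
    else:
        print("no flip found along this one direction")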

Case study: Elaine Herzberg, killed by a self-driving Uber in Tempe, Arizona 
in 2018. The system did not classify her as a pedestrian because she was 
crossing outside a crosswalk; the neural net did not include consideration 
for jaywalking pedestrians.
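
The NTSB found that the system re-classified her repeatedly (vehicle, 
bicycle, other) and that each re-classification discarded the tracking 
history built up so far, so no crossing path was ever predicted. A 
hypothetical sketch of that design pitfall (the names and data structures 
are mine, not Uber's):

    from dataclasses import dataclass, field

    @dataclass
    class Track:
        positions: list = field(default_factory=list)  # observed (x, y) points
        label: str = "unknown"

    def update_resetting(track, position, label):
        # Pitfall: a change of class throws away the motion history.
        if label != track.label:
            track.positions = []
            track.label = label
        track.positions.append(position)

    def update_retaining(track, position, label):
        # Alternative: keep the history regardless of the current label.
        track.label = label
        track.positions.append(position)

    detections = [((0, 0), "vehicle"), ((1, 0), "other"),
                  ((2, 0), "bicycle"), ((3, 0), "other")]

    a, b = Track(), Track()
    for pos, lab in detections:
        update_resetting(a, pos, lab)
        update_retaining(b, pos, lab)

    print("history with resets:", a.positions)  # only the most recent point
    print("history retained:  ", b.positions)   # the full path across the road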

These systems are famous for not knowing what they don't know and imposing 
their ignorance on the real world. Hannah Arendt was prescient: “It’s not so 
much that our models are false, but that they might become true”

Imagine yourself as an expert witness supporting Tesla in a similar situation. 
What section, subsection or footnote of IEC 61508 or ISO 26262 - or other 
standard - would you cite to prove Elon had applied best practice in his 
development life cycle?

Or, if you cannot pony up, would you agree that these standards are no longer 
fit for purpose in regulating the development of AI-integrated 
safety-critical systems?

And furthermore, please explain the purpose of these standards, if they cannot 
be instrumental in stopping the murder for money currently occurring on US 
roads? 

Les

PS: I note that Tesla’s full self-driving (FSD) feature is available in the UK 
as well as the US. It is not available in Australia or Germany. 

---------------------------
> On 2023-08-03 02:32 , Les Chambers wrote:
> > 
> > Can anyone on this list refer me to where in the standards one can obtain
> > guidance on how to engineer such a system safely?
> 
> That seems to be a question with a completely obvious answer.
> 
> Martyn already observed on 2023-06-27 that there are legal requirements
> which constrain deployment of safety-related systems. That legal
> requirement in the UK and Australia is 77 years old. Your question seems
> to be suggesting that you somehow think it, and other constraints, might
> no longer apply. Well, they do. As Martyn said "AI doesn't change that."
> 
> In the UK or Australia, developer and deployer must reduce risks ALARP.
> 
> How do you go about engineering any system such that risks are reduced
> ALARP, say in the UK? You follow sector-specific functional safety
> standards if there are some, as well as the engineering functional safety
> standard for E/E/PE systems, which is IEC 61508. This approach is regarded
> by the regulator, at least in the UK, as appropriate to fulfill the ALARP
> requirement (although of course the courts are the final arbiters of
> that).
> 
> PBL
> 
> Prof. i.R. Dr. Peter Bernard Ladkin, Bielefeld, Germany
> Tel+msg +49 (0)521 880 7319  www.rvs-bi.de



--
Les Chambers
les at chambers.com.au

https://www.chambers.com.au
https://www.systemsengineeringblog.com

+61 (0)412 648 992



