[SystemSafety] Elephants, dinosaurs and integrating the VLA model

Steve Tockey steve.tockey at construx.com
Fri Aug 4 07:37:15 CEST 2023


Les,
I am not at all sure where you are going with this line of reasoning. “The Problem of Dirty Hands” is, as far as I can tell, an issue of, “do the ends justify the means?”. Can it be better to intentionally incur a small amount of immorality in one place in exchange for a larger amount of benefit somewhere else? Of course the answer is, “It depends”. It depends on the consequences of the “small amount” of immorality (who is harmed? How much?) compared with the size of the larger benefit (who benefits? How much?), and over what time frame.

But all of that has absolutely nothing at all to do with—as far as I can tell—the Elaine Herzberg case. Her case was not an intentional moral trade-off of a small amount of immorality here for a larger benefit there. It was nothing less than sheer, utter stupidity on the part of that system’s developers. That “trade” never needed to happen if those developers were at all competent. In what universe is it considered competent engineering to:

1) Not have a default catch-all category for “some object of an unknown type that is in a position to collide with me and thus cause harm”? How do they categorize wildlife on a roadway? How do they categorize furniture that fell off a lorry one or two lanes over? How do they categorize bricks dropped onto cars from roadway overpasses (which seems to be a current epidemic in Seattle)? They cannot possibly have specific, concrete categories for every conceivable object they might encounter that could cause a serious accident.

2) Quoting the article, “if the perception system changes the classification of a detected object, the tracking history of that object is no longer considered when generating new trajectories”, followed by, “What this meant in practice was that, because the system couldn't tell what kind of object Herzberg and her bike were, the system acted as though she wasn't moving.” Both of these failure modes, and what a more defensible design would look like, are sketched below.
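To make both points concrete, here is a minimal, purely illustrative sketch in Python. The names (ObjectClass, Track, is_collision_hazard, planned_path) are hypothetical and have nothing to do with Uber’s actual code; the sketch only shows the two properties I am arguing for: a catch-all class for unclassifiable objects, and a tracker that keeps motion history across reclassification.

from dataclasses import dataclass, field
from enum import Enum, auto

class ObjectClass(Enum):
    VEHICLE = auto()
    PEDESTRIAN = auto()
    CYCLIST = auto()
    UNKNOWN_OBSTACLE = auto()   # catch-all: anything the classifier cannot name

@dataclass
class Track:
    # Full history of observed (x, y) positions for one tracked object.
    positions: list = field(default_factory=list)
    object_class: ObjectClass = ObjectClass.UNKNOWN_OBSTACLE

    def update(self, position, new_class):
        # Reclassification must never erase the motion history: the evidence
        # that the object is moving survives whatever we decide to call it.
        self.positions.append(position)
        self.object_class = new_class

    def estimated_velocity(self):
        # Crude velocity estimate from the last two observations.
        if len(self.positions) < 2:
            return (0.0, 0.0)
        (x0, y0), (x1, y1) = self.positions[-2], self.positions[-1]
        return (x1 - x0, y1 - y0)

def is_collision_hazard(track, planned_path):
    # planned_path: a set of (x, y) cells the vehicle intends to occupy
    # (a deliberately crude, hypothetical representation). Any tracked object,
    # UNKNOWN_OBSTACLE included, is a hazard if its extrapolated next position
    # lies on that path. Not knowing what an object is should be a reason for
    # caution, never a reason to ignore it.
    if not track.positions:
        return False
    vx, vy = track.estimated_velocity()
    x, y = track.positions[-1]
    return (x + vx, y + vy) in planned_path

The geometry is irrelevant; the two invariants are the point. A default category means the planner is never permitted to have no opinion about an object in its path, and a persistent track history means an object that cannot be identified is still predicted to keep moving.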

The root cause of Elaine Herzberg’s death is nothing less than massive, blatant, sheer incompetence on the part of one or more of those developers. It never needed to happen in the first place. There was never any trade-off to be made here.


— steve





On Aug 4, 2023, at 11:19 AM, Les Chambers <les at chambers.com.au> wrote:

Steve
I see. Have we a technological variant of "The Problem of Dirty Hands" here?
https://plato.stanford.edu/entries/dirty-hands/
Indeed we may have a 21st-century channelling of Anthony Trollope's novel, The
Way We Live Now ... the praiseworthy deeds of the powerful escape the normal
categories of morality.

Lady Carbury:  "If a thing can be made great and beneficent, a boon to 
humanity, simply by creating a belief in it, does not a man become a benefactor 
to his race by creating that belief?"
"At the expense of veracity?" suggested Mr. Booker.
"At the expense of anything?" rejoined Lady Carbury with energy. "One cannot 
measure such men by the ordinary rule."
"You would do evil to produce good?" asked Mr. Booker.
"I do not call it doing evil..You tell me this man may perhaps ruin hundreds, 
but then again he may create a new world in which millions will be rich and 
happy."
"You are an excellent casuist, Lady Carbury."
"I am an enthusiastic lover of beneficent audacity," said Lady Carbury.

Les

> Les wrote:
> 
> “Case study: Elaine Herzberg, killed by a self-driving Uber in Tempe, Arizona in
> 2018. The system did not classify her as a pedestrian because she was crossing
> without a crosswalk; the neural net did not include consideration for
> jaywalking pedestrians.”
> 
> The whole story is a bit more complicated than that. Here is a good summary of
> the NTSB report:
> 
> How terrible software design decisions led to Uber’s deadly 2018 crash
> <https://arstechnica.com/cars/2019/11/how-terrible-software-design-decisions-led-to-ubers-deadly-2018-crash/>
> 
> “Imagine yourself as an expert witness supporting Tesla in a similar situation.
> What section, subsection or footnote of IEC 61508 or ISO 26262 - or other
> standard - would you cite to prove Elon had applied best practice in his
> development life cycle?”
> 
> I find it hard to believe (nah, impossible actually) that I would ever agree to
> be an expert witness on the side of someone like Tesla. Rather, I think being an
> expert witness on the other side would be far easier. It would probably be
> almost trivial to prove Elon had NOT applied anything remotely similar to best
> practice in his development life cycle.
> 
> — steve
> 
> On Aug 3, 2023, at 7:48 PM, Les Chambers <les at chambers.com.au> wrote:
> 
> Peter
> Your comment:
> "Martyn already observed on 2023-06-27 that there are legal requirements which
> constrain deployment of safety-related systems. That legal requirement in the
> UK and Australia is 77 years old. Your question seems to be suggesting that you
> somehow think it, and other constraints, might no longer apply. Well, they do.
> As Martyn said "AI doesn't change that."
> In the UK or Australia, developer and deployer must reduce risks ALARP."
> 
> ... is righteous ... that is, if decreed by a king ruling by fiat (Latin for "let
> it be done").
> 
> Legal requirements are one thing, usually coming into play to the right of
> "bang"; keeping the public safe in the first place is another more important
> issue.
> The interesting question is, how does one PROVE (to an auditor or a judge) that
> one has reduced risks ALARP if one's delivered system's behaviour is initiated
> from a neural network? A dataset that cannot be interpreted, verified or
> validated thoroughly in process, and that changes after delivery. AI
> aficionados admit they don't understand why NNs can work so well or fail so
> unpredictably.
> Witness: https://dawnproject.com/
> 
> Case study: Elaine Herzberg, killed by a self-driving Uber in Tempe, Arizona in
> 2018. The system did not classify her as a pedestrian because she was crossing
> without a crosswalk; the neural net did not include consideration for
> jaywalking pedestrians.
> 
> These systems are famous for not knowing what they don't know and imposing
> their ignorance on the real world. Hannah Arendt was prescient: "It's not so
> much that our models are false, but that they might become true"
> 
> Imagine yourself as an expert witness supporting Tesla in a similar situation.
> What section, subsection or footnote of IEC 61508 or ISO 26262 - or other
> standard - would you cite to prove Elon had applied best practice in his
> development life cycle?
> 
> Or, if you cannot pony up, would you agree that these standards are no longer
> fit for purpose in regulating the development of AI-integrated Safety-Critical
> systems?
> 
> And furthermore, please explain the purpose of these standards, if they cannot
> be instrumental in stopping the murder for money currently occurring on US
> roads?
> 
> Les
> 
> PS: I note that Tesla's full self-driving (FSD) feature is available in the UK
> as well as the US. It is not available in Australia or Germany.
> 
> ---------------------------
> On 2023-08-03 02:32 , Les Chambers wrote:
> 
> Can anyone on this list refer me to where in the standards one can obtain
> guidance on how to engineer such a system safely?
> 
> That seems to be a question with a completely obvious answer.
> 
> Martyn already observed on 2023-06-27 that there are legal requirements which
> constrain deployment
> of safety-related systems. That legal requirement in the UK and Australia is
> 77 years old. Your
> question seems to be suggesting that you somehow think it, and other
> constraints, might no longer
> apply. Well, they do. As Martyn said "AI doesn't change that."
> 
> In the UK or Australia, developer and deployer must reduce risks ALARP.
> 
> How do you go about engineering any system such that risks are reduced ALARP,
> say in the UK? You
> follow sector-specific functional safety standards if there are some, as well
> as the engineering
> functional safety standard for E/E/PE systems, which is IEC 61508. This
> approach is regarded by the
> regulator, at least in the UK, as appropriate to fulfill the ALARP
> requirement (although of course
> the courts are the final arbiters of that).
> 
> PBL
> 
> Prof. i.R. Dr. Peter Bernard Ladkin, Bielefeld, Germany
> Tel+msg +49 (0)521 880 7319  www.rvs-bi.de
> 
> --
> Les Chambers
> les at chambers.com.au
> 
> https://www.chambers.com.au
> https://www.systemsengineeringblog.com
> 
> +61 (0)412 648 992
> 



--
Les Chambers
les at chambers.com.au

https://www.chambers.com.au
https://www.systemsengineeringblog.com

+61 (0)412 648 992




