[SystemSafety] Autonomously Driven Car Kills Pedestrian

Smith, Brian E. (ARC-TH) brian.e.smith at nasa.gov
Sat Mar 24 19:10:28 CET 2018


Slightly, but not entirely, off topic.  See the discussion below vis-à-vis driverless technologies…

Here’s a technique I’m beginning to use more and more frequently as an observational tool when sitting in on or actively participating in meetings here at NASA, at church, and on this forum.  I thought you might find it interesting.  It consists of four elements:

  1.  Facts & data – "raw knowledge" about which most people agree.
  2.  Logic – how facts are linked together to create a narrative or describe a testable reality.
  3.  Loyalties – the constraints a person or group is operating under (due to “loyalty” to an organization or individual) that can cause people to cherry-pick facts and certain logical pathways.
  4.  Worldviews – life philosophies that can also cause selectivity in the use of facts and logic.  Not infrequently, one’s worldview can be at odds with one’s loyalties for very good reasons; on the other hand, sometimes one’s worldview can reinforce loyalties.  There are a couple of possible poles within this element: “Neats” consider that solutions should be elegant, clear, and provably correct.  “Scruffies” believe that intelligence is too complicated (or, as with AI, computationally intractable) to be solved with the homogeneous systems and first-principles equations that Neats usually can’t live without.  Observers point out that, because of the deep philosophical differences between neats and scruffies, neats may view scruffies’ methods as haphazard or insufficiently rigorous, whereas scruffies may see neats’ methods as restrictive, limiting exploration of the goals in question.

I think some “techno-evangelists” for things like driverless cars may show aspects of both tendencies, perhaps leaning more toward the scruffy side that wants to explore innovative technologies without the constraints of the safety cases such systems need.

Neats want elegant, clear, and perhaps low-risk solutions to problems.  Scruffies, on the other hand, may embrace “fuzzier,” more diverse, or more ambiguous methods that nevertheless deliver results.  Neats vs. scruffies has also been described as “logical versus analogical” and “symbolic versus connectionist.”

I’m also reminded of one of the interview questions from Les Chambers: “The fox knows many little things, but the hedgehog knows one big thing.” – Archilochus (Greek poet, c. 680–645 BCE).  Which do you think is the best approach: specialisation (the “neats” hedgehog) or having a broad knowledge of your application domain (the “scruffy” fox)?  Maybe not a precise parallel…

Brian

From: systemsafety <systemsafety-bounces at lists.techfak.uni-bielefeld.de> on behalf of Ug Free <hugues.bonnin at free.fr>
Date: Saturday, March 24, 2018 at 12:22 AM
To: Matthew Squair <mattsquair at gmail.com>
Cc: "systemsafety at lists.techfak.uni-bielefeld.de" <systemsafety at lists.techfak.uni-bielefeld.de>
Subject: Re: [SystemSafety] Autonomously Driven Car Kills Pedestrian

I think you underestimate the talent of the lawyers that a company like Uber can afford to retain...

Regards,

Hugues

On 24 Mar 2018, at 01:03, Matthew Squair <mattsquair at gmail.com> wrote:

I think Uber will come unglued in civil court.

If, say, the court deems the driver not to have been in direct control but merely ‘supervising’, then Uber is still liable for devising a method of supervising an unsafe device that demonstrably doesn’t work, and it could be argued that they could reasonably have known this in the circumstances*. If the argument turns out to be that the driver is solely the culpable agent, then, as he’s also an Uber employee/contractor, they’re still responsible for his actions. So whichever way it turns, Uber will carry the can, at least in a civil action, which is where I’d guess this will get thrashed out.

‘Move fast and break things’ indeed…

*As the conversation on this thread would indicate.


On 24 March 2018 at 4:16:49 am, Peter Bernard Ladkin (ladkin at causalis.com) wrote:


On 2018-03-23 17:40 , Michael Jackson wrote:
>
> So the responsibility in overseeing autonomous driving is worse than that of an old-fashioned
> driving instructor in a dual-control car, teaching an untrusted learner—you can’t even order
> the software to slow down: in short, it is far more demanding and stressful than driving the
> car yourself.

Spot on, as usual.

Woods and Sarter, in their seminal study of pilots using A320 automation, found it was worse than
that. When the situation got odd, rather than cutting out the automation and taking control ("first,
fly the airplane"), they found the crew inclined to try to debug the automation.

PBL

Prof. Peter Bernard Ladkin, Bielefeld, Germany
MoreInCommon
Je suis Charlie
Tel+msg +49 (0)521 880 7319 www.rvs-bi.de





_______________________________________________
The System Safety Mailing List
systemsafety at TechFak.Uni-Bielefeld.DE

