[SystemSafety] a public beta phase ???

paul_e.bennett at topmail.co.uk
Sun Jul 17 15:11:51 CEST 2016


On 17/07/2016 at 1:51 PM, "Peter Bernard Ladkin" <ladkin at rvs.uni-bielefeld.de> wrote:
>
>Michael,
>
>On 2016-07-17 12:47 , Michael Jackson wrote:
>> Tesla Autopilot users have chosen to use the beta phase system, 
>but other road users whom they 
>> encounter have not. Would the strictly statistical approach 
>assumed by John Naughton have felt equally 
>> convincing if the Tesla had collided with a pedestrian instead 
>of with a truck and the pedestrian 
>> had died instead of the Tesla driver? 

[%X]

>Is that alone a way to proceed? Not by itself, for it specifies 
>nothing about the specific duties of
>care of the manufacturer in introducing the AP to the market in 
>the first place. For example, Tesla
>might say "keep your hands on the wheel". That seems sensible 
>advice. But how does Tesla or a
>regulatory authority decide when it may be appropriate to drop 
>that condition? Suppose that, after a
>few years, Tesla drivers are driving around on AP and, despite the 
>"requirement", people almost
>never have their hands in position for instant takeover. Do you 
>say "OK, people aren't doing this,
>so we have to strengthen the requirements on the kit", or do you 
>say "OK, people aren't doing this,
>so we need more traffic cops on the road writing them tickets for 
>not doing so". (A practical
>decision maker would likely opt for a mixture of both.)
>
>Also, it would be tempting for car companies but morally 
>questionable to construct procedural
>requirements in such a way that a driver is almost always in 
>violation of one or the other of them
>if an accident occurs. The "fine print", if you like. For this 
>could be a sophisticated way of
>blaming the driver for everything. Someone has to decide what 
>reasonable supervisory activity should
>be required, and what level of assurance needs to be provided that 
>drivers are capable of exercising
>and actually exercise that level of supervisory control. One may 
>anticipate something like a
>standard, but standards in this area are currently dominated by 
>the car companies, who are one party
>to this issue, so how would we ensure an appropriate moral balance 
>in such a standard?

[%X]

One ought to bear in mind what the ultimate target is in developing these
autonomous vehicle technologies. It is early days yet, and it looks as if
companies with very deep pockets will remain at the leading edge for a
while to come, not only because they can fund the development work but
also because they can ride out any problems that arise during the
roll-out of such technology.

It is right that any such incidents are fully investigated to establish the
correct causality, so that lessons are learned (rather than the reflexive
confirmation of human driver error that has plagued some early air accident
investigations). Then we may draw conclusions about the machine perception
values that have been programmed in, and whether they need adjustment.

Ultimately, I expect the aim is that the entire automotive vehicle pool
becomes autonomous, with goods and people carried by vehicles that no
longer have steering wheels for a driver to interact with. A sort of
"Johnny Cab" future, perhaps? Of course, to get there we need very good
systems engineers who can construct highly trusted systems.

Regards

Paul E. Bennett IEng MIET
Systems Engineer

-- 
********************************************************************
Paul E. Bennett IEng MIET.....<email://Paul_E.Bennett@topmail.co.uk>
Forth based HIDECS Consultancy.............<http://www.hidecs.co.uk>
Mob: +44 (0)7811-639972
Tel: +44 (0)1392-426688
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************
