[SystemSafety] a public beta phase ???

Peter Bernard Ladkin ladkin at rvs.uni-bielefeld.de
Sun Jul 17 14:51:30 CEST 2016


Michael,

On 2016-07-17 12:47, Michael Jackson wrote:
> Tesla Autopilot users have chosen to use the beta phase system, but other road users whom they 
> encounter have not. Would the strictly statistical approach assumed by John Naughton have felt equally 
> convincing if the Tesla had collided with a pedestrian instead of with a truck and the pedestrian 
> had died instead of the Tesla driver? 

I don't read Naughton as taking a "strictly statistical" approach; indeed, I think he is suggesting
that a strictly statistical approach is impractical (pointing to the RAND study). I read him as
expressing an almost-classical utilitarian view.

Also, I didn't suggest his view was right; I just thought it was sensible. He raises some practical
issues involved in deciding whether a specific new technology with safety-critical aspects should be
deployed in public space.

The specific accident needs to be causally investigated. Suppose a failure of the AP to recognise a
key part of the environment is identified as a causal factor, as well as the failure of the human
driver to recognise the situation and take avoiding action. There might be some discussion of the
known weaknesses of human supervisory control of automation. And we investigate the next accident,
and the next one, and so on. And we presume, or legislate, that the technology will be incrementally
adapted to the "lessons learned" from these analyses. That will happen, because the alternative is
that accidents are not investigated and lessons are not learned, and that is unacceptable.

Is that alone a way to proceed? Not by itself, for it specifies nothing about the specific duties of
care of the manufacturer in introducing the AP to the market in the first place. For example, Tesla
might say "keep your hands on the wheel". That seems sensible advice. But how does Tesla or a
regulatory authority decide when it may be appropriate to drop that condition? Suppose that, after a
few years, Tesla drivers are driving around on AP and, despite the "requirement", people almost
never have their hands in position for instant takeover. Do you say "OK, people aren't doing this,
so we have to strengthen the requirements on the kit", or do you say "OK, people aren't doing this,
so we need more traffic cops on the road writing them tickets for not doing so"? (A practical
decision maker would likely opt for a mixture of both.)

Also, it would be tempting for car companies, but morally questionable, to construct procedural
requirements in such a way that a driver is almost always in violation of one or another of them
if an accident occurs. The "fine print", if you like. For this could be a sophisticated way of
blaming the driver for everything. Someone has to decide what reasonable supervisory activity should
be required, and what level of assurance needs to be provided that drivers are capable of exercising
and actually exercise that level of supervisory control. One may anticipate something like a
standard, but standards in this area are currently dominated by the car companies, who are one party
to this issue, so how would we ensure an appropriate moral balance in such a standard?

One of the sensible issues Naughton introduces is the trade-off between assimilating enough
experience to be sure of something and, if it is eventually found to be beneficial, the suffering
that could have been avoided had a decision been made earlier. Medical researchers break off
controlled trials when benefits appear to be large before the trial is concluded, and give the
medicine or apply the procedure to everyone who qualifies. A similar situation is almost bound to
arise here, and I think it laudable that he pointed it out. As a reasoning problem, it's quite
subtle. Assurance is based upon a risk analysis. Assurance is: being reasonably confident that a
sufficiently low probability of high-severity consequences pertains. To attain that assurance, you
need to accumulate experience, and there are high-severity consequences of waiting until that
experience is accumulated. So there is, from the utilitarian point of view, a second-level risk
analysis to be performed: namely, of the criteria you choose for your assurance conditions. Could
you lower the confidence threshold on the low probability given in the assurance condition, in
order to avoid the high-severity consequences of waiting until you have accumulated the experience
required for a higher confidence?
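To make the trade-off concrete, here is a minimal sketch, not from the discussion above, using the
standard zero-failure bound that underlies RAND-style mileage arguments: if fatal accidents are
modelled as a Poisson process in mileage, then n failure-free miles support the claim that the
fatality rate is below lambda0 with confidence 1 - alpha whenever n >= ln(1/alpha)/lambda0. The
target rate in the sketch is a hypothetical round number, not a figure from any manufacturer.

    # Hedged sketch: assumes a Poisson accident model; lambda0 is hypothetical.
    from math import log

    def miles_required(lambda0, alpha):
        # Smallest n such that P(zero events in n miles | rate lambda0) <= alpha,
        # i.e. exp(-lambda0 * n) <= alpha, giving n >= ln(1/alpha) / lambda0.
        return log(1.0 / alpha) / lambda0

    lambda0 = 1e-8  # hypothetical target: at most 1 fatality per 100 million miles
    for conf in (0.95, 0.80, 0.50):
        n = miles_required(lambda0, 1.0 - conf)
        print("%.0f%% confidence -> %.0f million failure-free miles"
              % (100 * conf, n / 1e6))

This prints roughly 300, 161 and 69 million miles respectively. Relaxing the confidence from 95% to
80% roughly halves the failure-free mileage one must wait for (ln 20 ≈ 3.0 versus ln 5 ≈ 1.6), which
is exactly the second-level question: does the suffering avoided by deploying earlier justify the
weaker assurance?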

These issues all arise whatever the characteristics and severity of any one accident may be. They are
also different from the questions arising from trolleyology considerations. I think it is a good
thing simply to enumerate issues arising, without necessarily solving them.

PBL

Prof. Peter Bernard Ladkin, Bielefeld, Germany
MoreInCommon
Je suis Charlie
Tel+msg +49 (0)521 880 7319  www.rvs-bi.de




