[SystemSafety] Validating AI-driven systems

Martyn Thomas martyn at thomas-associates.co.uk
Tue Apr 18 16:54:41 CEST 2017


On 18/04/2017 08:02, Peter Bernard Ladkin wrote:

> Knight's article from the MIT Tech Review concerns a third aspect, namely when the networks are
> trained/have trained themselves on masses of data ("deep learning"), how to explain
> decisions/predictions the network then makes.

It may not be necessary to understand "why" the decision was made. It
may be enough to come up with an acceptable story (true or false) after
the event. There is some evidence that this is what the human brain does
to explain an action that occurred a measurable time before there was
any conscious decision to carry out that action.

It's hard to be certain that a human is telling the truth. Why assume we
need certainty about an explanation given by a program?
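
For concreteness, here is a minimal sketch (my own illustration in Python, not
anything from Knight's article) of how such an after-the-event story could be
produced mechanically: perturb the input, query the black box, and fit a simple
local linear surrogate whose weights read as an "explanation". The black_box
function and all numbers are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def black_box(x):
    # Stand-in for an opaque trained network: returns a risk score in [0, 1].
    return 1 / (1 + np.exp(-(2.0 * x[..., 0] - 1.5 * x[..., 2] + 0.1)))

x0 = np.array([0.8, 0.3, 0.6, 0.1])          # the decision we want a story for
samples = x0 + 0.05 * rng.standard_normal((500, x0.size))  # local perturbations
scores = black_box(samples)

# Least-squares fit of a local linear surrogate: scores ~ samples @ w + b.
A = np.hstack([samples, np.ones((samples.shape[0], 1))])
coeffs, *_ = np.linalg.lstsq(A, scores, rcond=None)

for i, w in enumerate(coeffs[:-1]):
    print(f"feature {i}: local weight {w:+.3f}")

The weights are a plausible story about the decision, whether or not they
reflect what the network "really" did - which is rather the point.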

>
> If you are talking about illness prediction, a large component of which has been known for a long
> time to have a statistical character, then you do need to conjure up explanations, for you can't put
> people on prophylactic therapy, which might itself be life-altering, without having a good idea why.
> "The machine said to do it" is not a good reason.

"In my judgement, it was the action that had the highest probability of
a good outcome" seems a good enough answer from doctors, so why not from
a program?

>
> However, I am not sure I see the issue with providing explanation as a major problem with
> traditional safety-critical applications. If the self-driving car does something funny, then that
> will be logged as an incident and the questions are those of liability and rectification. The car
> manufacturer may have the problem of trying to discover why its network algorithms provoked the
> anomalous behaviour, but it is not clear anyone else involved has that issue, certainly not the
> licensing authorities, who can just withdraw a licence to operate.

Withdraw the licence from this instance of the software in its current
state, or from every instance of the same or related software?

I doubt that it would be politically acceptable to take a fleet of
vehicles off the road permanently because of a single accident (or even
after a few dozen, if the incidence per million miles is lower than for
a human driver).
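
To make the per-million-miles point concrete, a back-of-the-envelope
comparison might look like this (all figures invented purely for
illustration):

fleet_incidents, fleet_miles = 24, 400_000_000      # hypothetical fleet record
human_incidents, human_miles = 6_000_000, 3.2e12    # hypothetical human baseline

fleet_rate = fleet_incidents / (fleet_miles / 1e6)
human_rate = human_incidents / (human_miles / 1e6)

print(f"fleet:  {fleet_rate:.3f} incidents per million miles")
print(f"humans: {human_rate:.3f} incidents per million miles")
print("fleet safer than the average human driver" if fleet_rate < human_rate
      else "fleet not demonstrably safer")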

>
> If an adaptive control system does something funny, the question of how to rectify is non-trivial
> (you can't just reboot it, for then you lose the adaptive learning to that point, and that might
> land you in even deeper water). But, again, nobody except the manufacturer needs to know why it did
> what it did. The rectification task might be to recognise the parameters which led to the anomaly,
> and avoid those parameter values in the future.

Or just ignore it and carry on - which is exactly what you and I do when
we notice that we have made a mistake.

If you were to have a potentially fatal (but actually not serious) bike
accident that was your fault, would you "withdraw your licence to
operate"? Re-booting you isn't an option (outside some minor religious
sects).

Martyn




