[SystemSafety] Validating AI driven systems

David Crocker dcrocker at eschertech.com
Tue Apr 18 08:53:40 CEST 2017


The problem I see with the system monitor approach is the difficulty of providing the monitor with sensory inputs independent enough for it to do a good job. For example, we might want the system monitor to shut down the vehicle if it attempts to drive through a red light or onto the pavement. But the software may be using neural nets or similar opaque technologies to recognise red lights and identify the boundaries of the road, so an independent monitor would need its own, equally capable means of perceiving them.
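
Concretely, a monitor of that kind would need something like the following. This is a rough Python sketch; every name in it is hypothetical, and the catch is precisely that light_is_red() and on_pavement() are themselves hard perception problems:

# Sketch of a safety monitor with its own sensor channels, separate
# from the neural-net stack it polices. All names are illustrative.
class IndependentMonitor:
    def __init__(self, red_light_sensor, lane_sensor, vehicle):
        self.red_light_sensor = red_light_sensor  # monitor's own feed
        self.lane_sensor = lane_sensor            # monitor's own feed
        self.vehicle = vehicle

    def check(self):
        # Hard rules: any violation forces an emergency stop.
        if self.red_light_sensor.light_is_red() and self.vehicle.is_moving():
            self.vehicle.emergency_stop("red light violation")
        elif self.lane_sensor.on_pavement():
            self.vehicle.emergency_stop("left the carriageway")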

On 18 April 2017 00:21:11 BST, Les Chambers <les at chambers.com.au> wrote:
>The advent of AI-driven safety-critical systems is about to render
>obsolete all the engineering processes we currently apply to
>validating trusted systems.
>
>The problem is well covered in Will Knight's insightful article "The
>Dark Secret at the Heart of AI" in MIT Technology Review:
>
>https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/
>
>Knight's problem statement is as follows:
>
>"... it isn't completely clear how the car makes its decisions.
>Information from the vehicle's sensors goes straight into a huge
>network of artificial neurons that process the data and then deliver
>the commands required to operate the steering wheel, the brakes, and
>other systems. The result seems to match the responses you'd expect
>from a human driver. But what if one day it did something unexpected -
>crashed into a tree, or sat at a green light? As things stand now, it
>might be difficult to find out why. The system is so complicated that
>even the engineers who designed it may struggle to isolate the reason
>for any single action. And you can't ask it: there is no obvious way
>to design such a system so that it could always explain why it did
>what it did."
>
>So here we have a new "ility": explainability ... validate that!
>
>Up to now, the core principle of validation (and a prime indicator of
>integrity) has been predictability: a system's observable behaviour
>and non-functional properties must comply with its specification. But
>what if the specification says, "Think for yourself"?
>
>Researchers are working on it. Knight describes the murky world of AI
>thinking discovered by AI experts who attempt to probe a neural net's
>thought processes. Their discoveries are flat-out creepy and of no
>material assistance to a V&V engineer tasked with validating a system
>in the here and now, which leads me to believe that not much
>validation is actually going on in the AI application domain. We get
>glimpses of this every now and then. For example, a system that
>processed parole applications was found to be discriminating against
>black men. It turned out that its "deep learning" had come from a
>community of racists.
>
>The only solution I can offer is 85 years old, from Lord Moulton, an
>English judge. His view was that human action can be divided into
>three domains: at one end is the law, at the other is free choice,
>and between them is the realm of manners. Between the two, Lord
>Moulton said, "lies a domain in which our actions are not determined
>by law but in which we are not free to behave in any way we choose.
>In this domain we act with greater or lesser freedom from constraint,
>on a continuum that extends from a consciousness of duty through a
>sense of what is required by public spirit, to good form appropriate
>in a given situation."
>
>Projecting this onto systems development, we have the monitor program
>that enforces a set of simple rules on system behaviour. In chemical
>processing we called it "abort programming": when a production plant
>got into an unsafe state, the abort program would shut it down. The
>abort program was written by an engineer highly experienced in the
>process technology, and the abort programming team was kept totally
>separate from the control system development team. All of this
>programming was done in the realm of the law, the law being, "Thou
>shalt not hurt anyone or destroy equipment."
>
>System monitors can be validated. All we have to decide is where to
>draw the red (shut-down) lines: in the realm of manners, or in the
>realm of the law. At least this is my humble opinion. Wiser heads may
>have other ideas.
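>
>One way to picture the choice, as a sketch only (the events and their
>assignment to realms are my own illustration):
>
>LAW, MANNERS = "law", "manners"
>
>RULES = {
>    "ran_red_light":      LAW,      # violation forces shutdown
>    "left_road_boundary": LAW,
>    "harsh_braking":      MANNERS,  # violation only raises a warning
>    "lane_wander":        MANNERS,
>}
>
>def monitor_step(events, shutdown, warn):
>    for event in events:
>        realm = RULES.get(event)
>        if realm == LAW:
>            shutdown(event)
>        elif realm == MANNERS:
>            warn(event)
>
>Drawing a line in the realm of the law puts it in the LAW tier;
>drawing it in the realm of manners means we tolerate, log and warn.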
>
>Thoughts anyone?
>
>Les
>
>-------------------------------------------------
>Les Chambers
>Director
>Chambers & Associates Pty Ltd
>www.chambers.com.au
>Blog: www.systemsengineeringblog.com
>Twitter: @ChambersLes
>M: 0412 648 992
>Intl M: +61 412 648 992
>Ph: +61 7 3870 4199
>Fax: +61 7 3870 4220
>les at chambers.com.au
>-------------------------------------------------

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.