[SystemSafety] Validating AI driven systems

Peter Bernard Ladkin ladkin at causalis.com
Fri Apr 21 07:22:56 CEST 2017



On 2017-04-20 14:18 , Robert P. Schaefer wrote:
>  A more general question. What is done, in the past, when a new technology exceeds the theory that explains it?
>  Can a parallel be made to exploding boilers in steam engines, cold-fusion, or superconduction?

I think first that it is helpful to make the following distinctions:

* 1. Primarily technological developments, such as steam engines, railways, motorised road vehicles,
telegraph, electric power supplies, gas power supplies, aircraft, and semiconductors;

* 2. Novel phenomena which may turn into a technology, such as superconduction, quantum computation,
neural-network computing, and gene editing;

* 3. Novel and putative phenomena which do not progress into a technology, such as cold fusion.


With regard to 1, I would say there are three stages.
* 1.1 At the beginning, (a) individual incidents are (b) resolved through local negotiation
(usually, if a victim is socially or politically unimportant, the incident is regretted but ignored;
changes may be made if a victim is more important); then
* 1.2 (a) accumulations of incidents lead to (b) a political concern of some sort as well as market
consequences for the companies offering the technology; then
* 1.3 there is an evolution of the various political and social forces and the resolutions.

With regard to 2, ethicists and technologists write articles about possible future developments, so
that by the time an application with safety-related consequences actually appears, the debate is
already in progress. Stage 1.1.a occurs (consider the video of the self-parking Volvo hitting an
observer during a demo, or the fender-benders in Silicon Valley with self-driving cars), along with
1.1.b (has anybody actually read about how those incidents were resolved?), but matters move pretty
quickly to stage 1.2.b (consider the DMV's reaction to videos of Uber self-driving cars running red
lights in SF) and, one presumes, to 1.3 thereafter.

With regard to 3, if there are no safety-related consequences then the phenomena may remain of
interest to academics and hobbyists, but by hypothesis they do not reach stage 1.1, let alone 1.2.

PBL

Prof. Peter Bernard Ladkin, Bielefeld, Germany
MoreInCommon
Je suis Charlie
Tel+msg +49 (0)521 880 7319  www.rvs-bi.de




