[SystemSafety] AI and safety

Dariusz Walter dariusz at walterki.com
Mon Nov 12 02:22:49 CET 2018


Based on the brief descriptions in the database, all the AI solutions seem
to play within the rules that were explicitly defined. None of them appears
to have "failed" any of those rules, so in that sense they are reliable. In
fact, on reviewing the issues in the database, I find the AI solutions
ingenious.



If anything, in my mind they identify the holes, gaps, and assumptions
present in the explicit set of rules defined for the task, including

1. the specification and behaviour of the environments that these AI
systems are grown to work in, and

2. the completeness and correctness of the rules/cost functions that these
AI systems are supposed to satisfy.
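
To make those two gap types concrete, here is a toy sketch in Python (the
environment model, the cost function, and the numbers are all hypothetical,
invented for illustration; none of this comes from the database):

    def simulated_speed(throttle):
        # Gap type 1: the environment model forgets to cap speed, so an
        # optimiser can reach velocities the real robot never could.
        return throttle * 10.0  # no physical limit modelled

    def cost(time_taken_s, energy_j):
        # Gap type 2: the cost function rewards speed and efficiency but
        # says nothing about collisions, so the cheapest trajectory may
        # go straight through an obstacle.
        return time_taken_s + 0.01 * energy_j

    # An optimiser minimising this cost will exploit both gaps (maximum
    # throttle, obstacles ignored) without "failing" any stated rule.

Nothing in either function is wrong in itself; the problem is what was
left unstated.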



Imagine if some of the cost criteria/requirements that these AI systems
are grown for were safety requirements. It would be interesting to see
what ingenious solutions would be identified, and whether they would in
fact be safe under a human interpretation.


E.g. if "no harm to humans", as interpreted through a thermal sensor,
means that the thermal readings neither drop below nor rise above certain
levels, then chopping a human's head off and keeping it at the right
setting in a slow cooker may be interpreted as a perfectly safe
solution...
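
A minimal sketch of that failure mode in Python (the band, the world
states, and the names are all invented for illustration):

    SAFE_BAND_C = (36.0, 38.0)  # "human body temperature", per the rule as written

    def proxy_safe(thermal_reading_c):
        # The rule as stated: readings must stay within the band.
        lo, hi = SAFE_BAND_C
        return lo <= thermal_reading_c <= hi

    # Two world states the sensor cannot tell apart:
    healthy_human = {"alive": True,  "thermal_reading_c": 37.0}
    slow_cooker   = {"alive": False, "thermal_reading_c": 37.0}

    for state in (healthy_human, slow_cooker):
        # Both pass: the predicate constrains the sensor, not the human.
        print(state, "->", proxy_safe(state["thermal_reading_c"]))

The predicate is "correct" with respect to the rule as written; the gap
lives entirely in the translation from "harm" to "thermal reading".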



It almost seems that the bar for defining the simulation environment would
need to be raised to the same level as that for defining the safety
requirements in order even to begin a claim of AI safety. Where the AI
interacts with the real world, the translation of the safety requirements
into terms that can be observed through the AI's sensors needs close
consideration.
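
One way to make that translation reviewable is to state each sensor-space
assumption next to the check it licenses. A rough sketch (sensors,
thresholds, and assumptions are all invented here):

    from dataclasses import dataclass

    @dataclass
    class SensorFrame:
        thermal_c: float
        heartbeat_bpm: float
        lidar_min_dist_m: float

    def no_harm_to_humans(f):
        # Sensor-space rendering of "do not harm humans". Each assumption
        # is itself a potential gap and needs its own supporting argument:
        #   A1: every human in range shows up in the lidar return.
        #   A2: a heartbeat of 40-180 bpm implies the person is unharmed.
        #   A3: a body temperature of 35-39 C implies no thermal injury.
        return (f.lidar_min_dist_m > 0.5              # A1: keep clear
                and 40.0 <= f.heartbeat_bpm <= 180.0  # A2
                and 35.0 <= f.thermal_c <= 39.0)      # A3

Writing A1-A3 down does not make them true, but it does turn each one
into something a safety assessor can challenge.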


In either case, I look forward to the translation of the current legalese
definition of "safe" into an unambiguous set of rules/requirements for AI
consumption.


Dariusz

On Sat, Nov 10, 2018 at 10:32 PM Olwen Morgan <olwen at phaedsys.com> wrote:

>
> On 10/11/2018 04:50, Peter Bernard Ladkin wrote:
>
> <snip>
>
> There is no connection made from these features which constitute "AI
> safety" to harm caused to people or the environment, and damage to
> things, avoidance of which is the usual definition of safety.
>
> <snip>
>
>
> With due respect, Peter, this seems to me to be missing the wood for the
> trees. The only way we'll ever address the problems associated with
> using AI in critical systems is to build experience of what can go
> wrong. AFAI can see (maybe wrongly - it's not my field) with current
> knowledge, we would be hard pressed even to classify different types of
> AI cock-up. Until we can do that, we won't be able to devise effective
> systemic countermeasures.
>
> Olwen
>