[SystemSafety] Failure Modes in ML

Bruce Hunter brucer.hunter at gmail.com
Tue Dec 17 00:31:13 CET 2019


Thanks Peter,

I found this is also available as a correctly formatted PDF at
https://arxiv.org/abs/1911.11034

Victoria Krakovna also maintains an AI safety resources page which includes
database records of AI failures. This is at
https://vkrakovna.wordpress.com/ai-safety-resources/

Artificial Intelligence, Machine Learning and Deep Learning are evolving
fields, especially when it comes to dependability. The Cyber-Physical
Systems and Cognitive Systems domains are sort of at the stage IT was at
when it was rapidly introduced into OT systems back around the turn of the
century (it sounds so old to say that).

I think it is a bit too early to dismiss ML failures as just software or
systematic failures. True ML failures, like systematic failures (IEC
61508-4, 3.6.6), are "related in a deterministic way to a certain cause";
in this case the cause is the "learning" process, the environment, and the
data. Can they be "eliminated by a modification of the design or of the
manufacturing process"? That depends, in part, on whether learning is
confined to the design process, with the validated model then locked and
baselined, or whether the system continues to learn in operation and thus
acquires more "systematic faults".
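To make that distinction concrete, here is a minimal sketch of my own (not
from the taxonomy or the standard), using scikit-learn's SGDClassifier as a
stand-in for any learned component; the data and the drift are invented
purely for illustration.

# Illustrative sketch: a model baselined after design-phase learning versus
# one that keeps learning from operational data. Assumes numpy and
# scikit-learn are available; the data here is synthetic.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_design = rng.normal(size=(200, 2))
y_design = (X_design[:, 0] + X_design[:, 1] > 0).astype(int)

# Case 1: learning is confined to the design phase; the validated model is
# locked and baselined, so its behaviour is fixed and reviewable.
baselined = SGDClassifier(random_state=0).fit(X_design, y_design)

# Case 2: the system continues to learn in service, so any "systematic
# faults" it acquires depend on whatever operational data it happens to see.
online = SGDClassifier(random_state=0)
online.partial_fit(X_design, y_design, classes=np.array([0, 1]))

X_ops = rng.normal(loc=0.5, size=(50, 2))              # shifted operational data
y_ops = (X_ops[:, 0] - X_ops[:, 1] > 0).astype(int)    # the relation has drifted
online.partial_fit(X_ops, y_ops)                       # baselined model is untouched

X_test = rng.normal(size=(20, 2))
print("baselined:", baselined.predict(X_test))
print("online:   ", online.predict(X_test))

The point of the contrast is only that the second model's failure behaviour
is a moving target, which complicates any claim that its systematic faults
have been "eliminated by a modification of the design".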

AI is an interesting field, and its possible side effects are not
necessarily well thought through, as other fields of application have
shown. In my opinion, with disruptive technology, governments act a bit
like the Emperor in Andersen's fairy tale: they are wowed by the buzz but
not willing to contemplate the risks. The Australian Government learnt this
with its crazy Robodebt Scheme for welfare fraud, which, after many years
and much damage to vulnerable people, it finally had to admit was a
failure. The Australian Human Rights Commission (yes, we do have one)
released a report on this today at
https://tech.humanrights.gov.au/sites/default/files/inline-files/TechRights2019_DiscussionPaper_Summary.pdf

For those who want more on the impact of AI and ML failures on humans and
on ethics, "Made by Humans" by Ellen Broad is a good read.

I don't think we need to call in Sarah Connor just yet, though ;-)

Bruce Hunter

On Mon, 16 Dec 2019 at 00:35, Peter Bernard Ladkin <ladkin at causalis.com>
wrote:

> For those interested in how current analytical methods can adapt to new SW
> techniques, here is a
> taxonomy of ML failure modes:
>
> https://docs.microsoft.com/en-us/security/failure-modes-in-machine-learning
>
> It was noted by Bruce Schneier, who is at Berkman-Klein, where some of the
> authors are, and
> published in his Crypto-Gram today, which is where I got it from.
>
> PBL
>
> Prof. Peter Bernard Ladkin, Bielefeld, Germany
> MoreInCommon
> Je suis Charlie
> Tel+msg +49 (0)521 880 7319  www.rvs-bi.de
>