[SystemSafety] AI and safety

Matthew Squair mattsquair at gmail.com
Tue Nov 13 03:17:36 CET 2018


I may have made this comment before on this list, but most of Saltzer and Schroeder’s security principles also have applicability to system safety. 

Their ‘economy of mechanism’ principle is as good a justification as I know of for keeping safety functions small-scale and simple. 

Link below.

https://criticaluncertainties.com/reference/saltzer-and-schroeders-principles/
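As an illustrative sketch of my own (not from the link), economy of mechanism favours a safety function small enough to be reviewed exhaustively, line by line. A hypothetical overtemperature trip, with an assumed limit, might look like:

```python
# Illustrative only: a hypothetical overtemperature trip, kept deliberately
# small so the entire safety function can be inspected and argued over.

TRIP_LIMIT_C = 85.0  # assumed limit for this example


def overtemp_trip(reading_c):
    """Return True (trip) unless the reading is a valid number below the limit.

    Fails safe: any invalid input trips the function rather than being ignored.
    """
    if not isinstance(reading_c, (int, float)):
        return True  # non-numeric input: trip
    if reading_c != reading_c:  # NaN compares unequal to itself: trip
        return True
    return reading_c >= TRIP_LIMIT_C
```

The point is not the logic itself but that a dozen lines can be reviewed completely, whereas a sprawling mechanism cannot.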


Regards,



> On 12 Nov 2018, at 10:39 pm, paul_e.bennett at topmail.co.uk wrote:
> 
> On 12/11/2018 at 7:59 AM, "Dariusz Walter" <dariusz at walterki.com> wrote:
>> 
>> Based on the brief descriptions in the database, all the AI solutions seem
>> to play within the set rules that are explicitly defined. None of the AI
>> solutions seemed to have "failed" any of the set rules. They definitely
>> seem to be reliable. In fact, on review of the issues in the database, I
>> find the AI solutions ingenious.
>> 
>> If anything, in my mind they identify the holes/gaps/assumptions present in
>> the explicit set of rules defined for the task, including
>> 
>> 1. the specifications/behaviour of the environments that these AI systems
>> are grown to work in
>> 
>> 2. the completeness/correctness of the rules/cost functions that these AI
>> systems are supposed to meet
>> 
>> Imagine if some of the cost criteria/requirements that these AI systems are
>> grown for were safety requirements. It would be interesting to see what
>> ingenious solutions would be identified, and if they would in fact be safe
>> to a human interpretation.
> 
> Based on the above, one could almost consider using AI in the
> threat-development landscape as a means of testing safety systems.
> In terms of generating a wide range of risk scenarios that could
> breach security and safety protocols, AI might become one basis
> for providing the harshest of tests.
> 
> Just a thought.
> 
> I still think that safety systems that are as simple as possible, kept in
> as secure a zone as can be made, and given adequate and ongoing
> review throughout development and operation, are how we do our
> best job. The standards are good guidance for how to get there, but
> that does not mean we cannot innovate to make the best systems.
> 
> Regards
> 
> Paul E. Bennett IEng MIET
> Systems Engineer
> Lunar Mission One Ambassador
> -- 
> ********************************************************************
> Paul E. Bennett IEng MIET.....
> Forth based HIDECS Consultancy.............
> Mob: +44 (0)7811-639972
> Tel: Due to relocation - new number TBA. Please use Mobile.
> Going Forth Safely ..... EBA. www.electric-boat-association.org.uk ..
> ********************************************************************
> 
> _______________________________________________
> The System Safety Mailing List
> systemsafety at TechFak.Uni-Bielefeld.DE
> Manage your subscription: https://lists.techfak.uni-bielefeld.de/mailman/listinfo/systemsafety