[SystemSafety] AI and safety

Olwen Morgan olwen at phaedsys.com
Mon Nov 12 11:41:03 CET 2018


From previous experience, I have the impression that there may be 
benefit in using limited rule-based SCADA systems on flight decks.

One example relates to fuel consumption. When I was once working for a 
manufacturer of large commercial aircraft, some of my colleagues were 
looking at ways to improve SCADA instrumentation for fuel systems. A 
case in point was the near-disaster of Air Transat Flight 236, which 
ran out of fuel in mid-Atlantic and was extremely lucky to make a safe 
landing at Lajes in the Azores. A very simple rule-based system could 
have checked fuel consumption and remaining fuel against the flight 
plan and warned the pilots very early on that something was wrong.
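
To make the idea concrete, here is a minimal sketch of such a rule in 
Python (every name, unit and threshold below is made up for 
illustration; a real implementation would work from the actual 
fuel-quantity and flight-plan data on the aircraft):

    # Hypothetical rule-based fuel check - illustrative names and
    # thresholds only, not taken from any real aircraft system.
    from dataclasses import dataclass

    @dataclass
    class PlanPoint:
        elapsed_min: float       # minutes since take-off
        planned_fuel_kg: float   # fuel expected to remain at this time

    def expected_fuel_kg(plan, elapsed_min):
        """Linearly interpolate the planned remaining fuel at a given time."""
        for a, b in zip(plan, plan[1:]):
            if a.elapsed_min <= elapsed_min <= b.elapsed_min:
                frac = (elapsed_min - a.elapsed_min) / (b.elapsed_min - a.elapsed_min)
                return a.planned_fuel_kg + frac * (b.planned_fuel_kg - a.planned_fuel_kg)
        return plan[-1].planned_fuel_kg

    def fuel_rule(plan, elapsed_min, measured_fuel_kg, tolerance_kg=500.0):
        """Rule: measured fuel must not fall more than tolerance_kg below
        the flight-plan prediction. Returns a warning string or None."""
        shortfall = expected_fuel_kg(plan, elapsed_min) - measured_fuel_kg
        if shortfall > tolerance_kg:
            return ("FUEL WARNING: %.0f kg below flight plan at t=%.0f min"
                    % (shortfall, elapsed_min))
        return None

    # e.g. planned 46 t at take-off, 20 t after five hours
    plan = [PlanPoint(0, 46000), PlanPoint(300, 20000)]
    print(fuel_rule(plan, 240, 22000))   # ~3200 kg short -> warning

Sampled, say, once a minute from data already on the avionics bus, a 
gross discrepancy of the Flight 236 kind would trip a rule like this 
well before fuel exhaustion.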

I'm minded to suggest that a beneficial use of rule-based techniques 
would be to check the truth of required system-state invariants 
continually throughout the flight. Being rule-based, this could, AFAI 
can see, be done without embracing full-on AI, whose safety 
characteristics would be hard to analyse. I suspect that several 
worthwhile safety improvements could be had by straightforward, 
incremental use of rule-based or data-fusion-based invariant-checking 
methods across the whole range of airborne systems.
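
By way of illustration only (every signal name and limit below is 
invented for the sketch), such an invariant monitor need be nothing 
more exotic than a table of predicates evaluated over each sensor 
snapshot:

    # Hypothetical invariant monitor - names and limits are illustrative.
    INVARIANTS = {
        "fuel_balance": lambda s: abs(s["fuel_left_kg"] - s["fuel_right_kg"]) < 1000,
        "fuel_vs_flow": lambda s: s["fuel_total_kg"] >= (s["fuel_at_takeoff_kg"]
                                                         - s["integrated_flow_kg"]
                                                         - 500),
        "cabin_alt": lambda s: s["cabin_alt_ft"] < 10000,
    }

    def check_invariants(snapshot):
        """Return the names of any invariants violated by this snapshot."""
        return [name for name, pred in INVARIANTS.items() if not pred(snapshot)]

    # e.g. a snapshot consistent with a large fuel leak:
    snapshot = {"fuel_left_kg": 8000, "fuel_right_kg": 3000,
                "fuel_total_kg": 11000, "fuel_at_takeoff_kg": 46000,
                "integrated_flow_kg": 20000, "cabin_alt_ft": 7000}
    print(check_invariants(snapshot))   # -> ['fuel_balance', 'fuel_vs_flow']

The point is that each predicate can be traced directly to a system 
requirement and analysed on its own, which is what makes this approach 
tractable in a way that full-on AI is not.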

Better, perhaps, to pick the easy-win, low-hanging fruit before selling 
our souls wholesale to the AI hype-merchants?

Olwen



On 12/11/2018 10:09, Rob Alexander wrote:
> The DeepMind safety blog has taken to calling this the distinction
> between the "revealed" and "ideal" specifications ---
> https://medium.com/@deepmindsafetyresearch/building-safe-artificial-intelligence-52f5f75058f1
>
> (This is, of course, not a problem restricted to machine learning, or
> even to software. Machine learning just provides new ways to achieve
> such a discrepancy)
>
>
> Rob
> On Mon, 12 Nov 2018 at 08:58, Matthew Squair <mattsquair at gmail.com> wrote:
>> For the short/medium-term AI challenges, these issues could be characterised as ‘The Monkey’s Paw’ problem: you get what you specify, but it’s sure not what you want. Unfortunately, with AI we can’t just wish away the results…
>>
>> I think it was Nancy Leveson who pointed out in Safeware that safety should always be evaluated against what is desired rather than what is specified, because the specification can always be wrong.
>>
>> Regards,
>>
>> On 12 Nov 2018, at 12:22 pm, Dariusz Walter <dariusz at walterki.com> wrote:
>>
>> Based on the brief descriptions in the database, all the AI solutions seem to play within the set rules that are explicitly defined. None of the AI solutions seems to have "failed" any of the set rules. They definitely seem to be reliable. In fact, on reviewing the issues in the database, I find the AI solutions ingenious.
>>
>>
>>
>> If anything, in my mind they identify the holes/gaps/assumptions present in the explicit set of rules defined for the task, including
>> 1. the specifications/behaviour of the environments that these AI systems are grown to work in
>> 2. completeness/correctness of the rules/cost functions that these AI systems are supposed to meet
>>
>>
>>
>> Imagine if some of the cost criteria/requirements that these AI systems are grown for were safety requirements. It would be interesting to see what ingenious solutions would be identified, and if they would in fact be safe to a human interpretation.
>>
>> E.g. if harm to humans, as interpreted through a thermal sensor, means that the thermal readings neither drop below nor rise above certain levels, then, say, chopping the human's head off and putting it on the right setting in a slow cooker may be interpreted as a perfectly safe solution...
>>
>> It almost seems that the bar for defining the simulation environment would need to be raised to the same level as that for defining the safety requirements in order even to begin a claim for AI safety. Where the AI interacts with the real world, the translation of the safety requirements into terms observable through the AI's sensors needs to be closely considered.
>>
>> In either case, I look forward to the translation of the current legalese definition of "safe" into an unambiguous set of rules/requirements for AI consumption.
>>
>> Dariusz
>>
>> On Sat, Nov 10, 2018 at 10:32 PM Olwen Morgan <olwen at phaedsys.com> wrote:
>>>
>>> On 10/11/2018 04:50, Peter Bernard Ladkin wrote:
>>>
>>> <snip>
>>>
>>> There is no connection made from these features which constitute "AI
>>> safety" to harm caused to people or the environment, and damage to
>>> things, avoidance of which is the usual definition of safety.
>>>
>>> <snip>
>>>
>>>
>>> With due respect, Peter, this seems to me to be missing the wood for the
>>> trees. The only way we'll ever address the problems associated with
>>> using AI in critical systems is to build experience of what can go
>>> wrong. AFAI can see (maybe wrongly - it's not my field) with current
>>> knowledge, we would be hard pressed even to classify different types of
>>> AI cock-up. Until we can do that, we won't be able to devise effective
>>> systemic countermeasures.
>>>
>>> Olwen
>>>

