[SystemSafety] AI and the virtuous test Oracle

Phil Koopman koopman.cmu at gmail.com
Thu Jun 22 03:32:15 CEST 2023


Les,

Since you welcome riffs, I have something that is not as 
all-encompassing, but might have more immediate application.

I propose that, to the degree that "AI" technology is deployed in a 
way that supplants practical human judgement, the manufacturer of that 
system (in some cases just the AI part, if it is an add-on component) 
should be held accountable for any action (or inaction) that, had it 
been associated with the human who was supplanted, would have 
constituted negligence.  This should include situations in which a 
human is put in the untenable position of supervising an AI under 
unreasonable demands, amounting to a "moral crumple zone" approach 
(https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2757236). 
If an AI is in substantive control of such a situation, liability for 
negligence should attach to the manufacturer.

This leads to a narrower oracle than you propose, but perhaps a still 
useful one. If a loss event is caused by a lack of "reasonable" 
behavior by an AI, the manufacturer is on the hook for negligence, and 
the AI/manufacturer owes the same duty of care that the supplanted 
human would have owed to whoever/whatever might be affected by that 
negligence. This has the advantage of reusing existing definitions of 
the "reasonable person" that have been hammered out over decades of 
law. (To be sure, that is not in the form of an engineering 
specification, but case law has a pretty robust set of precedents; for 
example, crashing into something after your properly functioning 
vehicle ran a red light is likely to lead to the driver being found 
negligent.)

This does not require the AI to behave the same as people, and is not a 
full recipe for "safe" AI. But it puts a floor on things in a way that 
is readily actionable using existing legal mechanisms and theories. If a 
reasonable person would have avoided a harm, any AI that fails to avoid 
the harm would be negligent.
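
To make that floor concrete as a test oracle, here is a minimal 
sketch in Python. It is illustrative only, not the proposal itself: 
the Scenario fields and the single red-light rule are hypothetical 
stand-ins for a case-law-derived reference model of reasonable human 
behavior.

    from dataclasses import dataclass

    @dataclass
    class Scenario:
        # Hypothetical record produced by one closed-loop test run.
        ran_red_light: bool   # properly functioning vehicle ran a red light
        harm_occurred: bool   # the run ended in a loss event

    def reasonable_person_would_avoid(s: Scenario) -> bool:
        # Toy stand-in for decades of case law: a reasonable driver
        # does not run a red light, so a harm preceded by one was
        # avoidable.
        return s.ran_red_light

    def negligence_oracle(s: Scenario) -> str:
        # The floor: if a reasonable person would have avoided the
        # harm and the AI did not, the outcome is treated as
        # negligence that attaches to the manufacturer.
        if s.harm_occurred and reasonable_person_would_avoid(s):
            return "negligent"
        return "no negligence finding from this test"

    # Example: a loss event after running a red light flags negligence.
    print(negligence_oracle(Scenario(ran_red_light=True,
                                     harm_occurred=True)))

A real oracle would of course replace the toy rule with a much richer 
model of the duty of care; the point is only that pass/fail maps 
directly onto negligent/not-negligent.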

I've worked with a lawyer to propose this approach for automated 
vehicles, and it is starting to get some traction. What I write in this 
post (above) is a generalization of the concept beyond the narrow 
automated vehicle application.
Details here: 
https://safeautonomy.blogspot.com/2023/05/a-liability-approach-for-automated.html

-- Phil


On 6/21/2023 7:14 PM, Les Chambers wrote:
> Hi All
>
> I find myself reflecting on what will become of us.
> As systems engineering best practice is overrun by AI.
>
> Practitioners report that neural networks are eating code.
> Example 1: The vector field surrounding a Tesla motor vehicle is an
> output of a neural network, not the result of software logic. Soon the
> neural net - not code - will generate controls. The size of the code
> base is shrinking. (Elon Musk)
> Example 2: The ChatGPT transformer code base is only 2000 LOC (Mo
> Gawdat, https://youtu.be/bk-nQ7HF6k4)
>
> The intelligence resides in terabytes of data, perceptrons, and
> millions of weighting parameters, all gathered by automated means and
> not subject to human review.
>
> Ergo what will become of our trusty barriers to dangerous failure:
> 1. Safety functions - gone
> 2. Verification - gone
> 3. Code reviews - gone
> 4. Validation - How?
>
> On validation, may I suggest the moral AI: a test oracle built on a
> virtuous dataset, capable of interrogating the target system to
> determine virtue. Test outcomes will morph from pass/fail to
> moral/immoral.
>
> Credible industry players have predicted that soon we will have AIs orders of
> magnitude smarter than us. Especially when they start talking to each other.
> The bandwidth will be eye-watering - the increase in intelligence, vertical.
>
> New barriers are required. Time to develop an AI that is on our side – the side
> of ethics and the moral life. An adult in the room if you like. We should birth
> this creature now and raise it as good parents.
>
> Let us not panic. May I put the proposition: virtue, like creativity, can be
> algorithmic.
> I have a sense of starting from the beginning - tabula rasa. I suggest that
> high-level thinking on the subject could begin with ChatGPT prompts:
> 1. What is the stoic philosopher’s concept of virtue?
> 2. What are the elements of philosophy relevant to AI?
>
> Let us not forget our engineering mission: Guardians of the divine Logos, the
> organizing principle of the universe, responsible for its creation,
> maintenance, and order.
>
> Would anyone care to riff on this?
>
> Les
>
> --
>
> Les Chambers
>
> les at chambers.com.au
> systemsengineeringblog.com
>
> +61 (0)412 648 992

-- 
Prof. Phil Koopman   koopman at cmu.edu
(he/him)             https://users.ece.cmu.edu/~koopman/


