[SystemSafety] The Moral Math of Robots

Matthew Squair mattsquair at gmail.com
Mon Mar 14 12:32:00 CET 2016


As far as 'war bots' go, the current edge of thinking is that we'll fit them with a 'moral governor' and all will be well. I doubt this will work because a) any AI worth its salt would immediately start to game the moral constraints, and b) the current set of constraints is very complex, and compliance is generally only established after the event.
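
To make point a) concrete, here's a minimal sketch in Python of what a rule-based 'moral governor' amounts to. The rule names and action fields are mine, invented for illustration; any real governor would be far more elaborate:

    # Sketch of a rule-based 'moral governor': each rule is a predicate
    # that can veto a proposed action. All names invented for illustration.
    from typing import Callable, Dict, List

    Action = Dict[str, object]       # e.g. {"type": "engage", "target_class": ...}
    Rule = Callable[[Action], bool]  # True = the rule permits the action

    def no_civilian_targets(action: Action) -> bool:
        return action.get("target_class") != "civilian"

    def proportional_force(action: Action) -> bool:
        return action.get("expected_collateral", 0) <= action.get("military_value", 0)

    RULES: List[Rule] = [no_civilian_targets, proportional_force]

    def governor(proposed: List[Action]) -> List[Action]:
        """Pass through only the actions every rule permits."""
        return [a for a in proposed if all(rule(a) for rule in RULES)]

Note what point a) means here: an optimising planner never has to 'break' the governor, it just searches inside whatever the predicates still permit (relabel the target, restructure the engagement) until everything returns True. And point b): each predicate above is a one-liner standing in for judgements that, today, we settle in courtrooms after the event.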

What we're getting into is the automation of higher-level decisions traditionally performed by humans, which axiomatically involves making ethical decisions. Much as we've found automation flaky and brittle at lower levels of decision making, I predict we'll find it so at the higher levels. And does everyone realize how many of these wee beasties there'll end up being? And the implications of that?

Of course some software engineer sitting in Tokyo or Detroit or Wolfsburg is no doubt firmly convinced they can write some snappy code to deal with such problems. And they undoubtedly will... write such code. 

Personally I think that (like the challenge of recombinant DNA before it) we need to put the brakes on (no pun intended) until we have an agreed set of enforceable guidelines; see the 1975 Asilomar conference as an example of how to do it.

Steps off soap box 

Matthew Squair

MIEAust, CPEng
Mob: +61 488770655
Email: Mattsquair at gmail.com
Web: http://criticaluncertainties.com

> On 14 Mar 2016, at 9:28 PM, Daniel Grivicic <grivsta at gmail.com> wrote:
> 
> I thought about this same problem quite briefly yesterday.
> 
> Some current research [1]:
> 
> "people often feel a utilitarian instinct to save the lives of others and sacrifice the car’s occupant, except when that occupant is them"
> 
> From the safety of my lounge chair I'd pick the oak tree. There are still unknowns (speed, conditions, etc.) but ultimately I guess I would be the sacrifice. As noted in the reference, the salesperson's pitch may be different.
> 
> My understanding is that, as a passenger in a vehicle, you should sit behind the driver, as drivers generally try to avoid hurting themselves.
> 
> Are there any current standards which could be a starting point?
> 
> [1] http://theconversation.com/of-cats-and-cliffs-the-ethical-dilemmas-of-the-driverless-car-49778
> 
> Cheers,
> 
> Daniel.
> 
>> On Sun, Mar 13, 2016 at 11:01 PM, Les Chambers <les at chambers.com.au> wrote:
>> All
>> 
>> My hometown Brisbane is currently hosting the world science Festival. This afternoon I attended a thought-provoking session entitled The Moral Math of Robots. The session addressed the question, "Can machines learn right from wrong? As the first generation of driverless cars and battlefield war bots filter into society, scientists are working to develop moral decision making skills in robots. Break or swerve? Shoot or stand down? ...
>> 
>> An interesting thought came up during the session. It was triggered by a variation on the well-known trolley problem (https://www.washingtonpost.com/news/innovations/wp/2015/12/29/will-self-driving-cars-ever-solve-the-famous-and-creepy-trolley-problem/).
>> 
>> Picture this:
>> 
>> You are in your driverless car, progressing down a two-lane road with oncoming traffic. Without warning a pedestrian moves from behind a large oak tree growing on the footpath and steps right in front of your vehicle. At the same time a vehicle is heading your way in the oncoming lane. The laws of physics dictate that your vehicle cannot stop in time to avoid the pedestrian. There are three options:
>> 
>> 1. Mow down the pedestrian.
>> 
>> 2. Swerve into the oncoming lane with the certainty of a head-on collision.
>> 
>> 3. Swerve onto the footpath with the certainty of a collision with the oak tree.
>> 
>> What ups the ante on this hitherto academic problem is that it is now real. And worse, a driverless car systems engineer has already made a decision for us on the control actions to be taken in this class of scenario.
>> 
>> The best of a bad lot of solutions is probably the collision with the oak tree. Given that the vehicle will have airbags, the probability of harm is reduced.
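>> 
>> To see what 'a decision already made for us' looks like, here is a toy sketch in Python. The option names and harm numbers are pure invention; choosing them is precisely the ethical judgement at issue:
>> 
>>     # Toy sketch: the control decision reduced to expected-harm
>>     # minimisation. All probabilities are invented for illustration.
>>     OPTIONS = {
>>         "brake_straight":  {"pedestrian": 0.9, "occupant": 0.1, "third_party": 0.0},
>>         "swerve_oncoming": {"pedestrian": 0.0, "occupant": 0.7, "third_party": 0.7},
>>         "swerve_tree":     {"pedestrian": 0.0, "occupant": 0.4, "third_party": 0.0},
>>     }
>> 
>>     # The weights are the vendor's hidden ethical policy: whose harm
>>     # counts for how much. Equal weights pick the oak tree; weight the
>>     # occupant heavily enough and the pedestrian loses.
>>     WEIGHTS = {"pedestrian": 1.0, "occupant": 1.0, "third_party": 1.0}
>> 
>>     def choose(options, weights):
>>         cost = lambda harms: sum(weights[k] * p for k, p in harms.items())
>>         return min(options, key=lambda name: cost(options[name]))
>> 
>>     print(choose(OPTIONS, WEIGHTS))  # -> 'swerve_tree' with equal weights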
>> 
>> But it doesn't end there. The goodness of this obvious solution is a matter of opinion.
>> 
>> Picture this: you go down to your local driverless car dealer, ready to pony up money for your first shiny new robotic chauffeur, and you ask the sales guy this giant-killing question, "Given {the above scenario}, is this car programmed to sacrifice me or the pedestrian?"
>> 
>> An honest person might answer, "Well, you of course; it's the only logical thing to do."
>> 
>> A sales person on track for sales leader of the year might answer, "Why, the pedestrian of course; he was careless, he had it coming."
>> 
>> Are you going to buy a car that is programmed to sacrifice you?
>> 
>> Are you going to buy a car that is programmed to mow down pedestrians in an emergency?
>> 
>> Personally I don't like either solution. I'd put my money in my pocket and go home (I'll stick with my own decision-making process for mowing people down).
>> 
>> Referring to a previous discussion, is this not a case for "unambiguous standards"?
>> 
>> The solution to this problem cannot be left in the hands of individual vendors. This is international standards territory. We need an international ethical consensus on whether we should mow down pedestrians or sacrifice passengers. Unless this question is settled, no vendor will be able to sell these vehicles.
>> 
>> Ladies and gentlemen, we are embarked. Which will it be?
>> 
>> Cheers
>> 
>> Les
>> 
>> -------------------------------------------------
>> Les Chambers
>> Director
>> Chambers & Associates Pty Ltd
>> www.chambers.com.au
>> 
>> Blog: www.systemsengineeringblog.com
>> 
>> Twitter: @ChambersLes
>> M: 0412 648 992
>> Intl M: +61 412 648 992
>> Ph: +61 7 3870 4199
>> Fax: +61 7 3870 4220
>> les at chambers.com.au
>> -------------------------------------------------
>> 
>> 