Robot killer ethics, continued

The New York Times has a great read today on Dr. Ron Arkin’s research for the US Army. Dr. Arkin recognizes that armed vehicles, including aircraft, are only a few years away from the ability to autonomously kill people on the battlefield. I’ve written about his research before here: “On the ethics of killing with robots”.

The NYT article says:


For example, in one situation playing out in Dr. Arkin’s computers, a robot pilot flies past a small cemetery. The pilot spots a tank at the cemetery entrance, a potential target. But a group of civilians has gathered at the cemetery, too. So the pilot decides to keep moving, and soon spots another tank, standing by itself in a field. The pilot fires; the target is destroyed.

In Dr. Arkin’s robotic system, the robot pilot would have what he calls a “governor.” Just as the governor on a steam engine shuts it down when it runs too hot, the ethical governor would quash actions in the lethal/unethical space.
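
To make the “governor” idea concrete, here is a minimal sketch of how such a veto might look in code. Everything below (the field names, the zero-civilians threshold) is my own illustrative assumption, not anything from Dr. Arkin’s actual system.

```python
# Illustrative sketch only -- not Dr. Arkin's actual design.
# The idea: the attack decision is gated by an "ethical governor"
# that vetoes any engagement predicted to fall in the
# lethal/unethical space, e.g. civilians near the target.

from dataclasses import dataclass

@dataclass
class Target:
    is_military: bool          # e.g. a tank
    civilians_nearby: int      # estimated civilians in the engagement area

def governor_permits(target: Target) -> bool:
    """Return True only if the engagement falls in the permitted space."""
    return target.is_military and target.civilians_nearby == 0

def engage(target: Target) -> str:
    if governor_permits(target):
        return "fire"
    return "hold fire"          # the governor quashes the action

# The two situations from the NYT example:
print(engage(Target(is_military=True, civilians_nearby=40)))  # tank at the cemetery -> hold fire
print(engage(Target(is_military=True, civilians_nearby=0)))   # tank alone in a field -> fire
```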



Some of you also might remember the US Navy brief I’ve posted here before, which proposes a possible solution to this problem. Program the robots to aim and fire only at weapons, not people. Under the laws of armed conflict, anyone who dies would be considered “collateral damage”. (I’m not recommending this solution, but I know it has been proposed in the Pentagon.)
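
In code, that proposal would amount to little more than a target-class filter. Again, this is a hypothetical sketch with made-up classifier labels, not anything taken from the brief itself.

```python
# Hypothetical sketch of the "aim only at weapons" idea: a target-class
# filter in which people are never valid aim points.

WEAPON_CLASSES = {"tank", "artillery", "missile_launcher"}

def select_aim_point(detections):
    """Return the first detection classified as a weapon; never a person."""
    for det in detections:
        if det["class"] in WEAPON_CLASSES:
            return det
    return None   # no weapon seen, no shot taken

detections = [
    {"class": "person", "id": 1},
    {"class": "tank", "id": 2},
]
print(select_aim_point(detections))   # -> {'class': 'tank', 'id': 2}
```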




10 Responses to Robot killer ethics, continued

  1. John S. 25 November, 2008 at 6:01 pm #

    So much for Dr. Asimov’s Three Laws of Robotics. (Four if you count the Zeroth Law…)

  2. Royce 25 November, 2008 at 6:17 pm #

    Seems like an odd subject to bring up now after countless people have been killed using land and sea mines. Does it really matter whether the new weapons have cool robot brains? Autonomous killing has been going on for generations.

    “Ethical governor”? Give me a break.

  3. Stephen Trimble 25 November, 2008 at 6:47 pm #

    Mines are a good example. The Navy brief I link to also mentions the example of the AIM-120, which turns on its sensor after entering a kill box.

    I think the distinction is between active and passive decision-making. A mine doesn’t choose to pull the trigger. It just lies there until it blows up. An autonomous UAV makes a discrete choice. There is a difference, no?

  4. Royce 25 November, 2008 at 9:13 pm #

    Arguably there’s a difference, but it seems minor to me. The concept is the same: Under X condition, do something and detonate. In reality, that’s all the sophisticated robot brain is going to do. It’s not going to be making a moral judgment. It’s just going to be programmed to respond a certain way under a set of anticipated conditions.

    The U.S. military has fought long and hard to keep its freedom to lay mines when it determines it is militarily necessary. Given that mines employ far simpler targeting than the AI at issue here, we wouldn’t expect the military to have a tighter ethics policy for the use of robots than it does for mines.
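
Royce’s “under X condition, do Y” framing is easy to make concrete: a pressure-fuzed mine and an autonomous targeting routine reduce to the same trigger shape, and only the condition being tested differs. A purely illustrative sketch, with invented thresholds:

```python
# Illustrative only: the same trigger shape for a mine and a "robot brain".

def mine_should_detonate(pressure_kg):
    # Mechanical condition: enough weight on the pressure plate.
    return pressure_kg > 100.0

def uav_should_fire(target_class, confidence):
    # Electronic condition: sensor classification above a threshold.
    return target_class == "tank" and confidence > 0.9

# Neither function "decides" in any moral sense; each just evaluates
# the conditions its designers anticipated.
print(mine_should_detonate(150.0))      # True
print(uav_should_fire("tank", 0.95))    # True
```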

  5. Starviking 26 November, 2008 at 12:24 pm #

    The problem with a simple logic like ‘Under X condition, do Y’ is that the programmer cannot account for all the ways this condition can be triggered.

    Look at all the reports of wedding parties getting attacked in Afghanistan because they were firing guns in the air in celebration. That’d probably trigger an automatic ‘killer robot’ attack.

    There’s also the question of whether the sensors are up to discriminating such false targets.

    The other thing is that it takes the decision making out of the command structure completely.

    As for the mine analogy – mines aren’t mobile, and they do not have to decide who to attack. The ethical implications of someone wandering into a clearly marked minefield (I am assuming the US Army would clearly mark them) are very different from those that would apply to an autonomous munition.

  6. Royce 26 November, 2008 at 2:31 pm #

    “As for the mine analogy – mines aren’t mobile, and they do not have to decide who to attack.”

    Oh, but they do decide who to attack. It’s just a mechanical decision-making process rather than an electronic one. And whether the mine is mobile is another minor difference when the issue is whether the killing is done without a human brain deciding whether to trigger the weapon (and note that some naval mines are mobile).

    A machine is not going to make decisions the way a human brain does. Two different humans are capable of making completely opposite decisions based on the same set of circumstances. The military isn’t going to want that level of unpredictability in a machine. So the machine will be programmed to respond to a certain set of conditions before firing. It won’t be capable of moral choices or dealing with the finer shades of the rules of engagement.

  7. Starviking 27 November, 2008 at 9:28 am #

    Royce,

    I cannot disagree with you more. If we go with your example of ‘mechanical decision making’, then even tree branches are making decisions about when to fall.

    As for the likely differences between decision making in machines and humans – agreed. The problem is this: if a machine is programmed to respond to a certain set of conditions, the programmers have to cover all likely permutations of those conditions.

    Example 1: Strike enemies with heavy weapons as quickly and as hard as possible.

    Problem: What if the enemy is holed up in a hospital, school or place of worship? Big consequences – especially if the enemy knows the programmed conditions and uses them to achieve a propaganda coup.

    Example 2: Attack armed insurgents whenever possible.

    Problem: How do you define an armed insurgent? If it’s a guy with a gun, then you’re attacking all adult males in Afghanistan. How about covert ops teams?

    I’m sure there are many more exceptional conditions that will catch out the machine programmers.
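
Starviking’s examples translate directly into the exception-list problem: every rule the programmers write needs a list of carve-outs, and the list is never complete. A toy sketch, with all labels hypothetical:

```python
# Toy sketch of the exception-list problem: the rule keeps growing
# and still misses the cases nobody anticipated.

PROTECTED_SITES = {"hospital", "school", "place_of_worship"}  # Example 1 carve-outs

def cleared_to_attack(target):
    if target["location"] in PROTECTED_SITES:
        return False                 # Example 1: enemy holed up in a hospital
    if target["class"] == "armed_person":
        # Example 2: "armed insurgent" is not a sensor-measurable property.
        # Celebratory gunfire, covert ops teams, and ordinary armed civilians
        # all look the same to this rule.
        return target.get("hostile_act_observed", False)
    return target["class"] == "tank"

# Each carve-out above was added after someone thought of the failure case;
# the cases nobody thought of are the ones that get through.
print(cleared_to_attack({"class": "armed_person", "location": "village"}))  # False, but is that the right call?
```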

  8. Mike Wheatley 28 November, 2008 at 9:30 pm #

    Starviking,

    You are greatly over-estimating the intelligence of computers.
    They have no idea what the words “strike”, “attack”, “armed”, “insurgent”, “enemy”, or “weapon” mean, not a clue about the concepts “quickly” or “hard”, and the entire concept of “possible” is alien to the basis of their programming. (There is no try or not-try for a computer, only do or not-do.)

    The weapon release system will be no different in concept to a modern Aegis or Patriot SAM system. It will analyse its sensor readings, generate a conclusion, and pass that on to the human operators, for them to press the “fire” button. The errors that occur will be the same. The only difference is that the human fire authorisation officer will be in Portsmouth, not on the ship / ground battery / F-16.

    The (proposed) autonomy will be in the navigation system. In addition to direct commands as to where to go, the machine will also use the sensor analysis to determine what to steer towards or away from, and will change its course without asking.

    The next step from that is to parse the sensor readings for “a missile coming towards you” and, in addition to trying to avoid it, to deploy decoys.
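
Mike’s description, in which the machine proposes engagements and steers itself while only a human authorises weapon release, maps onto a simple pipeline. The sketch below uses invented names and is not any real Aegis, Patriot or UAV architecture.

```python
# Sketch of Mike's division of labour: the machine analyses and navigates
# on its own; a remote human still makes the "fire" decision.

def analyse_sensors(tracks):
    """Generate engagement recommendations from sensor tracks."""
    return [t for t in tracks if t.get("classified_as") == "hostile"]

def request_release(recommendation):
    """Weapon release still requires a human 'fire' decision, made remotely."""
    answer = input(f"Engage track {recommendation['id']}? [y/N] ")
    return answer.strip().lower() == "y"

def update_course(tracks, planned_waypoint):
    """Navigation is the autonomous part: steer away from threats without asking."""
    if any(t.get("classified_as") == "incoming_missile" for t in tracks):
        return "evasive_route"   # and, as the next step, deploy decoys
    return planned_waypoint
```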

  9. Starviking 1 December, 2008 at 10:48 am #

    Hi Mike,

    I was thinking of those words in the context of the requirements documents that would be drawn up for such a machine. I agree on the computer intelligence problem.

    Your description of the machine is certainly closer to reality than the one described in the article Stephen linked.

    Cheers

  10. Matt 1 December, 2008 at 4:59 pm #

    “(There is no try or not-try for a computer, only do or not-do.)”

    Well, there is a try, it’s just called an error state.
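
Both framings can be read literally: the program either executes the branch or it doesn’t, and “try” only shows up as an error state the code has to handle. A trivial illustration, with invented names:

```python
# "Do or not-do" vs. "try": the attempt either succeeds or raises an
# error state that the program must be written to handle explicitly.

def release_weapon(interlock_closed):
    if not interlock_closed:
        raise RuntimeError("release inhibited")   # the "error state"
    print("released")

try:
    release_weapon(interlock_closed=False)
except RuntimeError as err:
    print(f"attempt failed: {err}")               # the closest thing to a failed "try"
```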
