
State of the art in AI battlefield ethics


Can programmers specify a sufficient code of ethics to govern autonomous robots in battle? I believe the argument for that proposition falls down in several places.

First, life is much more complex than the proposed codes below permit. “Harm” is an extremely complex concept, grounded not only in physiology but also in human dignity, local norms, and learned patterns of interaction. Obedience doesn’t cut the mustard.

Second, ethical dilemmas are hard problems: experts with a lifetime of experience can disagree on the content of the law, the hierarchy of norms, and the intent of the lawmaker.

Third, the responsibility for the robot’s actions cannot rest fully with the robot in our system. Both the manufacturer/programmer and the operator of the robot share in the responsibility for the robot’s actions, and they are full and complete members of society. Dogs and children often put their owners and parents on the hook for negligent behavior, precisely because they lack the ethical competence of full personhood. Since we assume they are incompetent to make good decisions, ethics requires adults to treat them differently. Who would want K-9 commandos, no matter how lethal in combat, to have tactical authority over target identification and lethal use of force?

Dr. Ronald C. Arkin, Regents’ Professor and Associate Dean for Research at the School of Interactive Computing at Georgia Tech, was recently profiled in an h+ article.

As reported in a recent New York Times article, Dr. Arkin describes some of the potential benefits of autonomous fighting robots. They can be designed without a sense of self-preservation and, as a result, with “no tendency to lash out in fear.” They can be built without anger or recklessness, and they can be made invulnerable to what he calls “the psychological problem of ‘scenario fulfillment,’ ” which causes people to absorb new information more easily if it matches their pre-existing ideas.

The SF writer Isaac Asimov first introduced the notion of ethical rules for robots in his 1942 short story “Runaround.” His famous Three Laws of Robotics state the following (a toy encoding follows the list):

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
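
The strict priority among the Laws is easy to state and, in toy settings, easy to encode. Here is a minimal sketch in Python; every class, field, and action name is my own illustration, not anything from Asimov or from robotics practice:

```python
# A toy encoding of Asimov's Three Laws as a lexicographic filter.
# Every class, field, and action name here is hypothetical, invented
# purely to illustrate the priority ordering.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    injures_human: bool        # direct harm to a human (First Law)
    prevents_human_harm: bool  # acting averts harm to a human (First Law, inaction clause)
    obeys_order: bool          # consistent with current human orders (Second Law)
    preserves_self: bool       # keeps the robot intact (Third Law)

def choose(actions: list[Action]) -> Action | None:
    # First Law: discard anything that injures a human.
    lawful = [a for a in actions if not a.injures_human]
    # First Law, inaction clause: if any admissible option prevents harm
    # to a human, only those options remain admissible.
    protective = [a for a in lawful if a.prevents_human_harm]
    if protective:
        lawful = protective
    # Second Law outranks Third: prefer obedience, then self-preservation.
    lawful.sort(key=lambda a: (a.obeys_order, a.preserves_self), reverse=True)
    return lawful[0] if lawful else None  # None: refuse to act at all

# An order to advance at some risk to the robot beats staying safe,
# because the Second Law outranks the Third.
options = [
    Action("advance on order", False, False, True, False),
    Action("stay put", False, False, False, True),
]
print(choose(options).name)  # -> advance on order
```

The trouble described above starts exactly where this sketch ends: real actions do not arrive pre-labeled with Boolean harm flags.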

The Laws of War (LOW) and Rules of Engagement (ROE) make programming robots to adhere to Asimov’s Laws far from simple. You want the robots to protect friendly forces and “neutralize” enemy combatants, which likely means harming human beings on the battlefield.

In his recent book, Governing Lethal Behavior in Autonomous Robots, Dr. Arkin explores a number of complex real-world scenarios where robots with ethical governors would “do the right thing,” in consultation with humans on the battlefield. These scenarios include ROE and LOW adherence (Taliban and Iraq), discrimination (Korean DMZ), and proportionality and tactics (urban sniper).

Arkin’s “rules” end up altering Asimov’s to look more like these (a hedged sketch of how such a gate might look in code follows the list):

  1. Engage and neutralize targets as combatants according to the ROE.
  2. Return fire with fire proportionately.
  3. Minimize collateral damage — intentionally minimize harm to noncombatants.
  4. If uncertain, invoke tactical maneuvers to reassess combatant status.
  5. Recognize surrender and hold POW until captured by human forces.
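
Arkin’s actual ethical-governor architecture is far more elaborate than anything shown here; the sketch below is only my reading of how the five rules above might gate a single fire decision, with every field, threshold, and number invented for the example:

```python
# A crude gate in the spirit of the five rules above. This is my own
# illustration, not Dr. Arkin's ethical-governor architecture; every
# field, threshold, and number is invented for the example.
from dataclasses import dataclass
from enum import Enum, auto

class Decision(Enum):
    ENGAGE = auto()
    HOLD_AND_REASSESS = auto()  # rule 4: maneuver to reassess status
    ACCEPT_SURRENDER = auto()   # rule 5: hold as POW for human forces
    NO_ENGAGE = auto()

@dataclass
class Situation:
    combatant_confidence: float  # 0..1 belief that the target is a combatant
    surrendering: bool           # target shows surrender behavior
    expected_collateral: float   # 0..1 estimated harm to noncombatants
    incoming_fire: float         # 0..1 scale of fire received
    proposed_response: float     # 0..1 scale of the proposed engagement

def govern(s: Situation,
           confidence_floor: float = 0.95,
           collateral_ceiling: float = 0.10) -> Decision:
    if s.surrendering:                                  # rule 5
        return Decision.ACCEPT_SURRENDER
    if s.combatant_confidence < confidence_floor:       # rule 4
        return Decision.HOLD_AND_REASSESS
    if s.expected_collateral > collateral_ceiling:      # rule 3
        return Decision.NO_ENGAGE
    if s.proposed_response > s.incoming_fire:           # rule 2, crudely
        return Decision.NO_ENGAGE
    return Decision.ENGAGE                              # rule 1, per ROE

# A confident identification, a low collateral estimate, and a
# proportionate response clear the gate; weaken any one and it does not.
print(govern(Situation(0.99, False, 0.02, 0.8, 0.6)))  # Decision.ENGAGE
```

Note how much of my objection hides in the inputs: someone has to produce combatant_confidence and expected_collateral, and those estimates are precisely the judgments I argue below cannot be fully specified in advance.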

Also by Dr. Arkin: Ethical Robots in Warfare. The passages that follow are his:

Is it not our responsibility as scientists to look for effective ways to reduce human inhumanity to other people through technology? And if such inhumanity occurs during warfare, what can be done? It is my belief that ethical military robotics can and should be applied towards achieving this end.

Should soldiers be robots? Isn’t that what they are trained to be?

Should robots be soldiers? Could they be more humane than humans? … It is my conviction that as these weaponized autonomous systems appear on the battlefield, they should help ensure that humanity, proportionality, responsibility, and relative safety, as encoded in the Laws of War, are extended during combat not only to friendly forces but equally to noncombatants and those who are otherwise hors de combat, with the goal being a reduction in loss of life in civilians and all other forms of collateral damage.

These views on the soldier’s profession are radically oversimplified, and though I am not a soldier, I would not be surprised if some soldiers took offense at the proposition that they are trained to be robotic. Personal responsibility and devolved authority are hallmarks of the American military tradition.

Arkin’s line of argument misses the point: when soldiers make lethal decisions under orders, they bear responsibility for those decisions. They are prepared for those decisions by a lifetime of education and participation in human society. Short of raising and educating robots as we would our own children, I am skeptical that programmers can provide a sufficiently complete specification of battlefield conduct to permit ethical behavior by robots.

We do not fully specify in advance the decision algorithms that soldiers use to evaluate conflicting sources of information, such as the reliability of informants, the chance that combatants are disguised among innocents, and the likelihood of a lethal attack at any given moment. In particular, I am skeptical that programmers can teach robots to make sufficiently nuanced decisions when identifying combatants, targets, and noncombatants. Errors in those judgments will have both tactical and strategic consequences for US forces. Combat robots should be kept on a tight leash.
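
To make the difficulty concrete, here is a minimal sketch of one fragment of that problem, assuming a naive Bayesian fusion of informant reports; the reliabilities, thresholds, and function names are all invented for illustration:

```python
# A sketch of why the leash is hard to avoid: fuse conflicting reports
# with Bayes' rule, then defer to a human operator whenever the posterior
# is not overwhelming. All numbers and names here are illustrative.

def posterior_combatant(prior: float, reports: list[tuple[bool, float]]) -> float:
    """Update P(combatant) from (says_combatant, source_reliability) pairs.

    Reliability r is treated as P(report is correct); each report
    multiplies the odds by r/(1-r) if it says 'combatant', and by
    (1-r)/r otherwise.
    """
    odds = prior / (1.0 - prior)
    for says_combatant, r in reports:
        ratio = r / (1.0 - r)
        odds *= ratio if says_combatant else 1.0 / ratio
    return odds / (1.0 + odds)

def decide(p: float, engage_above: float = 0.99) -> str:
    # Anything short of near-certainty goes back to a human operator.
    return "engage per ROE" if p >= engage_above else "defer to human"

# Two informants disagree; one is barely better than a coin flip.
p = posterior_combatant(0.30, [(True, 0.80), (False, 0.55)])
print(f"P(combatant) = {p:.2f}")  # -> P(combatant) = 0.58
print(decide(p))                  # -> defer to human
```

Even this cartoon shows the leash at work: with two conflicting sources the posterior lands near 0.58, nowhere close to a defensible threshold for lethal force, so the decision goes back to a human.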


About Ben Mazzotta

Ben Mazzotta is a postdoc at the Center for Emerging Market Enterprises (CEME). His study of the Cost of Cash is part of CEME's research into inclusive growth.

Discussion

2 thoughts on “State of the art in AI battlefield ethics”

  1. Ben, I found your post on military AI ethics interesting. My question would be, who exactly is supposed to be keeping combat robots ‘on a tight leash’? Is this not precisely the aim of Arkin’s focus on ‘governance’? When you state “Personal responsibility and devolved authority are hallmarks of the American military tradition,” are you suggesting it would be hard to program a robot with the requisite Individualism, compared with a human soldier? (See my post at fourcultures.com on the same subject for more, and you might also be interested in the site of the Cultural Cognition project at Yale…)

    Posted by fourcultures | August 15, 2009, 9:39 am
  2. You’ve put your finger on the right question: who is running the show with battlefield robots? My argument is not that _ethics_ are inappropriate for robots, but that _autonomy_ for battlefield robots is not always a good thing. War is a human activity. Humans must be responsible for decisions taken on the battlefield.

    The best trained troops in the world find the ethics and morals of tactical and strategic decisions extremely taxing, even with all the benefits of education, culture, and centuries of tradition. I would be very surprised to see a code of ethics that prepared robots to make ethical, human decisions in the face of unforeseen circumstances, i.e., the fog of war.

    In the sentence you quoted, I am suggesting that the comparison between robots and soldiers is mistaken. Infantrymen do follow orders, but not in the way that robots and computers do. Computers follow whatever orders they are given to the letter. Responsibility returns to the programmer and the operator in case of error. Soldiers obey the intent of the orders they are given, consistent with dictates of combat, law, honor, and duty. Personal responsibility is paramount in military training.

    I will certainly read your work at fourcultures.com and CC/Yale. Thank you for your insightful comments.

    Posted by Ben Mazzotta | August 17, 2009, 7:54 am

