Can programmers specify a sufficient code of ethics to govern autonomous robots in battle? I believe the reasoning falls down in many places.
First, life is far more complex than the proposed codes below permit. "Harm" is an extremely complex concept, grounded not only in physiology but also in human dignity, local norms, and learned patterns of interaction. Obedience alone doesn't cut the mustard.

Second, ethical dilemmas are hard problems: experts with a lifetime of experience can disagree on the content of the law, the hierarchy of norms, and the intent of the lawmaker.

Third, responsibility for a robot's actions cannot rest fully with the robot in our system. Both the manufacturer/programmer and the operator of the robot share in the responsibility for its actions, and they are full and complete members of society. Dogs and children often put their owners and parents on the hook for negligent behavior precisely because they lack the ethical competence of full personhood. Since we assume they are incompetent to make good decisions, ethics requires adults to treat them differently. Who would want K-9 commandos, however lethal in combat, to have tactical authority over target identification and the use of lethal force?
Dr. Ronald C. Arkin, Regents' Professor and Associate Dean for Research at the School of Interactive Computing at Georgia Tech, was recently profiled in an h+ article…
As reported in a recent New York Times article, Dr. Arkin describes some of the potential benefits of autonomous fighting robots. They can be designed without a sense of self-preservation and, as a result, with "no tendency to lash out in fear." They can be built without anger or recklessness, and they can be made invulnerable to what he calls "the psychological problem of 'scenario fulfillment,'" which causes people to absorb new information more easily if it matches their pre-existing ideas.
The science fiction writer Isaac Asimov first introduced the notion of ethical rules for robots in his 1942 short story "Runaround." His famous Three Laws of Robotics state the following:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
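Asimov's Laws amount to a strict priority ordering, which can be sketched in a few lines of code. Everything below is a hypothetical toy, not any real robot's control logic: the `Action` model and the `permitted` function are my own illustrative assumptions. Its very simplicity is the point; the single boolean `harms_human` hides all the real-world complexity this essay argues about.

```python
# Toy sketch (not a real robotics API): Asimov's Three Laws as a
# strict priority ordering. Every field of Action is an assumed,
# pre-digested judgment -- the hard part the sketch glosses over.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool           # would the action injure a human?
    inaction_harms_human: bool  # would refusing it let a human come to harm?
    ordered_by_human: bool      # was the action ordered by a human?
    endangers_robot: bool       # would the action endanger the robot itself?

def permitted(action: Action) -> bool:
    """Check an action against the Three Laws, strictly in priority order."""
    # First Law: never injure a human...
    if action.harms_human:
        return False
    # ...and never allow harm through inaction; this overrides the lower laws.
    if action.inaction_harms_human:
        return True
    # Second Law: obey human orders (the First Law is already satisfied here).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two.
    return not action.endangers_robot
```

The ordering does all the work: a lower law can never override a higher one, which is exactly why a battlefield robot, whose mission requires harming humans, cannot run these laws unmodified.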
The Laws of War (LOW) and Rules of Engagement (ROE) make programming robots to adhere to Asimov's Laws far from simple: you want robots to protect friendly forces and "neutralize" enemy combatants, which likely means harming human beings on the battlefield.
In his recent book, Governing Lethal Behavior in Autonomous Robots, Dr. Arkin explores a number of complex real-world scenarios where robots with ethical governors would "do the right thing," in consultation with humans on the battlefield. These scenarios include ROE and LOW adherence (Taliban and Iraq), discrimination (Korean DMZ), and proportionality and tactics (urban sniper).
Arkin’s “rules” end up altering Asimov’s rules to look more like these:
- Engage and neutralize targets as combatants according to the ROE.
- Return fire with fire proportionately.
- Minimize collateral damage — intentionally minimize harm to noncombatants.
- If uncertain, invoke tactical maneuvers to reassess combatant status.
- Recognize surrender and hold POWs until they can be taken into custody by human forces.
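Taken together, Arkin's rules read like an ordered decision procedure. The sketch below is purely illustrative: the `Status` categories, the `decide` function, and the boolean proportionality flag are my assumptions, not Arkin's actual ethical-governor architecture. Note that the sketch takes `status` as a given input, and the essay's later objection is precisely that computing it reliably is the hard part.

```python
# Illustrative sketch (my assumptions, not Arkin's implementation) of
# his battlefield rules as an ordered decision procedure.

from enum import Enum, auto

class Status(Enum):
    COMBATANT = auto()
    NONCOMBATANT = auto()
    SURRENDERED = auto()
    UNCERTAIN = auto()

def decide(status: Status, proportional_option_exists: bool) -> str:
    """Apply the rules in order, returning a human-readable decision."""
    if status is Status.SURRENDERED:
        return "hold as POW until human forces take custody"  # surrender rule
    if status is Status.NONCOMBATANT:
        return "do not engage"                                # discrimination
    if status is Status.UNCERTAIN:
        return "maneuver to reassess combatant status"        # uncertainty rule
    if not proportional_option_exists:
        return "withhold fire"                                # proportionality
    return "engage per ROE"                                   # combatant
```

Everything contested in this essay is hidden inside the `status` argument: the sketch assumes a correct classification has already been made before any rule fires.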
Also by Dr. Arkin: Ethical Robots in Warfare.
Is it not our responsibility as scientists to look for effective ways to reduce human inhumanity to other people through technology? And if such inhumanity occurs during warfare, what can be done? It is my belief that ethical military robotics can and should be applied towards achieving this end.
Should soldiers be robots? Isn’t that what they are trained to be?
Should robots be soldiers? Could they be more humane than humans? … It is my conviction that as these weaponized autonomous systems appear on the battlefield, they should help ensure that humanity, proportionality, responsibility, and relative safety, as encoded in the Laws of War, are extended during combat not only to friendly forces but equally to noncombatants and those who are otherwise hors de combat, with the goal being a reduction in loss of life in civilians and all other forms of collateral damage.
These views of the soldier's profession are radically oversimplified, and though I am not a soldier, I would not be surprised if some took offense at the proposition that they are trained to be robotic. Personal responsibility and devolved authority are hallmarks of the American military tradition.
Arkin’s line of argument misses the point: when soldiers make lethal decisions under orders, they bear responsibility for those decisions. They are prepared for those decisions by a lifetime of education and participation in human society. Short of raising and educating robots as we would our own children, I am skeptical that programmers can provide a sufficient, complete description of battlefield behavior for robots that would permit ethical behavior.
We do not fully specify in advance the decision algorithms that soldiers use to evaluate conflicting sources of information, such as the reliability of informants, the chance that combatants are disguised among innocents, and the likelihood of a lethal attack at any given moment. In particular, I am skeptical that programmers can teach robots sufficiently nuanced decision-making to distinguish combatants, targets, and noncombatants. Errors in those judgments will have both tactical and strategic consequences for US forces. Combat robots should be kept on a tight leash.