History and Ethics of Military Robots
This article gives an overview of U.S. military robots and discusses some of the ethical issues surrounding their use in war:
As military robots gain more and more autonomy, the ethical questions involved will become even more complex. The U.S. military bends over backwards to figure out when it is appropriate to engage the enemy and how to limit civilian casualties. Autonomous robots could, in theory, follow the rules of engagement; they could be programmed with a list of criteria for determining appropriate targets and when shooting is permissible. The robot might be programmed to require human input if any civilians were detected. An example of such a list at work might go as follows: “Is the target a Soviet-made T-80 tank? Identification confirmed. Is the target located in an authorized free-fire zone? Location confirmed. Are there any friendly units within a 200-meter radius? No friendlies detected. Are there any civilians within a 200-meter radius? No civilians detected. Weapons release authorized. No human command authority required.”
Such an “ethical” killing machine, though, may not prove so simple in the reality of war. Even if a robot has software that follows all the various rules of engagement, and even if it were somehow absolutely free of software bugs and hardware failures (a big assumption), the very question of figuring out who an enemy is in the first place—that is, whether a target should even be considered for the list of screening questions—is extremely complicated in modern war. It is essentially a judgment call. It becomes further complicated as the enemy adapts, changes his conduct, and even hides among civilians. If an enemy is hiding behind a child, is it okay to shoot or not? Or what if an enemy is plotting an attack but has not yet carried it out? Politicians, pundits, and lawyers can fill pages arguing these points. It is unreasonable to expect robots to find them any easier.
The legal questions related to autonomous systems are also extremely sticky. In 2002, for example, an Air National Guard pilot in an F-16 saw flashing lights underneath him while flying over Afghanistan at twenty-three thousand feet and thought he was under fire from insurgents. Without getting the required permission from his commanders, he dropped a 500-pound bomb on the lights. The lights turned out to be Canadian troops on a night training exercise. Four were killed and eight wounded. In the hearings that followed, the pilot blamed the ubiquitous “fog of war” for his mistake. It didn’t matter; he was found guilty of dereliction of duty.
Change this scenario to an unmanned system, and military lawyers aren’t sure what to do. As one Navy officer asks, “If these same Canadian forces had been attacked by an autonomous UCAV, determining who is accountable proves difficult. Would accountability lie with the civilian software programmers who wrote the faulty target identification software, the UCAV squadron’s Commanding Officer, or the Combatant Commander who authorized the operational use of the UCAV? Or are they collectively held responsible and accountable?”
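The targeting checklist quoted above reads almost like pseudocode, and it is worth seeing how little code it takes to express. Below is a minimal sketch in Python of what such a rules-of-engagement screen might look like, assuming idealized sensor outputs; the Target fields, the T-80 check, and the 200-meter radii are hypothetical stand-ins taken straight from the excerpt, not from any real targeting system.

```python
from dataclasses import dataclass

@dataclass
class Target:
    """Hypothetical, idealized sensor picture of a candidate target."""
    vehicle_type: str            # classifier output, e.g. "T-80"
    in_free_fire_zone: bool      # location checked against authorized zones
    friendlies_within_200m: int  # count of detected friendly units
    civilians_within_200m: int   # count of detected civilians

def decide(target: Target) -> str:
    """Walk the quoted checklist in order; any failed check withholds release."""
    # "The robot might be programmed to require human input if any
    # civilians were detected."
    if target.civilians_within_200m > 0:
        return "HUMAN INPUT REQUIRED"
    if (target.vehicle_type == "T-80"            # Soviet-made T-80 tank?
            and target.in_free_fire_zone          # authorized free-fire zone?
            and target.friendlies_within_200m == 0):  # no friendlies in 200 m
        return "WEAPONS RELEASE AUTHORIZED"
    return "HOLD FIRE"

# The exact scenario narrated in the excerpt:
print(decide(Target("T-80", True, 0, 0)))  # -> WEAPONS RELEASE AUTHORIZED
```

The brittleness of the sketch is the point: every field assumes that the hard problem the article raises, deciding who counts as an enemy in the first place, has already been solved upstream by perfect identification software.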
The article was adapted from P. W. Singer’s book Wired for War: The Robotics Revolution and Conflict in the 21st Century, published this year. I bought the book, but I have not read it yet.
Related is this paper on the ethics of autonomous military robots.