To Automate or Not to Automate …

Applications of Artificial Intelligence are increasingly appearing in new and surprising fields. Notably, the use of AI in weapon systems is currently being researched and developed, triggering a polarizing debate. On the one hand, Evan Ackerman argues in favor of autonomous weapons; on the other, Stuart Russell et al. support banning autonomous weapons instead.

Ackerman begins by presenting an open letter from the 2015 International Joint Conference on Artificial Intelligence, which details potential disadvantages of autonomous weapons. The declaration acknowledges that autonomous weapons are relatively cheap and easy to create, and will soon be in production worldwide, but nonetheless proceeds to offer criticism. Cautioning against using AI in autonomous weapons, experts warn that a “global AI arms race” is impending and dangerous. Ackerman, however, is unconvinced that banning autonomous weapons will successfully deter nefarious actors. Furthermore, he compares the skills, judgment, and ethics of “armed autonomous humans” with those of “armed autonomous robots,” a juxtaposition in which the machine edges out the man. Ultimately, Ackerman finds more positive attributes in robots than he does in humans: robots are not prone to the vagaries of emotion or fallibility, and ought, therefore, to operate more safely and with fewer mistakes. In the event a robot does commit an error, machine learning can ensure that each robot in a hypothetical fleet never makes that error again. Ackerman leaves readers wondering whether these autonomous robots can be as ethical as humans, if not more so.

Russell et al. respond to Ackerman by advocating for an international treaty to limit access to autonomous weapons, avoid a potential AI arms race, and prevent mass production of autonomous weapons. Although any ban, including one on autonomous weapon production, would be challenging to enforce, Russell et al. maintain that it would not be “easier to enforce that enemy autonomous weapons are 100 percent ethical.” Similarly, the authors conclude, “One cannot consistently claim that the well-trained soldiers of civilized nations are so bad at following the rules of war that robots can do better, while at the same time claiming that rogue nations, dictators, and terrorist groups are so good at following the rules of war that they will never choose to deploy robots in ways that violate these rules.” Though people are not perfect, the proliferation of autonomous weapons, as with any high-powered weaponry, might present a formidable challenge to peace.

Autonomous weapons, if deployed, would certainly transform the landscape of warfare. Perhaps a system could be developed that incorporates the counsel and recommendations of AI while maintaining human oversight. Decision makers could thus draw on the analytical strengths of modern technology while retaining final judgment informed by experience and context. While benefits may include fewer human casualties and war fatalities, might world leaders be more inclined to approach conflicts with war instead of diplomacy? Are there other pros and/or cons to the autonomous weapons debate that were not identified in the readings? — Marion