On Autonomous Weapons

Our readings on autonomous weapons featured some very direct back-and-forth on the idea of banning “killer robots.” I think the issue can be split into three broad categories: the ethics of developing and using autonomous weapons, their standing under international law, and the practicality of their use and prohibition.

Ethics. Gubrud raises the idea that it is contrary to the principles of a shared humanity to allow machines to determine an end to human lives; there is some value in a human being the one to make the decision to kill. Opponents of this idea hold that humans killing other humans is no more ethical than robots killing humans, and that the substantive questions here are matters of practicality. Is it more ethical for a human to be the decision-maker, and if so, is that reason enough to oppose the development of these weapons?

International Law. Gubrud also presents the argument that autonomous weapons should already be illegal under international law. He argues that robots cannot satisfy the principles of distinction and proportionality that determine just conduct in war: AI can neither reliably distinguish combatants from noncombatants nor weigh collateral damage against military gain. Ackerman opposes this view in his article, claiming that codified Rules of Engagement are something an AI can certainly understand and base decisions on; Gubrud mentions the US’s “collateral damage estimation methodology”, which could serve as a basis for a robot to determine proportionality. Neither side claims that the data-gathering and decision-making abilities of the technology are yet adequate to meet legal requirements; in your opinion, will they ever be? What advantages would robots have in this regard, and what challenges would you anticipate for those working on this technology?

On a separate legal note, Gubrud also brings up the Martens Clause, which supports the idea that a strong public consensus against autonomous weapons can itself determine their standing in international law. What role should public opinion play in this legal question, and what else should be weighed alongside it?

Practicality. A number of issues bear on the practical implications of developing or banning autonomous weapons.

First, would a ban even be effective? Gubrud points to an already-developing international consensus for caution with the technology as a sign that a ban could take hold and work, and he, Russell, Tegmark, and Walsh point to successes in banning other types of weapons. Ackerman counters that robots offer too much of a technological advantage for any state to resist, and that the technology is too accessible, even to ordinary citizens, to control effectively; in his view, attempting a ban would waste effort better devoted to preventing abuse. We’ve studied bans on nuclear, chemical, and biological weapons; is controlling autonomous weapons fundamentally different? What effects would a ban have on the use of robots for domestic suppression? Terrorism? Are there alternative means to prevent abuses?

Another aspect to consider is the effect on international stability. With no emotional attachment to robots, and little political cost for their loss, will states undertake riskier, more aggressive, and more frequent military actions? What are the prospects for an arms race involving dozens of countries, similar to the broad interest and investment in drone technology today?

What will be the effects on consumer technology? The open letter opposing the development of autonomous weapons argues that public backlash against killer robots would hurt support for the fields of robotics and AI as a whole. Ackerman, for his part, alludes to the idea that military research is a key driver of progress in consumer technology.

Finally, is there any aspect of the debate that these authors failed to address? — Trevor