Examining a “Reasoned Debate About Armed Autonomous Systems”

In the article “We should not ban ‘Killer Robots’, and here’s why”, Evan Ackerman responds to an open letter signed by over 2,500 AI and robotics researchers. He argues that offensive autonomous weapons should not be banned and that research on the technology should be supported. At the end of the article, Ackerman calls special attention to the term “killer robot”. He claims that some people in the AI and robotics field have been using it to frighten others into supporting a ban on autonomous weapons, and that we should instead “call for reasoned debate about armed autonomous systems”. While I agree that we should not let emotion drive the debate on this topic, that may be the only one of his points I agree with.

Ackerman’s main arguments were well summarized by Stuart Russell, Max Tegmark and Toby Walsh in their response to his article, published the same year:

“(1) Banning a weapons system is unlikely to succeed, so let’s not try. (2) Banning the technology is not going to solve the problem if the problem is the willingness of humans to use technology for evil. (3) The real question that we should be asking is this: Could autonomous armed robots perform more ethically than armed humans in combat? (4) What we really need, then, is a way of making autonomous armed robots ethical.”

Admittedly, humans, rather than the technology itself, are truly to blame when a technology is used for evil. But Ackerman seems to have missed that the AI and robotics researchers are not trying to ban the technology itself; they want to prevent a global arms race in AI weapons before it starts. Consider biological weapons: chemists and biologists push the boundaries of their fields every day, and the world community certainly supports that work. Yet we have banned biological weapons rather successfully, because they are notoriously dangerous and unethical. The same applies to autonomous weapons: AI is a fascinating field full of great opportunities, but autonomous weapons as a subfield would not be good for the world.

In addition, regarding Ackerman’s proposal to make autonomous weapons ethical, Russell, Tegmark and Walsh offer an excellent counterargument: “how would it be easier to enforce that enemy autonomous weapons are 100 percent ethical than to enforce that they are not produced in the first place?” From what I know about artificial intelligence, no matter how “intelligent” they are, AI systems still follow the logic designed and implemented by human programmers. Therefore, if Ackerman wishes to make autonomous weapons ethical, he will have to make sure that no designer meddles with that logic and turns the robot into a cold-blooded killing machine. Is that easier than simply banning all autonomous weapons? I can hardly say yes.
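To make that point concrete, here is a minimal, purely hypothetical sketch (the Target type, the may_engage function, and the 0.99 threshold are all invented for illustration, not drawn from any real system): whatever “ethics” such a weapon has ultimately lives in ordinary code like the guard clause below, and nothing in the AI itself prevents a designer from weakening or deleting it.

```python
# Hypothetical illustration only: an "ethical constraint" in an autonomous
# system is just ordinary code that its designers fully control.

from dataclasses import dataclass


@dataclass
class Target:
    is_combatant: bool   # classifier's label for the target
    confidence: float    # classifier confidence in [0, 1]


def may_engage(target: Target) -> bool:
    """Decide whether the system is permitted to engage a target."""
    # The system's entire "ethics" lives in this guard clause. A designer
    # who deletes or loosens it produces an unconstrained weapon, and no
    # amount of machine intelligence elsewhere in the system prevents that.
    if not target.is_combatant or target.confidence < 0.99:
        return False
    return True


print(may_engage(Target(is_combatant=True, confidence=0.50)))   # False
print(may_engage(Target(is_combatant=True, confidence=0.999)))  # True
```

Enforcing ethics thus reduces to enforcing what every programmer, everywhere, is allowed to write, which is exactly the enforcement problem Russell, Tegmark and Walsh point out.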

Examining this reasoned debate, I firmly believe that banning autonomous weapons is an urgent and important task. As someone who wants to work with machine learning and artificial intelligence in the future, I deeply agree with this line from the original letter: “most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so.” — Yuyan