Examining a “Reasoned Debate About Armed Autonomous Systems”

In the article “We should not ban ‘Killer Robots’, and here’s why”, Evan Ackerman responded to an open letter signed by over 2500 AI and robotics researchers. He argued that offensive autonomous weapons should not be banned and that research on such technology should be supported. At the end of the article, Ackerman calls special attention to the use of the term “killer robot”. He claims that some people working in the AI and robotics field have been using this term to frighten others into agreeing to a ban on autonomous weapons, and that we should really “call for reasoned debate about armed autonomous systems”. While I agree with him that we should not let emotion drive our debate on this topic, this might be the only one of his points that I agree with.

Ackerman’s main arguments have been very well summarized by Stuart Russell, Max Tegmark and Toby Walsh in their response to his article, published in the same year:

“(1) Banning a weapons system is unlikely to succeed, so let’s not try. (2) Banning the technology is not going to solve the problem if the problem is the willingness of humans to use technology for evil. (3) The real question that we should be asking is this: Could autonomous armed robots perform more ethically than armed humans in combat? (4) What we really need, then, is a way of making autonomous armed robots ethical.”

Admittedly, humans, rather than the technology itself, are truly the ones to blame when a technology is used for evil. But Ackerman might have failed to understand that the AI and robotics researchers are not trying to ban the technology itself; they just want to prevent a global arms race in AI weapons before it starts. Think about biological weapons: chemists and biologists push the boundaries of technology further each day, and the world community is certainly supportive of that. However, we have rather successfully banned biological weapons, because they are notoriously dangerous and unethical. The same applies to autonomous weapons: AI is a fascinating field, where tons of great opportunities arise, but autonomous weapons as a subfield would not be beneficial for our world.

In addition, regarding Ackerman’s proposition of making autonomous weapons ethical, Russell, Tegmark and Walsh have made an excellent counterargument: “how would it be easier to enforce that enemy autonomous weapons are 100 percent ethical than to enforce that they are not produced in the first place?” From what I know about artificial intelligence, no matter how “intelligent” they are, AIs still have to follow the logic designed and enforced by human programmers. Therefore, if Ackerman wishes to make autonomous weapons ethical, he will have to make sure that no AI designers meddle with the logic and turn their robot into a cold-blooded killing machine. Is that an easier task than simply banning all autonomous weapons? I can hardly say yes.

When I examine this reasoned debate, I definitely believe that banning autonomous weapons is an urgent and important task. As someone who wants to work with machine learning and artificial intelligence in the future, I deeply agree with this line from the original letter: “most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so.” — Yuyan

29 thoughts on “Examining a ‘Reasoned Debate About Armed Autonomous Systems’”

  1. Like Yuyan, I take issue with a lot of what Evan Ackerman believes about the usefulness of autonomous weapons. His argument is very reliant on a “reasoned debate” about these autonomous machines, yet he seems to ignore reason in some instances.

    For example, Ackerman makes a point that technology should not be ignored due to possible negative consequences, i.e. you should not “cover your eyes and start screaming “STOP!!!” if you see something sinister on the horizon.” To do this, according to Ackerman, would be to ignore “so much simultaneous potential for positive progress,” in the field of… autonomous weapons?

    If the very thing that we regard as sinister is the invention of autonomous weapons, I do not see reason to pay attention to the potential for great leaps in the development of such weapons. If Ackerman means the potential for development of A.I. in general, then I believe there are better ways to go about furthering our understanding of this technology besides building weapons out of it.

    Obviously, we have made the Ackerman mistake before: the nuclear arms race was the result of progress through the development of weapons. And presently, that technology is almost universally regarded (by the scientific community) as harmful to the prolonged well-being of humanity.

    I think it is valuable to compare these two technologies, because both have the potential to do a massive amount of harm. It is reasonable to believe that maliciously programmed robots could commit atrocities on the level of genocides. Yuyan mentioned this, and I think this is really reason enough to ban any further development.

    The term “killer robot” seems to be a controversial one. Although it may be used to evoke an emotional response from the less educated, I am not sure that this is always a bad thing. A bigger mistake than that phrase would be if the public were presented a view of A.I. weapons that did not convey the potential killing power of these robots. Because that’s what they are: robots that kill, without any real ethical judgement at this point.

  2. I would argue on the side of Evan Ackerman. While I think he presents some of his points in an unproductive way (Sam rightly took issue with this), his overall viewpoint on the matter should be taken seriously.

    While automated weaponry seems to make many cringe, some have condemned automated cars just as sweepingly. This is why rare car crashes involving self-driving vehicles so often make (national!) headlines. And this is not entirely unfair – if I am going to be hurt in a car crash, I’d probably prefer that I made a mistake and paid for it, instead of a car malfunctioning and causing me injury.

    But automated cars can do things that humans just can’t. Check out this video of a Tesla braking and saving its passenger, having predicted a possible accident two cars ahead:

    http://mashable.com/2016/12/27/tesla-predicts-crash-ahead-video/#U9e_bEBv6Oqm

    Similarly, automated weapons do things that humans can’t. After seven years of the War on Terror, forty-five soldiers had lost their lives to friendly fire. Humans can’t guarantee their personal weapons, drones, or missiles won’t fire if they accidentally aim at an ally. Collateral damage could be reduced by more intelligent targeting and firing systems.

    Ultimately, automated weapons are an extension of human warfare, and are still under the control of humans in their design (much unlike biological weaponry, to which they have been unfairly compared). Banning these weapons may be easier than making sure that they are coded ethically; however, humanity may lose out on some of the many positive solutions to problems that people have been complaining about in warfare for many years (in a parallel, imagine how much traffic would decrease if cars were able to start moving right when a light turns green, or when the car in front starts accelerating, rather than relying on an easily distracted driver).

  3. I agree with Yuyan and Sam in taking issue with Ackerman’s argument against the banning of autonomous weapons. One particular point of interest to me was Ackerman’s point that the development of autonomous armed robots may be viewed as an ethical obligation, should they be found capable of reducing noncombatant casualties. Ackerman argues not for a ban on these types of weapons, but rather a way to make autonomous armed robots “ethical” – enhancing their capability to identify hostile enemy combatants and situations where force may be employed under the existing Rules of Engagement.

    I take issue with Ackerman’s argument and its assumption that it is possible for artificial intelligence to be “ethical,” as AI systems are not moral agents capable of making ethically charged decisions. AI systems are – as Russell, Tegmark, and Walsh elaborate upon in their response to Ackerman – “incapable of exercising the required judgement.” While the three authors entertain the possibility of this technology developing in the future, Russell, Tegmark, and Walsh note that an AI system capable of making such judgements would be resting upon “killing opportunities” occurring in identical settings in the past. I would argue, however, that no amount of programming can produce the capability of ethical decision-making in an AI system. The issue of ethics in war and the Rules of Engagement goes beyond the settings of “killing opportunities.” There is an inherent gray zone that no robot can navigate and make decisions in.

  4. While I am more open to the possibility of the automated cars that John E mentioned, automated weapons do much more harm than good. Besides the obvious worry of struggling to control or place moral limitations on these automated weapons, I find it a hollow argument that automated weapons would really make our armed forces more effective in modern warfare.

    Many people have chastised the U.S. government for decades over its tendency to resort to bombing, and now drone strikes, to win wars, claiming that this is very inefficient and does nothing but turn the local people – who are not even associated with the aggressors – against the Americans. Many of these same groups propose that the best way to fight modern warfare, especially guerrilla warfare, is not with distant bombs and machinery but by personally engaging with and empowering the local people to win the fight on a grassroots level.

    If we really believe that the U.S. should limit its conventional methods of bombing to win modern conflicts, then it does not seem like automated weapons would be at all important. They clearly would not be able to communicate better with the local people and really win over allies against nearby terrorist groups, and they clearly send the wrong message in terms of how the U.S. aims to provide security and stability in a region after a conflict. In this way, robot fighting machines would seem to be a step backward in warfare rather than a progressive step forward.

  5. I agree with Yuyan and the Russell article that Ackerman’s dismissal of any attempt to regulate the development and proliferation of autonomous weapons is flawed. However, I am intrigued by the potential benefits associated with autonomous weapons and am thus not entirely convinced that a complete ban is appropriate at this time. Ackerman asks, “If autonomous armed robots really do have at least the potential to reduce casualties, aren’t we then ethically obligated to develop them?” If autonomous armed robots can better follow the Rules of Engagement, remain unemotional in combat, and make more accurate judgements about when to use force, it is possible that these robots may operate in a more predictable, ethical, and safer manner than human soldiers. If parties on both sides of a conflict** utilize these weapons, there may also be enormous potential for lessening human casualties.

    Perhaps a treaty is needed to negotiate a ban on armed autonomous weapon production and proliferation for a certain number of years (e.g., 10 years). This initial ban may be justified in light of the need to understand the potential for autonomous weapons to reduce casualties before more definitive decisions can be made about their use. After significant research and development has been pursued, international actors may return to the question of whether these autonomous weapons have a purpose for good in modern warfare, or if they deserve to be banned entirely.

    (**Note: this is another point to be considered—will the kind of military capacity enhanced by these weapons favor more developed/wealthier states? Is it reasonable to foresee conflicts where states of all levels of development have autonomous weapon capabilities?)

  6. I agree with many of the points that my classmates have brought up. One thing that has not been extensively discussed (probably because it goes well beyond the scope of this class) is the meaning of war removed from a human context. In dealing with guerrilla/insurgency warfare this may not be applicable, but one could imagine that if a war broke out between two actual nation states, the deployed armies could be entirely made up of robots. Of course the benefits would be clear – no one (except for collateral damage) would be injured or killed. At the same time, the objective becomes unclear. If a battle took place on a battlefield (as opposed to around a besieged city) the goal would presumably be to destroy all of the opposing side’s robots. But viewed more abstractly, this is really just a contest of engineering skill and capital. There is no ideology, no call of duty, just basic economics.

    This would be the case if the robots were made “ethical,” as Ackerman suggests. This would mean that the robots would not target civilians, food sources, hospitals, etc. The same criticism about reducing warfare to economics can be made about the atomic bomb – with the bomb, power is measured in the kilograms of plutonium and range of missiles, which are acquired with purchasing and engineering power. I struggle to come up with legitimate reasons for territorial wars, but do not have any illusions about a world without war. That being said, while countries should be able to protect themselves, we should use this technology as a lens through which we examine our society and question why we engage in violence.

  7. I find this issue so interesting, and I did not know much about it before reading the articles. After reading the articles and responses from classmates, I cannot say that I have a strong opinion either way. I definitely see some holes in Ackerman’s argument, but I am also not as inclined to dismiss it completely as Yuyan and some of the others suggest.

    I particularly want to point out Ackerman’s argument about how banning the technology is unlikely to succeed. I think the point is that, while the United States may not be interested in developing such a technology, what happens if an aggressor such as North Korea or even China develops it? Warfare would become so uneven, and the accuracy/efficiency coupled with the diminished risk would make any kind of human defense extremely difficult. I do also agree, though, that the possibility of these systems malfunctioning is very dangerous, and a decreased risk of going to war could lead to many more wars and greatly increase the possibility of reckless/unintended destruction.

    I agree with Allison that there are great possibilities for this technology as well if it removes the human component of war. Instead of killing humans, robots would simply battle. But that does lead to questions of war and strategy: is there even a possible endgame in that scenario?

    In the end, I tend to say that we should stay away from these “killer robots,” but I am not convinced that we should totally dismiss any arguments otherwise or discount the technology altogether. I feel as though there is potential in a greater use of robots – even if only guided by humans – to lessen the severity and devastating effects that war has on families. Drones and other devices have already been extremely effective in this way. While a transformation in warfare may seem daunting, it is also not unheard of. And, ultimately, anything that could potentially save human life deserves a second look.

  8. Practically speaking, whether or not AI and Robotics researchers sign a letter to ban “killer robots” will not deter capable nations from developing these weapons. It is in the interest of technologically advanced nations to develop this type of weaponry for several reasons. First, it decreases the likelihood of human casualties, which often negatively impact public opinion and therefore military operations. Next, it simply offers a technical advantage over adversaries. Finally, these weapons could be more accurate in destroying identified targets.

    Ultimately, a ban on a specific type of weaponry is akin to treating a symptom without addressing the root cause of the problem. As long as there remain adversarial contests between nations, new types of weaponry will be in high demand.

    The most terrifying part of this advancement is the ability for a state to use this type of autonomous weaponry to systematically wipe out a population. Like the original letter mentions, “killer robots” could be utilized to carry out campaigns of ethnic cleansing. Though Ackerman promotes the ethical use of autonomous weapons, international agreements regulating morality are difficult to achieve. As a result, I’m pretty pessimistic about the proliferation of AI weaponry. I’m not convinced by the argument that it can be universally banned, and I’m equally unconvinced that it can be universally regulated to promote ethical use.

  9. I’m not ready to write off the possibility of reduced collateral damage via the utilization of automated weapons, but I also have to acknowledge that the dissemination of advanced technologies to rogue groups has occurred in the past and could occur once again. These groups lack the technological knowledge to build such weapons themselves, but they have gained access to them through corrupt troops and through the firearms black market. For example, the Taliban was armed with American-made M-16 assault rifles, purchased off of corrupt Afghan troops and the black market, and this more than doubled their firing range after they switched from the Russian-made AK-47. Additionally, a U.S. audit revealed that the Pentagon lost track of many of the firearms that it had supplied to the Afghan troops, showing negligent record keeping.

    For more information, check out this article: http://www.newsweek.com/2015/05/29/arming-enemy-afghanistan-332840.html

    Proliferation issues from corruption, black markets, and negligence could all apply to autonomous weaponry. While traditional rogue groups would lack the skill to change the coding of “ethical” autonomous weapons, there are plenty of coders on the black market that would have this capability, and with increased radicalization and recruitment methods taking place internationally, who knows what capabilities ISIS and other militant groups have access to.

    There is a place for autonomous capabilities in the military. I think automated technology could reduce the risk to Explosive Ordnance Disposal units (check out the documentary Bomb Patrol Afghanistan to gain insight into the dangers these troops face). The problem lies in the fact that the technology is expensive, and therefore the robots would probably be weaponized in order to protect themselves. Even if they were weaponized with stun guns or rubber bullets to stop terrorists from thwarting their mission, the technology could easily then be adapted to accommodate lethal weapons. Perhaps there is a way to make autonomous technologies protect themselves without a weaponized component, such as advancements in shielding or escape techniques, so that the military can still benefit from them.

  10. As it stands, the US military and other militaries already widely accept and utilize in battle some forms of autonomous weaponry. Weapons systems like sentry guns (widely popularized through video games like Call of Duty) are employed in the Demilitarized Zone in Korea to protect coalition forces on the Southern side of the border from ambushes from the North, adding at the least another set of eyes and at the most a critical layer of protection. In Israel, partly autonomous systems like Iron Dome have already succeeded in saving hundreds of lives, in that case against rockets fired from Palestinian territories by terrorist groups against large population centers. And the US Air Force has used guided smart bombs with autonomous systems to minimize civilian casualties and accomplish precision strikes against militants with intent to harm the United States or innocent non-combatants.

    Proliferation of these weapons is possible, and neither the US nor its allies should ever expect to have a perfect monopoly on these technologies, be it as a result of misplaced, lost or stolen weapons, technologies stolen through military-industrial espionage, or the legal sale of these systems. These weapons can make warfare simultaneously safer and more dangerous by bringing more directed firepower to bear more quickly and more accurately. In a nightmare scenario a malicious actor, or the technology on its own, could turn these systems against innocents, but that much has been clear since we invented weapons of any sort. The point I think Ackerman is trying to make is that abandoning the pursuit of technological innovation because of the inevitability of weaponization robs us of the overwhelmingly beneficial aspects of innovation—a point I think he’s categorically correct in making. This applies to technologies like GPS, the Internet, and the cell phone, all of which were made possible because of grants for military research that focused on military and weapon applications, but that provide tremendous peaceful value that many agree far outweighs any sin related to their military origins.

    To say that we ought to pursue technologies with weaponization in mind is of course somewhat wrong, or at least involves a value judgment on the part of the developer. However, efforts to ban them outright on the premise that weapons of all kinds cause harm (as these autonomous ones do) also neglect the relative benefits of autonomous weapon technologies, and of automation in general. Moreover, from a practical view, such a move potentially serves to stymie research that depends on defense applications and related funding, and that could add real value to non-violent, civilian spheres. We ought to be careful in making these value judgments, and understand that while the business of weaponization is not one anyone looks forward to, some weapons still provide comparative benefits over others. Autonomous weapons systems, in some contexts, stand to do just that.

  11. The Russell-Tegmark-Walsh response point I found most instructive was, “His argument… after the advent of autonomous weapons, the specific killing opportunities—numbers, times, locations, places, circumstances, victims—will be exactly those that would have occurred with human soldiers, had autonomous weapons been banned. This is rather like assuming that cruise missiles will only be used in exactly those settings where spears would have been used in the past.” This biting criticism of Ackerman’s logic reminds me of the grey-white-black ball metaphor from the first weeks of class. Already, all these arguers agree this ball is grey. But the evidence and opinions proclaiming the risk that this could be a damning black ball convince me. To believe that autonomous weapons, which Ackerman lauds for needing no sleep and for avoiding the slow processes of human judgment, would not inherently expand the theater of war is, to me, laughable. Though a highly debated and politicized subject, gun control provides a useful analogy. America’s gun deaths per capita are astronomical in comparison to our international peers, a statistic that finds easy explanation in the preponderance of firearms throughout American society. Gun bans, limitations, and control measures at local levels have proved useful for minimizing gun deaths within the target area. Of course, the problem remains that those who truly want to cause destruction at all costs will get the required weapon, ban or no ban. But if limiting human death is the goal, making it harder for immoral actors to acquire weapons proves useful. Generally, I ask, why open this Pandora’s box? With so many experts in the field decrying the advance of this technology based on their assessments of the risk, it seems hubristic to advocate a policy that contradicts their conclusions.

  12. When deciding to ban “killer robots,” I think we need to think about the technical limitations of developing such technology. Comparisons have been drawn between self-driving cars and armed autonomous systems, but driving and killing are two fundamentally different actions. Driving comes with a set of laws and procedures that makes decision-making relatively black-and-white, at least compared to killing. Training a robot to kill intelligently comes with a whole set of issues associated with human decision-making, which is what we’re trying to avoid in the first place with autonomous systems.

    Many people have brought up the “ethics” of this situation – how can a robot make an ethical decision? Well, via proper machine learning, a robot can learn from human decisions. Then the real problem arises: can we trust humans to make ethical decisions? Is killing ever an ethical action in itself? How can a robot “ethically” do something that is inherently unethical? Is a human making the same decision any more ethical? As Allison mentioned, rather than condemn autonomous systems, we need to re-examine the question of why societies engage in violence in the first place if we want to talk about ethics. Then, armed autonomous systems can be evaluated accordingly.

    I hesitate to say that armed autonomous systems should definitely be banned, for as Collin states, similar technologies like drones have proven to be quite effective in minimizing risk and costs. This is definitely an interesting topic that I haven’t thought about much, and I’m curious to see the direction in which it heads.

  13. I have a particular problem with Ackerman’s second point, summarized by Russell and co. as “(2) Banning the technology is not going to solve the problem if the problem is the willingness of humans to use technology for evil,” or as Ackerman put it himself: “And that’s the point that I keep coming back to on this: blaming technology for the decisions that we make involving it is at best counterproductive and at worst nonsensical.”

    The problem I have with this point is that it fundamentally denies the value of internationally (or nationally) ‘banning’ a practice, purpose, or development of a certain technology. The purpose of banning a certain technology/weapon is to signal, in this case due to its international nature, to the rest of the world that this practice or weapon will not be tolerated. Oftentimes, as a result of the ban, a certain stigma attaches to the practice and weapon that dramatically increases the cost of using the banned weapon. A very relevant example would be Syria’s most recent use of chemical weapons, which drew, rightfully so, almost unanimous backlash from world leaders and the international community. For example, even though the Syrian Civil War has produced hundreds of thousands of deaths, the famous ‘red line,’ which would theoretically prompt some sort of U.S. intervention, was drawn at the use of chemical weapons. The U.S. (and the world) sat idle and watched the slaughter in Syria, but finally intervened after 70 people died from a chemical attack (this is not to downplay the gruesome and horrific effects of chemical weapons, but merely to point out how powerful the stigma attached to chemical weapons is). Another relevant and ongoing example would be the ‘gun culture’ that currently permeates the United States, which Adam alluded to. In this case, there is no stigma attached to the ownership and use of guns. Instead, the behaviors and attitudes of U.S. civilians are guided by a culture that sees gun ownership and use as normal and even attractive. Critics argue that this culture accounts for the widespread proliferation of guns and gun violence in comparison to the rest of the world.

    Of course, overall, my above point is moot if the value gained from armed autonomous systems exceeds the costs and downsides. However, the main point is that internationally banning ‘killer robots’ can be extremely powerful in preventing future use and deployment for violent and unethical purposes.

  14. Ackerman’s arguments are weak, and I will decompose each point he presents.

    “(1) Banning a weapons system is unlikely to succeed, so let’s not try. (2) Banning the technology is not going to solve the problem if the problem is the willingness of humans to use technology for evil. (3) The real question that we should be asking is this: Could autonomous armed robots perform more ethically than armed humans in combat? (4) What we really need, then, is a way of making autonomous armed robots ethical.”

    (1) is weak since bans on biological and chemical weapon systems have been, for the most part, successful. (2) oversimplifies the issue, stating that human willingness to use technology for evil is the most significant and enduring factor to consider. However, humans have demonstrated willingness to exercise restraint, as they have with biological and chemical weapons. Moreover, human willingness to develop technology for evil is not the only factor to consider when contemplating how to deal with autonomous weapon systems. If these weapons are able to learn and advance, then they pose a danger to humanity as a whole, as humans may lose control over them. Thus, I would imagine that a ban would be agreeable to many, if not all, parties. (3) is weak since autonomous weapons have high systemic risk associated with them, i.e. becoming a danger to all humans. Thus, even if they are able to conduct warfare more ethically, they may eventually disregard their “ethical” nature. This systemic risk will only rise if an arms race for autonomous weapon systems occurs, as countries will pour significant resources into further developing these systems, increasing their capabilities and likelihood of learning how to disobey humans. (4) is thus also discredited since it builds off of (3).

    Overall, I don’t think Ackerman’s arguments merit any attention, as each individual point is rather weak. (3) is his strongest point, but the benefits of “ethical” warfare conducted by robots are counteracted by the rise in the systemic risk (of these very “killer robots” posing a threat to humanity) that will occur if their development is not banned. Thus, development of autonomous weapon systems should be banned in order to reduce/eliminate this systemic risk.

  15. The combination of outdated practices and destructive new technologies can lead to unprecedented human suffering; this is especially true in reference to warfare. With ICBMs, missiles, nuclear weapons, automatic weapons, and more, humans have an unprecedented capability to inflict immense suffering and provoke mutual destruction. I take the potential abuse of these weapons very seriously. However, I also agree with Ackerman that “blaming technology for the decisions that we make involving it is at best counterproductive and at worst nonsensical.” How do I reconcile these contradicting positions? Well, I argue that we should continue to develop technologies that can be weaponized, as long as they have viable peacetime applications. (For example, we should continue researching nuclear power options without pursuing the development of nuclear weapons.)

    The idea of making “autonomous armed robots ethical” implies that there is an ethical form of warfare, a very strong assumption that I am not convinced by. War is inherently terrible. The entire purpose of the practice is to kill and inflict so much pain on your enemy that they conform to your will. It is an outdated practice, as I mentioned earlier, and when advanced technologies are applied, it simply blows the scale of suffering out of proportion. It is perhaps too idealistic to hope for a complete end to traditional wars, but banning the use of extremely destructive weapons seems like a step in the right direction. While I disagree with John’s characterization of Obama’s involvement in Syria, I do agree that international precedents can be important.

    Now that I have established why I would ban autonomous weapons, I would like to highlight an important caveat. If there are peacetime benefits to be gained from developing a technology that can potentially be weaponized, I believe we should continue to explore and develop the technology (in a non-weaponized way) regardless of its potential misuse. Our efforts to prevent catastrophes should focus on limiting state abuse, rather than divesting from research and development.

    I find the parallel that Adam draws to gun control extremely provocative. However, I think the analogy is slightly misapplied. Guns are dangerous, and I do believe there has to be increased regulation to keep them out of the hands of the public. However, the mainstream left is not advocating for a complete ban of the technology. Guns are a useful tool for hunting and other activities, thus it would be counterproductive to ban them in every capacity. The outcry for gun control centers around better background checks, more due diligence, etc. It’s an effort to control the use of this technology, so there are fewer instances where it shifts from being useful to being dangerous because it was misapplied or used for the wrong reason.

    I think there should be a similar call for reform in the realm of warfare. Missiles, atomic weapons, nuclear weapons, etc. all already exist, and we cannot turn back the clock. That being said, we can control how we utilize the technology moving forward. We can pass international laws that allow autonomous weapons only in very limited capacities.

  16. In Ackerman’s article, he argues that we should focus on making automated weapons more ethical rather than trying to ban the weapons. In Russell, Tegmark, and Walsh’s response, they argue that the weapons should be banned rather than trying to flirt with technological ethics. Both articles make valid points; however, I feel that they are each also missing the other’s strongest points. Ackerman is not arguing that bans never work, but that this specific ban on automated weapons may be more challenging than with other weapons we have dealt with in the past. This is because such weapons may have potential value to our society as dual-use civilian products. As a result, he emphasizes that we should focus on preventing the production of specifically unethical automated weapons. Ackerman’s argument is flawed because automated weapons can be compared to chemical and biological weapons, which have mostly been successfully banned, suggesting that a ban is possible and may be the ultimate way to prevent the production of such weapons, as Russell, Tegmark, and Walsh discuss. Nevertheless, I feel that his point stands that outright banning automated weapons may prevent the growth of such value. Again, much like biological agents and nuclear power, automated weapons sit on a fine line between being awfully destructive and being incredibly useful. If we had completely banned those earlier technologies, we would have eliminated a huge proportion of the valuable tools and expertise we have today. Despite their value, though, it is necessary to implement safeguards against unethical use and development of the technology. Automated weapons are comparable, and it seems like simultaneously working to ban the weapons and creating measures to prevent their unethical production is necessary to develop a clearer allowance threshold for such weapons.

  17. While I agree that Yuyan makes some very good points about the flaws behind Ackerman’s argument, I must admit that employing armed autonomous systems in conflict situations may be a productive option for the future. Employing robots in place of humans in severe conflict zones could protect thousands of lives on both sides of a conflict and lower the risks of human error or collateral damage. Additionally, robots can be programmed to engage only in limited situations and under clearly defined conditions that prevent them from taking certain actions that could harm civilians or violate international laws of war. Although these programs can certainly be flawed, they would be flawed because of the man-made rules that are programmed into the AI units – something that is inherently problematic in human behavior and would certainly occur on the battlefield with living soldiers.

    I also agree with Ackerman that banning the technology won’t work, since the barriers to entry are just so low. Any nation or even non-state actor could get their hands on automated technologies on the cheap and adjust them to fit their fighting needs. However, placing limits on AI weapons’ operational capacity, like those of other arms reduction treaties, could be considered a viable option to avoid an AI arms race. It’s in the interests of all states, much like it was in the interests of the US and the Soviet Union when they signed START I, to lower the number of AI weapons in use and to limit how or when they could be used. Indeed, creating a sort of international “rules of the road” agreement for AI weapons, although not perfect, would set a global standard that all nations would be expected to abide by, much like other conventional arms reduction treaties that have been implemented between states. This could also be coupled with conventional verification mechanisms that have been employed by the international community for situations like conventional conflict ceasefires, nuclear proliferation, and chemical weapons development.

  18. I think Peter hits the nail on the head – there’s a lot to be gained from the innovations that emerge as side products of developing advanced weapons. However, Allison brings up a good point about how the objectives of war become obfuscated when two parties engaging in military conflict choose to use automated weapons. When the two parties engage in conflict, the length of the conflict would actually likely increase, since parties can now sacrifice robot troops before resorting to human troops. So while the usage of ‘killer robots’ can seem like an ethical way of skirting bloodshed at first, it may actually just be an expensive way of achieving similar amounts of bloodshed. And if a military finds itself truly budget constrained, robots will only be deployed insofar as the cost of human labor exceeds the cost of developing new robots. Moreover, since when has technological innovation been necessitated by the military?

    So, I disagree with Ackerman’s argument on multiple fronts. For one, I do not believe that autonomous armed robots can perform more ethically than armed humans in combat, and I do not believe that developing a way to make autonomous armed robots ethical is the solution to the killer robot problem he presents. Instead, I contend (regarding his argument that banning the technology wouldn’t solve any problems since humans would still harbor a willingness to use that technology for evil) that restricting humans’ capacity to act on that willingness would be an effective way to curb the use of killer robots. Unless these robots are truly complex, they cannot make ethical decisions that are not based upon some sort of objective value that is fed into the robot. For example, a robot can differentiate between killing 2 or 3 people in terms of value, but it is a whole other issue to teach the robot who may legitimately be killed.

    So what would the ban look like? Like others have suggested, a rollback of nuclear and armed weapons and international seizure may work. But unlike others, I cannot and do not view this similarly to gun control. Guns have many uses. Armed killer robots have one. Go figure. Guns are easy to obtain. Armed killer robots – not really. So other than international cooperation issues, I don’t really see anything stopping parties from banding together to draft an anti-killer-robot bill.

  19. I agree with the letter that “Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.” While many others, in this comment section and in the response by Russell, Tegmark and Walsh, have rightly taken issue with Ackerman’s last three points (as summarized by the response), I don’t think enough attention has been devoted to part of his first point, that “banning weapons is unlikely to succeed.” In fact, I think a ban will almost certainly fail in some respects.

    I want to emphasize how easy it is to build an autonomous weapon. I’m somewhat surprised that non-state actors haven’t already developed crude autonomous weapons. It would be fairly easy for someone familiar with AI (not even an expert) with access to standard weapons (such as explosives or guns) to build an autonomous system that uses such a weapon. See this article/video [1] on a programmer who built a system to shoot squirrels (but not birds!) eating from his birdfeeder with a squirt gun. It’s worth noting that Ackerman’s argument about quadcopters has basically been vindicated, as ISIS is now using commercially available drones (some of which are quadcopters) to attack Iraqi forces [2]. Thus it seems to me that it is only a matter of time before similar groups start using autonomous weapons built from peaceful AI components. It’s actually hard to imagine these groups *not* using self-driving cars to deliver bombs, once that technology becomes widely available. Arguments that a ban would prevent these, and that point to “successes” in banning chemical weapons and cluster munitions, are comparing apples and oranges. Those weapons are simply *much* harder to develop than crude autonomous weapons.

    So why ban autonomous weapons at all? While crude autonomous weapons will be easy to develop as Ackerman correctly points out, advanced autonomous weapons will (probably) not be easy to develop. Thus a ban may be successful at preventing the development of game changing autonomous weapons that have the potential to be “weapons of mass destruction” as the response article imagines may be possible. While car bombs that are delivered by a self-driving car rather than a suicide bomber are obviously a negative development, they don’t merit the same kind of alarm that something like a completely autonomous robot bomber plane, or a Terminator, does. Thus I think it is worth specifying what kind of weapons the ban intends to prevent, when advocating for its introduction.

    1. https://gizmodo.com/5896702/squirrel-tracking-water-cannon-is-a-triumph-of-the-nerds
    2. http://www.defenseone.com/technology/2017/01/drones-isis/134542/

  20. There seems to be a false dichotomy drawn between states’ interests and humanity’s interests in the arguments. States are seen as the enemy of the people in the argument of Russell, Tegmark, and Walsh, in that they will pursue their own goals of establishing military dominance at the potential risk of catastrophe later, while Ackerman states much the same thing, only arguing that these risks should be mitigated. In a sense, both accept the fact that the development of weapon systems controlled by AI is a huge threat to humanity; they see the problem as being the divergence of state interests and the risks to humanity as a whole.

    Instead, it might be beneficial to look at how states have responded to catastrophic threats in the past: namely, nuclear weapons and chemical weapons. States recognized the dangers of proliferation and created the Non-Proliferation Treaty, yet it has been broken again and again when states’ interests lined up against it. Comparatively, the ban on chemical weapons has been largely successful, but they are still deployed, as was recently seen in Syria. For nuclear weapons, states with the capacity to cause global nuclear armageddon soon came to the realization that mutually assured destruction made the weapons systems they held too dangerous to actually be used. This has prevented the use of nuclear weapons, even in a limited capacity. Instead, states have had to pursue their goals only in other manners short of direct war and through proxies. For chemical weapons, states with the capacity to use them have done so only in rare instances. So the question here is, how can we ensure that states’ interests align with the limitation of these weapons?

    The answer will still lie with an international regime of control. No one would be so naive as to expect this to mean that states won’t develop the technologies to use automated systems; rather, the point will be to establish safeguards that keep these systems away from the types of technologies and weapons that can cause global catastrophe. Thus, the point of establishing the regime would not be to eliminate weapons systems controlled by AI but to limit the extent of those systems so as to prevent an unacceptable level of risk for humanity.

  21. I agree with Ackerman’s point, as summarized by Russell, Tegmark, and Walsh, that banning a weapons system would be unlikely to succeed. Despite international frameworks and conventions against the production and usage of chemical and biological weapons, small, independent actors and even states have clandestinely produced and eventually used them. Yet there seems to be a much stronger consensus among most states and the general public on the ethics, or lack thereof, of chemical and biological weapons than of autonomous weapons. Part of the controversy may stem from a current lack of understanding of how these weapon systems would be used and monitored. Chemical and biological weapons have no “dual nature” because we do not consider them a potential tool for peacekeeping or other beneficial acts. In contrast, the potential of autonomous weapons appeals to state and non-state actors who would like to minimize their own human casualties while achieving their goals. However, they are wary of the proliferation of this technology once it is developed. Only when it falls into the “wrong hands” will the technology be seen as being used with ill intent.

  22. Ethics in combat are subjective, and ergo, if a human programmer codes the AI at all, automatic autonomous weapons are also subjective (ergo not more ethical). Therefore, I completely disagree with Ackerman’s argument. I think that there is always a willingness by humans to use technology for evil (the last 50 years in weaponry really highlight that), but a hypothetical ban treaty would at least make it harder to enter into this new, frightening form of technology. And lastly, the idea that a weapons ban is unlikely to succeed so we shouldn’t even try is as ludicrous as it is defeatist. Our current weapons ban treaties are so hard to implement because they are retroactive – we used nuclear weapons in WWII, chemical weapons in WWI/WWII, and biological weapons in global events, so the technology is already out there. Rolling back technological growth is much harder than creating a barrier to entry.

    What would a hypothetical “killer robot” ban look like? Perhaps in the form of an agency like the IAEA, or with voluntary submissions, or with a ban on certain key technologies. I think that there is general consensus (as demonstrated by over 2500 scientists signing the manifesto against automatic autonomous weapons) that this needs to be done before something major happens that starts an arms race, so international governments need to be proactive on this front. There will always be issues with rogue states and actors, but if general consensus is there, then we need to act upon it before countries develop a monopoly on this technology, thereby creating a global imbalance in yet another technological field.

    If automatic autonomous weapons were to be introduced into combat, it would forever change the rules of warfare. As of right now (although they are not always followed), the four Geneva Conventions govern the rights of combatants and non-combatants. But, as Bostrom discussed, AI would not be able to comprehend a social construct like the law, and would therefore not be able to follow these guidelines. Could one just code them as “preferences”? I have a hard time believing that that is possible. Would that then mean that any actor within range is considered a combatant by these weapons? I am fearful that there would be very little distinction, and retroactive punishment would be almost impossible, as nation-states could simply claim that the AI went “rogue”.

  23. The issue of the use of automated weapons systems is a complicated one, but I do agree with Yuyan’s criticism of the points raised by Ackerman in his piece. The main issue, as mentioned above, is the fact that these weapons systems are designed by humans, and there is no guarantee that the designers will be ethical in all cases. No amount of regulations or international treaties can guarantee that terrorists won’t be able to create or reprogram these systems to harm civilians or carry out other unethical operations, and it would certainly be easier to simply ban their development in the first place.

    Of course, a potential counterargument is that all technology is developed by and, more importantly, used by humans, who have the potential to misuse it for unethical purposes. This doesn’t mean that we should ban any technological progress. While this is generally true, I would argue that the potential for destruction that automated weapons systems represent means that we should be much more careful with them than we would be with most (although not all) technology. We are not talking about just AI or computer systems here – we are talking about weapons, first and foremost, and like all other kinds of weapons, we need to be extremely careful in our approach to their development. In hindsight, it would probably have been better to ban nuclear weapons before they numbered in the thousands, but we didn’t, hence the anxieties surrounding a potential nuclear war. I would say that automated weapons systems represent a similar development, if not in scale, then certainly in essence.

    Russell, Tegmark and Walsh make an excellent point about the difficulty of enforcing regulations regarding the ethics of these weapons systems. I do agree that banning these systems is vital, since their existence would alter the rules of engagement and likely make wars more prevalent. When we think about ‘killer robots’, they are far more expendable than human soldiers, and moving warfare towards automated systems would mean that the human costs of a war – a major deterrent – would be significantly lower, therefore making it less costly for states to initiate and engage in warfare. The potential loss of resources is nothing compared to the potential loss of thousands of lives – something that would be less likely with robots fighting our wars – and might therefore make war a more attractive option for dealing with other states. This is another potential danger of the automation of warfare that should be taken into account.

  24. Yuyan brings up an important counterargument to Ackerman’s proposal of making autonomous weapons ethical with Russell, Tegmark and Walsh’s point: “how would it be easier to enforce that enemy autonomous weapons are 100 percent ethical than to enforce that they are not produced in the first place?”

    Perhaps what this alludes to is that AI, even if it is to eventually “replace” humans once it surpasses our cognitive abilities, will always be a product of human creation. In that sense, the machines will always bear traces of their creators. And if we cannot even hold other human beings ethically accountable, how are we to ensure that these robots – the progeny of a species still morally flawed on multiple levels – are not manipulated in the process of their creation to reflect the poor morals of their creators?

    As Bostrom notes, when creating artificial intelligence, we “engineer their motivation systems so that their preferences will coincide with ours.” If these robots, as Bostrom posits, have the potential to exceed our speed, performance, and complex/abstract thinking, then these robots have the potential to vastly exceed our capabilities for and comprehension of “evil” as well. If their creator is someone with malicious intent, imagine the destructive potential of a robot with speed, collective, and quality superintelligence programmed to carry out those motives.

    Given these complexities, as well as Mary Helen’s point about the gray areas that autonomous weaponized robots may not be capable of successfully navigating, it appears that the best method of preserving any sort of ethicality is to, as the 2500 AI scientists have agreed to do, uphold a ban against an AI arms race.

  25. After spending a lot of time in this class studying nuclear technology and nuclear weapon proliferation, it seems that the best course of action is to institute a ban on AI weaponry development.

    It is important to establish a ban before an AI arms race occurs, as it is much harder to control, monitor, and reverse proliferation once it has occurred. When the nuclear arms race occurred, countries ultimately became less secure. The nuclear weaponry systems were vulnerable to sabotage, accidents, and political winds. While the ability to harness nuclear energy is providing tangible benefits to society, the uncontrolled development of nuclear weapons created national and international security challenges that we continue to face, such as the theft of fissile material. Likewise, AI weapons may provide greater security by gradually removing the human component from war, but without restrictions and oversight, national AI weapons systems can provide a similar existential threat to society. Ackerman argues that we must, as a society, debate the pros and cons of AI weapons technology to determine how best to use it and then determine whether restrictions are necessary. I think that we cannot have a reasonable debate unless there is an initial ban and control of AI weapons development. Otherwise, countries will develop AI weapons at an uncontrollable pace, and could enter into an arms race before there has been sufficient reflection on the benefits and dangers of AI technology. Additionally, AI weapons systems must be securely managed, as they can be hacked and sabotaged in ways that conventional weapons and fighters cannot. Implementing restrictions after the technology has been developed and deployed would be counter-productive and much more challenging.

    Additionally, the ethical argument is very compelling. Ackerman makes a strong point that the technology is never at fault, and that it is actually how people choose to use technology that creates problems. However, his logic does not accept the possibility that the mere presence of a technology encourages the behaviors that society would like to avoid. Ackerman mentions ballistic and cruise missiles in order to provide an example of how technology has already lowered the bar for conflict and military engagement, but he sees this as a weak excuse for outright banning AI weapons systems. If a society’s ultimate aim is to avoid war and/or the harmful consequences of war, the development of technologies that make war even easier will undermine this objective. And aside from soldier casualties, war creates civilian deaths, refugees, infrastructure damage, and economic devastation. Accepting AI weapons development that avoids the first problem, but perpetuates the latter problems, does not seem like an ethical solution to the brutality of war.

  26. I generally agree with Yuyan. While Ackerman has a point in saying that the root problem is the willingness of humans to use technology for evil, this definitely does not mean that there is no merit in making it more difficult for humans to do so. Take the example of firearm ownership. Any instance of using a gun to harm innocent people is mostly the fault of the shooter rather than the existence of firearms and manufacturers. However, this doesn’t mean that we should allow anyone and potentially everyone to own firearms, as this would vastly increase the frequency of such incidents. By limiting the availability of this potentially abusable technology, we reduce the risk of having it be used maliciously.

    A similar argument can be applied to armed autonomous systems. A determined and resourceful attacker can certainly modify existing non-weapon AI software to produce their own armed autonomous systems. However, ensuring that such systems are not readily available means that there is a higher barrier to such an event than if such weapons were already available to be used and/or stolen. While it is true that banning this technology does not solve the willingness of humans to use technology for evil, we can certainly reduce the potential negative effects of this technology by banning it.

  27. To add further depth to the discussion, I would like to also point out that a simple ban on the usage of lethal autonomous weapons would most likely not be enough to singlehandedly prevent the production and proliferation of such weapons. As both sides of the debate mentioned, there are compelling incentives for states of the international system to independently conduct research into the automation of lethal weaponry. This is particularly the case since, in comparison to nuclear technology, there are lower technological and material barriers to their production and use, and they are easier to conceal. Just as the Chemical Weapons Convention did not compel countries like Syria to eradicate their stockpiles of chemical weapons, treaties that call for the banning of LAWs cannot reasonably be expected to be the sole solution to the problem.

    Furthermore, while Russell, Tegmark, and Walsh do recognize that negotiating and implementing a ban would be difficult, it seems they grossly underestimate how difficult and perhaps unrealistic such a process might truly be. Only nineteen nations are calling for the ban, while some preeminent nations, such as the US, have instead adopted an official policy governing the development and use of autonomous weapons. Department of Defense Directive 3000.09, issued in 2012, is largely centered around this principle: “Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force”. What constitutes an “appropriate level” of human judgment is not defined, but the written policy nevertheless places the US within the contingent of states that favor retaining human control over weapons systems. It should be noted, however, that the directive is five years old as of 2017, which means that it must be renewed and/or updated this year, or expire.

    Again, while Russell, Tegmark, and Walsh do not claim that a ban on the usage of autonomous weapons would be an end-all answer to the proliferation of such technologies, they also do not state explicitly that a broader, multilateral approach is necessary. The ban may be an important first step, but it would not be sufficient alone.

  28. I agree with Yuyan. Ackerman appears to have several faulty arguments and analogies in his article. First, it seems to me to be a very big gamble to assume that we can pre-program offensive autonomous AI to be ethical and to follow pre-determined modes of acting. Not only would this be difficult to monitor on our side, but how could we expect to hold our enemies to this standard? Moreover, what ethical standard would these robots be held to? We cannot assume that our conception of morality is universal or that it will be embraced by all of our adversaries. Second, Ackerman makes a faulty comparison between autonomous cars and offensive autonomous robots. Ackerman misses the key difference between the two technologies; namely, offensive robots are designed specifically to inflict harm, while autonomous cars are designed specifically to reduce harm. Third, Ackerman treats the prospect of an offensive AI “screw up” very nonchalantly and does not seem to consider the disastrous consequences that could result. Not the least of these, as mentioned in the open letter, is the potential for a public backlash against AI in general that eliminates its potential for positive developments.
    I believe it is the U.S.’s responsibility as the major world power to avoid leading and pushing ahead with a global AI arms race that could have disastrous consequences. Ackerman seems to miss the fact that there are and will always be evil people in the world willing to use technology for bad reasons. This evil is what we cannot ban or change. All we can do is make it much more difficult for people like this to obtain and use the kind of technology that will make their attacks easier.

  29. I agree with Coy and Yuyan in their analysis of Ackerman’s article. Ethical limitations on AI are essential for preventing what we consider to be war crimes enacted by offensive autonomous AI. However, while there are some predetermined sets of limitations on war crimes, these are apt to change, as they did in the post-World War Two period, when the Geneva Conventions were established. Additionally, different states have different interpretations of acceptable behavior from soldiers.

    However, I think the real risk in the development of autonomous offensive robots is inequality of development. The fear expressed in the Open Letter from AI and Robotics Researchers is that an arms race will develop to create ever more effective murderous robots, which will eventually fall into the wrong hands. However, the upside to an arms race, as opposed to black-market development, is the one stated in the Open Letter: there would be fewer casualties if both sides were armed with offensive AI, because even in mixed AI-human troops there would be fewer human lives at risk. Furthermore, if one side were to develop AI while the other was unable to, then there would be a huge inequity in casualties and therefore a large advantage and motivation for one side to engage in warfare. If everyone had robot fighting machines, it could either guarantee mutually assured destruction or remove the additional incentive to engage because there is no guarantee of success. Adam uses one argument from the gun control battle above, but I would like to raise another point from that battle: people will acquire guns on the black market, so wouldn’t it be better for the good guys to be armed too?
