On Autonomous Weapons

Our readings on autonomous weapons featured some very direct back and forth on the idea of banning “killer robots.” I think the issue can be split into three broad categories, focusing on the ethics of the development and use of autonomous weapons, the issues they face in international law, and the practicality of their use and prohibition.

Ethics. Gubrud raises the idea that it is contrary to the principles of a shared humanity to allow machines to determine an end to human lives. There is some value in humans making the decision to kill. Opponents of this idea believe that humans killing other humans is no more ethical than robots killing humans, and that the substantive question in this issue relates to matters of practicality. Is it more ethical for a human to be the decisionmaker, and if so, is that enough reason to oppose the development of these weapons?

International Law. Gubrud also presents the argument that autonomous weapons should already be illegal under international law. He argues that robots cannot satisfy the principles of distinction and proportionality which determine just conduct in war; AI can neither reliably distinguish combatants from noncombatants nor weigh collateral damage against military gain. Ackerman opposes this view in his article, claiming that the codified Rules of Engagement are something that an AI can certainly understand and base decisions upon; Gubrud mentions the US’s “collateral damage estimation methodology”, which could serve as a base for a robot to determine proportionality. Neither side claims that the data-gathering and decision-making abilities of the technology are yet adequate to meet legal requirements; in your opinion, will they ever be? What advantages would robots have in this regard, and what challenges would you anticipate for those working on this technology?
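As a purely illustrative aside, a machine-checkable version of such rules might look something like the toy sketch below. Every field, weight, and threshold here is a made-up assumption rather than anything drawn from the actual collateral damage estimation methodology, and the real difficulty is whether the inputs could ever be estimated reliably in the first place.

```python
# Toy sketch only: a hypothetical distinction-and-proportionality gate.
# All names, scales, and thresholds are invented for illustration.

from dataclasses import dataclass


@dataclass
class StrikeAssessment:
    expected_military_advantage: float  # abstract utility score (hypothetical scale)
    estimated_civilian_harm: float      # expected noncombatant casualties
    identification_confidence: float    # 0.0-1.0 confidence the target is a combatant


def is_engagement_permissible(a: StrikeAssessment,
                              min_confidence: float = 0.95,
                              harm_tradeoff: float = 0.1) -> bool:
    """Return True only if both toy thresholds are met; values are placeholders."""
    if a.identification_confidence < min_confidence:
        return False  # fails the distinction principle outright
    # Crude proportionality test: expected harm must be small relative to expected advantage.
    return a.estimated_civilian_harm <= harm_tradeoff * a.expected_military_advantage
```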

On a different legal note, Gubrud also brings up the Martens Clause, supporting the idea that strong public consensus against autonomous weapons can also help determine their standing in international law. What role should public opinion play in this legal question, and what should be considered along with public opinion?

Practicality. There are a number of issues related to the practical implications of the development or ban of autonomous weapons.

First, would a ban even be effective? Gubrud points to an already developing international consensus for caution with the technology as a sign that a ban could take shape and work, and he, Russell, Tegmark, and Walsh point to successes in banning other types of weapons. Ackerman counters by claiming that robots offer too much of a technological advantage for a state to resist and that the technology is too accessible, even to regular citizens, to effectively control; in his view, trying to ban the tech would be a waste of effort better devoted to preventing abuse. We’ve studied weapons bans as they relate to nuclear, chemical, and biological weapons; is the issue of controlling autonomous weapons fundamentally different? What effects would a ban have on the use of robots for domestic suppression? Terrorism? Are there alternate means to prevent abuses?

Another aspect to consider will be the effect on international stability. With no emotional attachment to these robots, and little political cost for their loss, will they lead to riskier, more aggressive, and more frequent military actions? What are the prospects for an arms race featuring dozens of countries, similar to the broad interest and investment in drone technology today?

What will be the effects on consumer technology? The open letter opposing the development of autonomous weapons argues that public backlash against killer robots will hurt support for the entire fields of robotics and AI. Ackerman alludes to the idea that military research is a key driver of progress in consumer technology.

Finally, is there any aspect of the debate that these authors failed to address? — Trevor

14 thoughts on “On Autonomous Weapons”

  1. Thanks for the post, Trevor!
    Autonomous weapons are a fascinating debate. On one hand, they offer an effective means to limit human casualties in war with great speed and efficiency. Yet as many critics point out, these AIs lack the ability to align their calculations with human judgment and emotion, which could cause greater harm to civilian populations and make them an ample tool for repression. Both sides raise convincing arguments, so the question remains whether or not to allow these weapons to become fully developed. As the future of warfare continues to progress toward gaining technological advantages on the battlefield, AIs could fully transform the face of modern warfare.

    In my opinion, banning these weapons would be difficult. As Ackerman points out, the technology is substantially cheaper than nuclear materials and can be easily developed by non-state actors. Moreover, these non-state actors would most likely not be influenced by international humanitarian law, so civil pressure based on the principle of shared humanity would be ineffective for fully banning the future use of such weapons. Because of this, there need to be limits at the international level, but not a full-scale ban. The focus would then need to be more defensive, as I see the use of autonomous weapons by non-state actors as inevitable.

    Another reason for limiting their use is the argument that autonomous weapons would allow for more aggressive and provocative engagements. I agree that widespread acquisition of such weapons would result in more careless engagement, as risking human lives is often an effective deterrent to entering conflicts. If there is international pressure to abstain from the full use of these weapons, similar to what we have for nuclear weapons, then countries would not be as willing to go to war, again similar to our current system. Instead, there needs to be a medium between robotic and human engagement in future wars, allowing us to retain full responsibility for the robots’ actions while using their abilities to our advantage. Because this technology offers such an advantage in war, countries will continue to develop it, but an international code of conduct and international pressure may help limit the negative impact of fully autonomous weapons, while also preparing us to deal with the non-state actors who will obtain them, ban or no ban. This is where Gubrud’s argument on shared humanity will be most beneficial: civil pressure can go a long way in limiting their use at the international level, but I do not foresee states willfully banning these weapons.

  2. While it seems reassuring to argue against Gubrud along the lines of agency, that is, that autonomous weapons can only do as much harm as humans program them to do, I find this line of thinking highly problematic. In fact, as we have seen, implementing artificial intelligence in a much simpler environment, like the stock market, can still have disastrous effects. One drawback of highly functional AI is that machines attempting to learn from feedback can completely misinterpret the situation at hand. Bostrom discusses the 2010 Flash Crash, when algorithmic trading machines mistook high trading volume for liquidity and confidence in price stability, when it was really panic trading. Such effects can cascade and cause a machine to make exponentially more mistakes than its programmer could have anticipated. Robots should not be weaponized and let loose autonomously unless it can be verified that the programming logic is flawless and foolproof against drawing incorrect inferences, which, at the current time, is an impossibly high standard. The only other option would be to directly limit the maximum damage the robot could commit, in case that were what it “wanted” to do. For example, programmers could place strict limits on how much ammunition the robot could use before it must be re-verified for battle by a human. Otherwise, without some type of fail-safe, there is no telling how much human life could be lost due to a few lines of poor programming.
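    As a rough illustration of the kind of fail-safe I have in mind (the class and method names below are hypothetical, not any real weapons-control interface), the logic might look something like this:

    ```python
    # Hypothetical sketch of a hard fail-safe: the platform refuses to fire once a
    # fixed ammunition budget is spent until a human re-authorizes it. The class and
    # method names are invented for illustration, not any real weapons-control API.

    class EngagementGovernor:
        def __init__(self, max_rounds_per_authorization: int = 20):
            self.max_rounds = max_rounds_per_authorization
            self.rounds_fired = 0
            self.authorized = False  # starts locked until a human signs off

        def human_authorize(self) -> None:
            """A human operator resets the budget and re-enables engagement."""
            self.rounds_fired = 0
            self.authorized = True

        def request_fire(self) -> bool:
            """Permit a single shot only while authorized and under budget."""
            if not self.authorized or self.rounds_fired >= self.max_rounds:
                self.authorized = False  # lock out until a human intervenes
                return False
            self.rounds_fired += 1
            return True
    ```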

  3. Trevor, thanks for leading this discussion with carefully thought-out questions. To answer some of these questions, such as whether an effective ban on autonomous weapons is possible and whether it is more ethical for a human to be the decision maker over an enemy’s life or death, the pros and cons of autonomous weapons must be weighed. The main argument in favor of using autonomous weapons is that they will spare the lives of thousands of human soldiers on the user’s side, which is objectively a good thing. There is almost certainly agreement that the fewer human deaths there are, the better. However, this protection of human life only accounts for the lives on one side of the fray. A point that Trevor brings up is the likelihood of far riskier and more frequent military engagements with autonomous weapons, which could lead to countless civilian deaths, especially considering that even human-controlled weapons, such as drones, have caused the deaths of non-targets or mistaken targets (the three peasants thought to be Osama bin Laden and his associates, for example (Gusterson 15)). Considering that humans are prone to targeting errors even with the improved surveillance technology since 2001, I question the long-term reliability of autonomous weapons that are programmed by humans. I also agree that an AI’s inability to make the same judgments as humans is dangerous. Creating a robot equal to the best military tactician or soldier is a feat far beyond our years and resources.

    Despite acknowledging the common flaws of human judgment, I still think it is more ethical for humans to be the decision makers over the life and death of an enemy, mainly because humans can consider the short- and long-term consequences of their actions. Consider the death of the Pakistani mother and grandmother Mamana Bibi and the testimonies from her family. The emotional responses to drone strikes and deaths of the innocent can be registered by humans and direct a change in strategy and methods. An autonomous robot, by the definition given by Gubrud, is focused on the target and cannot make in-the-moment decisions to engage a target in a different way. Humans have more control, and it would be better to continue the use of human-controlled drones than to use uncontrollable AI.

  4. Thanks Trevor for the post! The conflict between national interest and public opinion here is fascinating: as Gubrud demonstrates, the United States seems to have already decided in favor of autonomous weapons (34), but a 2013 poll revealed that “Americans opposed to autonomous weapons outnumbers supporters two to one, in contrast to an equally strong consensus in the United States supporting the use of drones” (Gubrud, 40). In such a context, it follows that, if the U.S. is to continue its research and development of autonomous weapons, the interest of national security must outweigh concerns over its incompatibility with longstanding humanitarian conventions and current accountability frameworks (see Gubrud 35-36). This environment bears some similarity to the international regulatory attitude towards nuclear weapons, albeit with some nuances. Both nuclear weapons and autonomous weapons are extremely powerful (although the pendulum swings much closer to nuclear in terms of capacity for harm; the key is that both contravene the “distinction principle” referenced by Gubrud 35); both could pose significant threats to populations worldwide if they were to get into the wrong hands (see Gubrud 36); both merit at least some type of regulation; and importantly, the world at large seems to be in favor of regulating both kinds of weapons (see Gubrud 34). Given that autonomous weapons seem much more likely to be used (partially because the repercussions of their use are almost negligible compared to those of nuclear weapons, partially because the loss of human life is not at stake, partially because autonomous weapons are suited for smaller tactical missions and automated defense systems – see Gubrud 38-39), an outright autonomous weapons ban seems very, very unlikely. The question of effectiveness is disputable enough to make it even more so. We should focus our efforts on preventing abuse, whether it be through tamper-proof programming (Gubrud 36), a “well-defined, immovable, no-go red line” (Gubrud 34), or some other route.

  5. Trevor, thanks for starting this discussion and delineating it so clearly. There are so many contrasting perspectives and things to consider that I am very unsure of what my own stance is. But that is why this is so important. With regard to ethics, I understand and agree with where Gubrud is coming from: human agency is crucial. As Josh mentions, a few lines of programming could wreak havoc and cause unforeseen tragedies in the taking of human life. Nevertheless, I don’t find the question of who makes the decision to be enough of a reason to oppose the development of the weapons, since human life is lost regardless of whether a human or a robot does the killing. Thus, we ought to oppose autonomous weapons on the basis of the Martens Clause and international law. Deterrence measures that prevent the development and deployment of these weapons would act as a safeguard, given that autonomous weapons may blur and overstep the principles of distinction and proportionality because of the relative “simplicity” of the rule-based procedures from which they operate. On the other hand, robots might be able to justify their actions against a series of proportionality metrics or against the distinctions made between combatants and noncombatants, and do so more judiciously given the absence of human error (see the drone strike on the MSF hospital). Given that the public is at stake, public opinion is, in my view, absolutely essential to the formation of new legal procedures and rules governing autonomous weapons. Banning them would lay down a strict precedent, and deterrence measures like those mentioned above, coupled with strong international norms, would serve as a good starting point. But ultimately, it rests on incentives: states and non-state actors ought to consider the risks of autonomous weapons rearing their head. Arms races and aggression, I think, will increase should autonomous weapons be introduced on the battlefield. With regard to consumer technology, as Bostrom noted, AI and robotics are making significant progress in other arenas. Until more can be discerned about how autonomous weapons would act, caution is best.

  6. Thanks for the thought provoking post, Trevor!
    Your questions become even more interesting when you take into account what we learned about last week: what would happen if these autonomous weapons got hacked? As we know, everything can be hacked, from a rock to a crockpot. Say the US has deployed a number of autonomous killer robots to a conflict. The adversary (assuming that they have the technological know-how) could hack a robot and turn it against US troops, or, in an even more extreme example, they could hack it, use it to kill innocent people, and then blame it on the US. If this is a real concern for the international community, then a ban may in fact work. For a ban to work, all of the actors, both big and small, need to be convinced that the risks are greater than the potential gains of autonomous weapons. I believe that the big actors, especially the US, would be more concerned that the weapons they deploy could be turned against them and that innocent people could get hurt. The smaller actors, even individual actors, would be more concerned about the effectiveness of their attack, especially if they are going up against a power with a lot of hacking resources, as their attack could be thwarted by damaging the weapon beyond repair before it is even deployed or by simply telling it to attack something else. If the international community can be convinced that these are credible threats, then a ban might be easier to put in place.
    Thoughts?
    Related readings:
    http://www.scientificamerican.com/article/ban-killer-robots-before-they-become-weapons-of-mass-destruction/
    http://motherboard.vice.com/read/when-the-killer-robots-arrive-theyll-get-hacked

  7. Trevor, great post! I’d like to address the question you raise on whether a ban would be effective. I think it’s important to note, as Gubrud does, that during diplomatic discussions “a good deal of time is apt to be lost in confusion about terms, definitions, and scope” (Gubrud 37). A ban on autonomous weapons at this point would probably not get very far, as the legal definitions needed to craft an effective ban are not yet in place. Countries like the US would probably continue to do what they’re doing now and evade defining the weapons systems they use as purely autonomous, claiming that their weapons still require human decisionmaking and discretion. To move forward in these diplomatic discussions, these definitions must be clearly set, and I believe the principles that Gubrud puts forward are compelling. Without clear definitions and terms, it’s much easier for states to get around any hasty framework that would come out of a ban on autonomous weapons.
    I’m hesitant to say that public opinion should play an important role in this process. Gubrud points out that governments already in favor of autonomous weapons “will seek to manage the issue as a public relations problem” (34). Public opinion is a very unstable thing and it changes in response to an enormous number of factors. Relying on the public and even to some extent civil society is insufficient; international organizations and governments must set clear guidelines on how the development and existence of these weapons should continue, but first they need to create some clear definitions.

  8. With regard to policy surrounding the legality and morality of autonomous weapons, since no fully autonomous weapons are used in modern warfare, there isn’t much policy in place. However, in May 2014, the High Contracting Parties of the United Nations Convention on Certain Conventional Weapons had an extensive discussion regarding the moral and legal concerns surrounding autonomous weapons. Yet many of the issues discussed will not arise until some unknown date, possibly many years away. Furthermore, in 2012, the DoD released Directive 3000.09, which “establishes DoD policy and assigns responsibilities for the development and use of autonomous and semi-autonomous functions in weapon systems, including manned and unmanned platforms.” The release of this policy made the United States the first country to have an official statement regarding autonomous weapons. In the directive, the DoD declared, “Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” However, the directive must be reissued, cancelled, or certified current within five years of its publication, or it will expire in 2022. Therefore, policy surrounding autonomous and semi-autonomous weapons is due to change in the United States, as well as in countries around the world. Current policy is tough to create, for autonomous weapons are not currently in use during military conflict. However, with the increasing use of semi-autonomous weapons, such as the drone, military organizations and many governments are beginning to realize the need for new, more modern policy surrounding these weapons systems.

    Source: http://www.dtic.mil/whs/directives/corres/pdf/300009p.pdf

  9. Thank you for posting these thoughtful questions. I would like to focus on the ethics behind using autonomous weapons (like drones).

    Many commentators view autonomous weapons as being unethical because they make humans become detached from and desensitized to gruesome acts of violence. Gubrud claims that “autonomous weapons present an abstract, unrealized horror, one that some might hope will simply go away” (Gubrud, 34). Gusterson describes “drone operators’ perspective [as] remote and objectifying” (Gusterson, 8-9).

    These arguments oppose autonomous weapons because their usage fosters a certain disrespect (or disregard) for human life. However, if one looks at the reasons why a government might use autonomous weapons in the first place, their usage actually arises from the opposite phenomenon.

    One of the main justifications for using autonomous weapons is that these weapons protect more soldiers from being killed in battle. By using these weapons, the government is affirming that it does respect life and will try to find any opportunity to keep a soldier out of harm’s way.

    Obviously, there are reasonable limits to the development of autonomous weapons. However, on a basic level, I think that you cannot consider the ethics of these weapons without also remembering the men and women who, if these weapons had not been utilized, would have had to put themselves in danger.

  10. Trevor, thanks for kicking off such an important discussion with a wide variety of thought-provoking questions! I would like to discuss the efficacy of using international law to limit the proliferation of autonomous weapons. Like Mitch, I agree that a ban on these weapons will not currently get very far, but I think this is because of the decision-making calculations state policymakers would make in spite of the ban.

    As others have pointed out, one of the big justifications for autonomous weapons is that countries can avoid putting their own soldiers’ lives at risk. Some posters have understandably responded by describing the huge loss of innocent life that can occur in the country being attacked if autonomous weapons fail to appropriately assess a situation on the ground. While I think that is a legitimate argument from an ethical standpoint, it assumes that a policymaker in, say, the United States cares as much about the lives of the people in the country being attacked as about the lives of American citizens. Unfortunately, I think it is unreasonable to expect that a U.S. policymaker, or any domestic policymaker in any country, would weigh the lives of non-citizens as heavily as those of their own citizens. At this point, we just do not live in a world in which the majority of people see themselves as belonging to a “global citizenry.” Given that, a policymaker deciding whether to ban autonomous weapons must compare the domestic benefit of saving their own soldiers’ lives against the potential international cost of an autonomous weapon attack gone awry. While this is, of course, a major simplification of the factors at play, I think most statesmen would choose to protect their domestic interests, regardless of potential international backlash. For many politicians, explaining to a domestic audience why a soldier had to die in battle would likely be more damaging to their career than international criticism.

    Thus, I do not think that international law would be an effective tool at this time. Some might counter by pointing out the success of the NPT. While the NPT has been very successful at using international norms to limit proliferation, the reasons that treaty has been so successful do not necessarily translate to the discussion about autonomous weapons. For the NPT, the major world powers recognized (and had seen) the devastating effects of nuclear weapons, and their domestic constituencies understood the dangers posed by a world filled with nuclear weapons. Since I believe compliance with a future autonomous weapons ban would need to be driven by domestic considerations, I do not think an international ban would be effective until the citizens of world powers think that the risks caused by the spread of autonomous weapons outweigh the benefits of protecting their fellow citizens by using autonomous weapons. Until that day comes, however, I could see banning autonomous weapons being a hard sell in many domestic settings.

  11. Trevor, thanks for your thoughts on the autonomous weapons article. I think you raised some interesting points on an issue that is surely a matter of debate in many different arenas. From an ethical standpoint, I think it should not matter who kills whom; if we decide that weapons are available to everybody, we should stick with that. On the other hand, I am a believer in not legalizing weapons at all, but that is a different and more complicated argument.

    At the same time, as many already pointed out, international law is not an effective instrument on the national level and policymakers would not necessarily comply with it. I also believe that public opinion will be extremely important in determining the outcome of a decision on a national level, but would not necessarily have the same effect on an international level.

    The effects on consumer technology could be extremely dangerous, especially if these technologies fall into the hands of non-state actors; the more advanced extremist groups become in technology and science, the harder it will be to defend ourselves at the local level. Therefore, I believe it is important for policymakers to limit the production of high-level technology as much as possible.

  12. Trevor, thanks for your comments.

    One of the most interesting parts of this discussion, I think, lies in investigating further the relationship between the ethics and practicality of autonomous weapons. I think one of the reasons why the ethics portion of autonomous weapons is such a difficult topic is that the ethics of warfare are still very much muddled. With each major war that has happened in the world, new scales of human atrocity and cruelty have been created. Even in the hope of ending one cruelty, war may very easily give rise to another; if we ourselves have yet to fully define “ethical” warfare, how can we define it for an autonomous weapon?

    This leads into the practicality of delineating the responsibilities and decisions for robots. Not only are definitions of ethics in warfare constantly changing; one must also consider that getting a machine to learn and understand a topic in the same way that humans do is a vastly different problem. Even if a computer has greater computing power than the human brain when considering a problem, humans are still responsible for defining the dimensions along which an AI must learn and decide. How do you teach an AI to choose between 1 life and 100? On a simpler level, how do you make sure that the AI will be programmed to achieve exactly the goal you intend? Bostrom wrote in his article “Get Ready for the Big Idea” that one of the most difficult parts of programming superintelligence is making sure its goals align with ours; how can we ensure that when we aren’t even exactly sure of the goal of warfare?

    In my opinion, the development of autonomous weapons is inevitable, but a hidden benefit I see in their development is that it will force policymakers and scientists to consider the weight of one human life when programming the decisions an AI weapon will have to make.

  13. I find the discussion of ethics here to be rather interesting. Morality is a human construct: we do not hold animals to the same standards to which we hold ourselves. We are the only ones who judge actions or events to be either good or bad. When it comes to killing, death can only be considered immoral if it is caused by a human hand: the bird that accidentally kills its chick or the lion that intentionally clobbers its cub does not itself have any moral agency, and the killings it perpetrates are only good or bad when considered and assessed by humans.

    We put a great deal of stock in the importance and morality of human decision-making, and an especial importance is stressed when dealing with the decision to kill. We value our own lives highly, and it would thus make sense for us to inherently stress the ethics surrounding the choice to murder another human being.

    “Opponents of the idea [that humanity should not allow machines to take human life] believe that humans killing other humans is no more ethical than robots killing humans,” you write, “and that the substantive question in this issue relates to matters of practicality.” I would argue that, when discussing the ethics of ending a human life, the means by which the human dies are just as important as the reason for the death. A soldier whose job it is to kill the enemy might achieve the same end as a machine told to do the same; indeed, the soldier may do a worse job than the machine would, the soldier might make mistakes, and the soldier may kill far more or far fewer than he or she should. But the issue of the killing is this: when that soldier pulls the trigger, he or she is acting on a decision of ethics, and he or she is forced to address the morality of the kill. A machine is not given the same ethical dilemma. The end result may be the same, but it is that dilemma that we humans have been taught to appreciate, to cultivate, and to rely on; it is a very human dilemma of moral intention that does not exist in a killing machine. From a philosophical viewpoint, that dilemma seems almost as important as the kill itself.

  14. Thanks for this post Trevor! I think that this idea of “killer robots” is extremely interesting and could have serious consequences for our national security and for the security of the globe.

    I find the ethics discussion extremely intriguing because it’s almost a new field. The question arises whether we can apply ethics, especially notions of human ethics, to robots. A part of me thinks that it is not ethical to have a robot kill another human, but another part of me thinks the act could be ethical, more effective, and in some ways more humane. Often the idea of combat is that these acts can only take place on the ground and that humans must be involved in direct, in-the-moment decision making; literally, someone must be able to fight and negotiate for their lives. In some ways this seems more ethical to me than, say, a drone strike, which is pretty much a surprise attack. On the other hand, I think these robots can sometimes save human lives because soldiers from one side do not have to engage in physical combat. I still wonder, though, whether that is more humane if the other side does not also have these capabilities. I also wonder what happens when these attacks go awry. I’ve seen some pictures of people who were not the intended targets but who were nonetheless killed by drones. I don’t think this is humane at all, and I don’t want to write these events off as just a feature of war. After the guest talk last week, I also think we have to ask ourselves what happens in the event that we can no longer control our robots. Are they connected to the internet? Can they be hacked?
