To Automate or Not to Automate …

The world is increasingly experiencing applications of Artificial Intelligence in new and surprising fields. Notably, the use of AI in weapon systems is currently being researched and developed, triggering a polarizing debate. On the one hand, Evan Ackerman argues in favor of autonomous weapons; on the other, Stuart Russell et al. argue for banning them.

Ackerman begins by presenting an open letter from the 2015 International Joint Conference on Artificial Intelligence, which details potential disadvantages of autonomous weapons. The declaration acknowledges that autonomous weapons are relatively cheap, easy to create, and will soon be in production worldwide, but proceeds to offer criticism nonetheless. Cautioning against using AI in autonomous weapons, experts warn that a “global AI arms race” is impending and dangerous. Ackerman, however, is unconvinced that banning autonomous weapons will successfully deter nefarious actors. Furthermore, he compares the skills, judgment, and ethics of “armed autonomous humans” and “armed autonomous robots,” a juxtaposition in which the machine edges out the man. Ultimately, Ackerman finds more positive attributes in robots than he does in humans: robots are not prone to the vagaries of emotion or fallibility and ought, therefore, to operate more safely and with fewer mistakes. In the event a robot does commit an error, machine learning can ensure that no robot in a hypothetical fleet ever makes that error again. Ackerman leaves readers wondering whether these autonomous robots can be as ethical as humans, if not more so.

Russell et al. respond to Ackerman by advocating for an international treaty to limit access to autonomous weapons, avoid a potential AI arms race, and prevent mass production of autonomous weapons. Although any ban, including one on autonomous weapon production, would be challenging to enforce, Russell et al. maintain that it would not be “easier to enforce that enemy autonomous weapons are 100 percent ethical.” Similarly, the authors conclude, “One cannot consistently claim that the well-trained soldiers of civilized nations are so bad at following the rules of war that robots can do better, while at the same time claiming that rogue nations, dictators, and terrorist groups are so good at following the rules of war that they will never choose to deploy robots in ways that violate these rules.” Though people are not perfect, the proliferation of autonomous weapons, as with any high-powered weaponry, might present a formidable challenge to peace.

Autonomous weapons, if deployed, would certainly transform the landscape of warfare. Perhaps a system could be developed that incorporates the counsel and recommendations of AI while maintaining human oversight. Decision makers could then draw on the analytical power of modern technology while retaining final judgment informed by experience and context. While the benefits may include fewer casualties and war fatalities, might world leaders be more inclined to approach conflicts with war instead of diplomacy? Are there other pros and/or cons to the autonomous weapons debate that were not identified in the readings? — Marion

13 thoughts on “To Automate or Not to Automate …”

  1. The topic of autonomous weapons brings to mind a mini-movie that gained attention last year, “Slaughterbots” (https://www.youtube.com/watch?v=9CO6M2HsoIA). While this not-entirely-unbiased video does demonstrate technically feasible scenarios, it relies on the assumption that autonomous weapon technology has been perfectly implemented and that the human factor entirely determines against whom the technology is leveraged. Present-day AI, however, is much less reliable due to technological constraints, an important factor in the autonomous weapons debate that was not given much focus in the readings.

    Currently, AI relies largely on data to hone its decision-making process. Essentially, it takes as input data for which the results are already known, compares its outputs to the desired outputs, and adjusts its internal weights until its error falls below a threshold that a human operator decides is appropriate. For example, the AI behind an autonomous weapons system could conceivably be “trained” by feeding it data such as video feeds from combat situations and asking it to determine which humans are hostile. However, it is difficult to determine which factors the AI used to make its judgment, which creates risks down the road. A pressing concern is “bias” in the data; for example, training an AI on data taken from current combat zones could lead to it behaving unexpectedly in novel battlefield situations that its training data did not cover. Researchers have also shown that it is possible to trick AI into mistaking objects for other objects; opponents on a battlefield could use these techniques to mask the fact that they are armed, for example. (A minimal sketch of this kind of training loop appears below.)
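    Below is a purely illustrative Python version of that loop. The “features,” the hostile/non-hostile labels, and the error threshold are all invented for illustration; only the structure (compare outputs to known answers, adjust internal weights, stop when the error falls below a human-chosen threshold) mirrors the description above.

    ```python
    import numpy as np

    # Purely illustrative: random "feature vectors" stand in for frames from a
    # video feed, with labels that are already known (1 = hostile, 0 = not).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))                  # 200 examples, 5 features each
    true_w = np.array([1.5, -2.0, 0.5, 0.0, 1.0])  # hidden rule that generates the labels
    y = (X @ true_w > 0).astype(float)

    # Model: logistic regression, the simplest "adjust internal weights" learner.
    w = np.zeros(5)
    learning_rate = 0.1
    error_threshold = 0.05   # chosen by a human operator, as noted above

    for step in range(10_000):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))   # model's current predictions
        error = np.mean(np.abs(p - y))       # how far predictions are from the labels
        if error < error_threshold:          # stop once the error is "good enough"
            break
        grad = X.T @ (p - y) / len(y)        # direction in which to adjust the weights
        w -= learning_rate * grad            # adjust the internal weights

    print(f"stopped after {step} steps, mean error {error:.3f}")
    print("learned weights:", np.round(w, 2))
    ```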

    While these challenges arise from technological problems that will likely be overcome in the future, it is important that current policies regarding autonomous weapons take into account the fact that present-day AI is far too imperfect to be deployed into chaotic battlefield situations. With that in mind, however, it is also prudent to begin planning for the day autonomous weapons overcome these constraints, since the decision to deploy them will then rest on factors that are no longer technological.

  2. I was struck by Ackerman’s insistence on the claim that technology is not inherently good or bad and that its effects are ultimately a consequence of what users choose to do with it. Ackerman repeatedly affirms that technology is inherently value-neutral and that the manner in which it develops and is used is wholly dependent on the human actors that make use of it. Yet I think it is naïve to argue that technology is distinct from the human actors that invent it. Technology is imbued with human values.

    One of the largest flaws in Ackerman’s argument is his insistence that robots will in the future be “as good (or better) at identifying hostile enemy combatants as humans.” Ackerman argues this is because robots can eventually be programmed to follow the rules of engagement and to be extra cautious before engaging an enemy. This raises the question of bias being programmed into the robots.

    AI technology currently used in non-combat situations is known to have acquired bias, usually racial or gender bias. These biases are most commonly seen in predictive policing, mortgage lending, and job applications. If such biases are programmed into the robots, they might diminish the robots’ ability to effectively identify enemy combatants, a problem that will only be compounded if AI technology does not achieve the capabilities Ackerman envisions. So I think an additional con for the development of autonomous weapons is the potential for discriminatory use.

  3. I think that for the foreseeable future, autonomous robots will not make for effective soldiers. Target acquisition, particularly in the field of counter-terrorism, where combatants and civilians are hard enough for humans to distinguish, is a long way off for robots. As long as most combat applications involve complicated scenarios such as insurgencies, robots will lack the ability to tell friend from foe. Even if they can be adequately taught to identify targets, for the near future they could likely only be used in situations where civilian casualties are either impossible or not a concern. The latter scenario in particular is very concerning because it implies that current rules governing conduct during wartime are no longer being followed.
    However, after a certain point in development, autonomous weapons might have major advantages over manned ones. I would compare autonomous weapons now to guns in the period before the 1700s. At the time, bows and arrows were far superior, but eventually guns so far outstripped them in effectiveness and ease of use that countries where firearms were common military tools regularly and easily defeated foes who lacked them. I think countries should seriously consider what might happen in 50 years if they fail to keep pace with foreign development of autonomous weapons.
    The dilemma is further compounded by the potential risks involved. Besides normal developmental risks and problems of target identification, autonomous weapons risk being hacked and turned on their owners. On the other hand, if all warfare utilized drones, minor conflicts between major powers might become bloodless. If the only military targets in localized conflicts are autonomous systems, it is possible that nobody dies in wars, as long as they do not escalate to the point where civilians are targets.
    Developing autonomous weapons seems to present both challenges and potential rewards, but I don’t feel qualified to take a firm position on whether or not the United States should develop them.

  4. Autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms. Although many oppose the adoption of autonomous weapons on moral and legal grounds, I believe there are great benefits associated with these weapons. Autonomous weapons are strategically advantageous and offer speed and precision unmatched by any human, while reducing the number of soldiers and pilots exposed to potential death and the cost of fielding them. There are many examples of the benefits of such autonomous systems, such as the F-16 autopilot that prevents crashes, the robots used by Explosive Ordnance Disposal teams to dismantle bombs, and self-driving trucks that deliver supplies. The US Air Force expects to deploy robots with fully autonomous capabilities between 2025 and 2047. I believe that stopping the progress of autonomous weapons is not feasible; instead, we should remain in control of our autonomous weapons and AI in general, meaning that their actions are intentional and according to our plans. Human values, such as discriminating between targets and civilians, must be incorporated into the design of autonomous weapons.

  5. Both Ackerman and Russell et al. make valid and interesting points about the pros and cons of autonomous weapons, a type of weapon that combines much of what we have discussed so far in this class. While the clip we watched in lecture today was a hypothetical scenario, it was an important reminder of the potential destructiveness of autonomous weapons, perhaps even on par, casualty-wise, with nuclear weapons. The accessibility of autonomous weapons, however, is more like that of cyber weapons in that, as pointed out in the open letter, autonomous weapons “require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce.” The open letter also points out that these weapons would surely appear on the black market and could very easily fall into the wrong hands.

    Ackerman’s arguments lead me to question exactly how qualified he is to make some of the claims in his article. (For the record, I did some digging, and found that he has a B.A. in Astrogeology and has been a writer for IEEE’s robotics blog since 2007. Keep in mind that Stuart Russell is a professor of computer science and director of the Center for Intelligent Systems at UC Berkeley). For example, his argument that we can’t stop autonomous weapons from being developed, so we may as well make them ethical, seems to lack grounding in actual AI technology. Russell et al.’s response makes it clear that making ethical robots a) wouldn’t be as straightforward as Ackerman makes it out to be and b) would present a whole host of enforcement issues.

    I think a key issue that Ackerman neglects to address is that autonomous weapons, like the other kinds of weapons we have studied in this course, can easily fall into the wrong hands, and no matter how well-programmed the weapons are to “do the right thing,” the people in control of them are operating on a different set of morals.

    Another piece of this debate that struck me was the idea of autonomous weapons fighting our wars for us. Ackerman celebrates this idea, pointing to the potential to reduce casualties and ensure accuracy in taking out targets (and sparing civilians), but how far does this thinking go? What does a battlefield increasingly populated by autonomous weapons look like? How does this affect world leaders’ thought processes when they consider going to war? Many questions like these are raised by advances in autonomous weapon technology, and it seems that Ackerman brushes past some of the darker ones in favor of the more ‘exciting’ opportunities that autonomous weapons provide.

  6. Personally, I am quite persuaded by the argument that instead of banning the development of autonomous weapons, we should work to develop “rules of engagement” that those autonomous weapons are required to follow. Soldiers, even in disciplined, modern armies that make a good-faith effort to minimize civilian casualties and penalize bad apples, have a pretty horrible track record of discriminating between legitimate and illegitimate targets. Much of this is attributable to human emotion and the inability to process information rationally in times of great stress. Emotionless machines could almost certainly do this much better than human beings.

    I think the most convincing argument against the development of autonomous weapons is that they would make resorting to violence easier, but I am not persuaded by it, for two reasons. First, historical data on deaths from conflicts over the past 1,000 years or so does not appear to bear out the idea that more advanced weapons systems (which by and large are ones that create greater separation between the killer and the killed) lead to increased killing.

    Second, and this is where the bulk of my argument lies, I believe there is an argument to be made that autonomous weapons will actually decrease the desirability of war and thus decrease the number of people killed. Martin van Creveld, a leading military historian, has postulated that the reason humans fight wars is that war is the ultimate stimulation. On a personal, visceral level, war is putting your whole self on the line in a test of prowess. Van Creveld argues this is why fighting guerrillas quickly becomes demoralizing for a superior force: there is no visceral satisfaction in vanquishing an inferior foe, especially when doing so does not require putting one’s self on the line; that intense stimulation is missing. Additionally, there is little upside for the individual psyche: defeating your foe is not impressive because of the power mismatch, and losing is deeply humiliating for the same reason. Perhaps autonomous weapons will make war less likely because they will take the stimulation out of it. Sending one’s robots out to fight someone else’s robots holds little romantic appeal, and thus there is a chance that human beings will resort to means other than violence to solve their conflicts. I’m not sure if Van Creveld is right, but I think it is worth thinking about.

  7. This was mentioned in the Russell et al. reading, but I really think it bears repeating: it’s imperative to avoid conflating artificial intelligence with autonomous weapons. As intuitive as that sounds, it’s easy to fall into the trap of using the many benefits of AI generally to argue for pursuing autonomous weaponry specifically.

    That is to say, applying AI to all manner of defense systems is most often common ground between those in favor of and those against autonomous weaponry. Lauding the time-saving benefits of autonomous kinetic defense and identification systems, the massive usefulness of machine learning in intelligence data processing, and so on, is largely independent of the decision to arm autonomous platforms. The crux of the debate, and in fact the focus of the arms-limitation legislation currently under debate at the UN, is whether to authorize the development and deployment of lethal autonomous systems.

    This may sound pedantic, but it’s an important thing to keep in mind so we don’t risk cluttering our debates with non-arguments. Even more significantly, it avoids the trap of an all-or-nothing framing, which is fairly common.

    I believe it also mitigates much of the concern about the U.S. “falling behind” other nations should we decide not to develop autonomous weaponry. Pushing forward the forefront of autonomous defense systems, autonomous reconnaissance and intelligence-gathering systems, and so on is still permissible under the aforementioned arms limitations, since it falls under a category distinct from the umbrella of autonomous weaponry. One could argue that this significantly decreases the autonomous weaponry “breakout time” of a given nation, but that is a slightly more nuanced argument.

  8. Ackerman’s argument for the development of ethical “rules of engagement” for autonomous weapons seems to be much more reasonable than a ban on autonomous weapons for a number of reasons; the most important being the constant improvement of AI for non-military purposes. While it is hopeful that autonomous weapons can simply be banned, this may not be realistic given that the barrier for entry, the cost that a country must pay to have autonomous weapons, will continue to decrease with advancements in AI and technology.

    The Russell et al. reading calls attention to the fact that bans have been placed on chemical and other weapons, which might seem to discredit this argument, but the difference lies in the trajectories of research in these fields. AI and machine learning are currently advancing toward a stage at which they can be easily repurposed for military uses: better facial recognition, autonomous drones, and self-driving cars can all be turned to military ends. As these facets of AI continue to advance, the danger of their being repurposed for military use increases. Countries that follow the “rules of engagement” developed for autonomous weapons will continue to advance both their autonomous weapons and their defenses against them. This can create a defense against “rogue” nations that do not follow the “rules of engagement” and ultimately result in greater protection against the possible disasters associated with autonomous weapons.

  9. One interesting consideration that I didn’t see in these readings was the question of responsibility in the event that an autonomous weapon does decide to target a civilian, or otherwise violates international humanitarian law. Ackerman insists that an autonomous weapon would be better at navigating these situations than a human soldier, but even if we accept that to be true, mistakes will still be made, and when that happens, who shoulders the responsibility? The AI cannot be held responsible, but neither, arguably, can its creator or the humans monitoring it. Potentially immoral or illegal lethal actions could be taken without any active decision or thought, which I would argue allows for detachment from military action and could make these decisions easier than they should be, given their consequences.

  10. The debate between Ackerman and Russell et al. was a fascinating one. Both sides lay out cogent arguments for and against a ban on autonomous weapons that had me at least partly convinced while reading. However, as Russell et al. point out, Ackerman’s argument rests on a faulty premise and then draws an even more questionable conclusion from it. Ackerman thinks that because there is so much “commercial value” in things like quadcopters, the technology will develop to the point where anyone can theoretically get their hands on it, and he concludes that instead of shying away from the harmful uses of the technology, we should be working to make the technology ethical. I found an apt comparison in Ali Nouri’s DNA synthesis screening requirement, where the software within synthesis systems can be set not to create certain strands of DNA. Given Nouri’s mechanism to counteract the dangerous potential of biowarfare, we can take two paths from Ackerman’s argument: either he is overstating how easy and inevitable it is to get your hands on one of these weapons, in which case they can theoretically be controlled in a manner similar to preventing the proliferation of nuclear weapons, or it really is far too easy, in which case I think it is unreasonable to draw his primary conclusion, that these weapons should simply be made to follow ethical rules.

    There is no easy way to prevent evil humans from doing evil things, but it is safe to assume that if dangerous, rogue organizations are given dangerous new forms of weapons, they will unleash destructive power to gain small pieces of land or control over certain areas. As Russell et al. note, we cannot expect terrorist organizations or rogue states to follow the rules of the game when they are cornered; a clear modern-day example is Bashar al-Assad’s use of chemical weapons against his own citizens. Thus, efforts should be made to further complicate the AI technology driving autonomous weapon creation, raising the barriers a dangerous organization must confront before it can even think about acquiring one.

    I also think Ackerman conflates the 2,500 AI scientists’ fear of the dangers of autonomous weapons with a fear of AI itself and its potential positive uses for society. Yes, technology can obviously be used for peaceful and sinister ends, but these AI scientists are not shying away from AI’s use; they use it in their everyday lives (a fact that, as Ella pointed out, Ackerman cannot claim for himself). While Ackerman makes a compelling case that we are looking in the wrong direction, ultimately his arguments do not face up to the reality of the situation.

  11. Given the recent attention on autonomous weapons, it is natural to have such a heated and detailed discussion of the normative value of their introduction to modern warfare. Marion’s blog post does a great job of laying out the arguments of Russell and Ackerman. I would like to focus on a specific point that Russell et al. bring up: it is one thing to ask whether autonomous weapons “should” be prohibited or regulated, but I think an equally important question is whether they “can” be.

    As included in Marion’s blogpost, Russell et. al.’s quote “One cannot consistently claim that the well-trained soldiers of civilized nations are so bad at following the rules of war that robots can do better, while at the same time claiming that rogue nations, dictators, and terrorist groups are so good at following the rules of war that they will never choose to deploy robots in ways that violate these rules” reflect their realism in regard to the enforcement of weapons treaties. The way I sometimes like to think about this question is via a graph (I will try my best to describe it with words). On the x-axis is the aggregate benefit of acquiring the weapon, whether it be its destructive capabilities, discreteness, facility of delivery, etc. The y-axis represents how easy it is to develop/acquire these weapons without detection. I think it is fair to say that weapons that are located on the northwest of the chart (weapons that have limited benefits but are easily detected) will be relatively easier to enforce and regulate via international treaties and norms, while those on the southeast (weapons that are highly effective but are difficult to detect) will be nearly impossible to regulated. For example, I believe on this chart, nuclear weapons will be located on the northeast region, given its extreme effectiveness and its high technological barriers in its development. Biological weapons can be said to located southwest, given that while its effects may not be as extreme as nuclear weapons, it is also true that the detection of its development is significantly more challenging. I am wondering where to place autonomous weapons on this chart, given that many aspects of this type of warfare remain unclear (it has endless possibilities, if I may say), I would be curious to know where others will be place autonomous weapons on this chart, especially in relation to chemical, biological, nuclear and conventional weapons.

    Finally, more as an afterthought, I would like to pose a question: will the extreme development of autonomous weapons take the world to a stage where wars are fought entirely by these sophisticated machines, without costing human lives at all?

  12. AI technology has not yet advanced to the point at which AI weapons would be effective in combat. Currently, AI technology is commonly used in commuting: ride-sharing apps and navigation systems use AI to determine ride fares or the quickest routes. Obviously, those systems pale in comparison to the complexity of targeting a terrorist threat across the globe amid a civilian population. However, I do believe it is imperative to discuss the ethical qualms about AI weapons before their development and mass production. Settling the ethical dilemma now would hopefully encourage the safe and morally acceptable integration of AI technology into military defense systems.

    Personally, I see Akerman’s claim of the neutrality of technology as naïve. Akerman argues that autonomous weapons are neither good nor bad, but rather it is the users that have either good or evil intentions. I disagree with Akerman’s point on two accounts. 1) Technology is influenced by the values of its inventors. 2) Lethal autonomous weapons reduce the physical risk of the human user, which could have dramatic consequences on global attitudes on military conflict.

    A recent Bloomberg article (https://www.bloomberg.com/news/articles/2017-12-04/researchers-combat-gender-and-racial-bias-in-artificial-intelligence) notes how AI has habitually amplified stereotypes, in things like Google’s photo software or in deciding whether a party is deserving of a loan. These sorts of biases, when applied to autonomous weapons, create the dangerous potential for ineffective combatant identification or the unintentional targeting of civilians.

    Furthermore, autonomous weapons decrease the risk of physical harm to the user. While Ackerman envisions future warfare in which humans are safe and are not targets, I imagine this would not be the case. Ackerman points to the emotional volatility of humans, which can result in decisions with disastrous human consequences. But autonomous weapons increase the potential consequences of human evil, in which case they act as a new form of WMD. The issues of scaling are frightening. Autonomous weapons encourage action in riskier situations, in which human soldiers would not be deployed. This encourages warfare…

  13. Tatiana, I think your discussion of “bias” being programmed into the robots was extremely interesting. However, when bias is “programmed” into AI technology, wouldn’t the system keep updating as it gets input from its environment and actions? My understanding of artificial intelligence was that it isn’t static, and that bias would not be as much of an issue once the AI gets information and system updates from what it sees, thereby not “diminishing their ability to effectively identify enemy combatants.” On the other hand, its destructiveness could also worsen, as the AI reinforces what it has learned about eliminating specific groups of people. It is in this way that I think robots can be dangerous: we don’t know how they will act as their programs continue to change. What sort of regulation can be put in place for this type of unpredictability?

    As for the point that Sam brings up, that we should work on “rules of engagement,” I generally agree. I believe that even if autonomous weapons are banned, their development will materialize in the form of private entities filling the gap, and that violence could easily be incited with these weapons. My biggest concern with your (his) point, however, is that these AIs will not entirely replace the roles of soldiers. Soldiers and AIs, in my belief, will occupy very different realms of warfare: autonomous weapons will be deployed for larger-scale destruction, such as nuclear war, while soldiers continue to fight in “traditional” battles. If autonomous weapons and humans are entirely disparate ‘pieces’ when it comes to war, why would such weapons decrease the romantic appeal of war? If anything, shouldn’t their huge destructive capabilities make them even more interesting to those who want to destroy others? In general, combined with my previously expressed concern about regulating this unpredictability, I am more worried about the future of autonomous weapons than excited about their more “precise” use (compared to soldiers).
