When Disney Meets Dilemma: The Ethics of Self-Driving Cars

As the nascent autonomous vehicle (AV) industry grows, AV stakeholders are already contemplating how AVs should handle ethical dilemmas (e.g. “trolley problems” wherein one must decide the lesser of two evils). AV makers not only need to successfully program cars to “make” ethical decisions; they also need to CHOOSE the ethical rules by which their AVs make such decisions. As Gus Lubin suggests, the latter task entails making judgments on questions such as: “If a person falls onto the road in front of a fast-moving AV,” should the AV “swerve into a traffic barrier, potentially killing the passenger, or go straight, potentially killing the pedestrian?”

Companies have already begun answering such questions, revealing the ethical rules upon which their AVs might operate. For example, Google X founder Sebastian Thrun disclosed that Google X's cars would be programmed to hit the smaller of two objects (if they had to hit one or the other). As Lubin explained, an algorithm to hit smaller objects is already “an ethical decision…a choice to protect the passengers by minimizing their crash damage.” (It's presumably safer for the passengers to crash into the smaller of two objects, if a crash is unavoidable.)
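To make concrete what “choosing an ethical rule” might look like in software, here is a minimal, hypothetical Python sketch of a smaller-object rule in the spirit of Thrun's description. Nothing here is Google's actual code; the function, fields, and size estimates are all invented for illustration:

    def choose_collision_target(obstacles):
        # Pick the obstacle with the smallest estimated size, on the
        # theory that hitting a smaller object minimizes crash damage
        # to the passengers.
        return min(obstacles, key=lambda ob: ob["estimated_size_m"])

    obstacles = [
        {"label": "traffic_barrier", "estimated_size_m": 3.0},
        {"label": "traffic_cone", "estimated_size_m": 0.5},
    ]
    print(choose_collision_target(obstacles)["label"])  # traffic_cone

Even this toy rule embeds a value judgment: it optimizes for passenger safety alone and is indifferent to what, or who, the smaller object might be.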

The prospect of companies algorithmically programming AVs to choose one person's death over another's seemed problematic to me at first. Playing MIT's online Moral Machine game, I had to decide whether a driverless car should kill two female athletes and doctors (“stay”) or two male ones (“swerve”). Making such decisions was already uncomfortable, because doing so required judging whether one set of lives was more valuable than another. I felt all the more troubled as I imagined companies programming these value judgments into real-life AVs that could actually kill. Perhaps Consumer Watchdog's Wayne Simpson felt similarly uneasy when he wrote: “The public has a right to know” whether robot cars are programmed to prioritize “the life of the passenger, the driver, or the pedestrian… If these questions are not answered in full light of day … corporations will program these cars to limit their own liability, not to conform with social mores, ethical customs, or the rule of law.”

Yet human drivers who confront trolley problems must make ethical choices about whom to kill or save as well, and we clearly aren't viscerally hesitant about letting humans drive. (I'd love comments explaining why we react differently to human vs. robot drivers.) In fact, compared to robots, human drivers facing trolley problems might not accurately decide whom to kill or save based on a set of ethical principles; they might not operate under ethical principles at all. Rather, humans might panic and freeze, or act only on self-preservation instincts. Moreover, stepping aside from “trolley dilemma” road situations, as Lubin writes, the “most ethical decision may be the one that gets the most AVs on the road,” given that AVs are on average safer than human drivers. As the WSJ pointed out in August 2016 (in an article Lubin cited), driverless cars could eliminate up to 90% of the more than 30,000 annual U.S. road deaths, the great majority of which stem from human error.

In light of these concerns and prospects, should the government allow or encourage companies to develop (and sell) AVs programmed to operate upon a set of ethical rules? Why or why not? If yes, who should decide what AVs’ ethical rules are and how? — Eric

The Limitations of TPNW

The Treaty on the Prohibition of Nuclear Weapons (TPNW), adopted by the UN General Assembly in July 2017, established an international treaty framework for ongoing efforts to abolish nuclear weapons. While the NPT obliges nuclear-weapons states to oppose proliferation and “pursue nuclear disarmament aimed at the ultimate elimination of their nuclear arsenals,” the TPNW goes further and commits its signatories never to possess or threaten the use of nuclear weapons, among other provisions. For nuclear weapons states to sign, they must irreversibly eliminate their weapons programs through a verified disarmament plan approved by the other state parties.

Both Acheson and Mian argue in support of the TPNW. Acheson describes the treaty as establishing “a legal ban on nuclear weapons” in defiance of the great powers. In “[changing] the politics and economics related to nuclear weapons,” the TPNW gave the rest of the world the ability to assert its opposition to these weapons without relying on unsuccessful disarmament efforts by nuclear weapons states. Further, the efforts of the treaty's proponents have challenged the legitimacy of nuclear deterrence and of the possession of nuclear weapons by any state. Mian focuses on the treaty's specific provisions and its position in international law. The treaty represents a concerted effort to universalize the view that “nuclear weapons are in fundamental conflict with basic humanitarian sensibilities and international law.” Additionally, Mian addresses the question of verification and how to obtain the support of nuclear weapons states. One of the treaty's articles provides for the state parties to build a framework for disarmament and verification, creating flexibility and more certainty for nuclear weapons states should they choose to consider giving up their weapons. Further, the state parties' obligation to advance the aims of the treaty with non-party states will increase pressure on nuclear weapons states at both the intergovernmental and civil society levels.

There are several notable issues with both the TPNW and Acheson and Mian's arguments in support of it. While both authors acknowledge the non-participation of nuclear weapons states in this process, neither gives sufficient attention to this critical limitation of the movement. In theory, the treaty bolsters the ability of states to “name and shame” nuclear powers, but in practice the strong belief in nuclear deterrence and national self-interest in each state make it unlikely that this treaty or any related advocacy efforts will move the needle on nuclear abolition. Even more critically, the treaty actively harms the ability of the state parties to gain the support of nuclear weapons states for disarmament. The TPNW does not allow treaty reservations, and amendments require the support of more than two-thirds of the state parties. Although Acheson might argue that these provisions keep power over the process in the hands of the non-nuclear states that initially sign the treaty, they make the TPNW unworkable for realistic arms control negotiations to abolish nuclear weapons. There is little flexibility for nuclear weapons states to use this treaty as a framework for multilateral negotiations, making it a statement of principles rather than a pragmatic way to facilitate the actual abolition of nuclear weapons.

There are a few questions on this issue I want to raise for discussion: How effective is an agreement like TPNW without the support of any nuclear weapons states? Have we reached the limits of arms control, requiring more radical measures to reduce the risk of nuclear war? What guarantees and verification regimes would be required for any major nuclear weapons state to consider unilateral or multilateral disarmament in accordance with the goals of TPNW? — Connor

To Automate or Not to Automate …

The world is increasingly seeing applications of artificial intelligence in new and surprising fields. Notably, the use of AI in weapon systems is being actively researched and developed, triggering a polarizing debate. On one hand, Evan Ackerman argues in favor of autonomous weapons; on the other, Stuart Russell et al. support banning them.

Ackerman begins by presenting an open letter from the 2015 International Joint Conference on Artificial Intelligence, which details potential disadvantages of autonomous weapons. The letter acknowledges that autonomous weapons are relatively cheap and easy to create and will soon be in production worldwide, but proceeds to offer criticism nonetheless. Cautioning against using AI in autonomous weapons, its expert signatories warn that a “global AI arms race” is impending and dangerous. Ackerman, however, is unconvinced that banning autonomous weapons will successfully deter nefarious actors. Furthermore, he compares the skills, judgment, and ethics of “armed autonomous humans” and “armed autonomous robots,” a juxtaposition in which the machine edges the man. Ultimately, Ackerman finds more positive attributes in robots than in humans: robots are not prone to the vagaries of emotion or fallibility and ought therefore to operate more safely and with fewer mistakes. In the event a robot does commit an error, machine learning can ensure that each robot in a hypothetical fleet never makes that error again. Ackerman leaves readers wondering whether these autonomous robots can be as ethical as humans, if not more so.

Russell et al. respond to Ackerman by advocating for an international treaty to limit access to autonomous weapons, avoid a potential AI arms race, and prevent mass production of autonomous weapons. Although any ban, including one on autonomous weapon production, would be challenging to enforce, Russell et al. maintain that it would not be “easier to enforce that enemy autonomous weapons are 100 percent ethical.” Similarly, the authors conclude, “One cannot consistently claim that the well-trained soldiers of civilized nations are so bad at following the rules of war that robots can do better, while at the same time claiming that rogue nations, dictators, and terrorist groups are so good at following the rules of war that they will never choose to deploy robots in ways that violate these rules.” Though people are not perfect, the proliferation of autonomous weapons, as with any high-powered weaponry, might present a formidable challenge to peace.

Autonomous weapons, if deployed, would certainly transform the landscape of warfare. Perhaps a system could be developed that incorporates the counsel and recommendations of AI while maintaining human oversight. Decision makers could thus benefit from the analytical power of modern technology while retaining final judgment informed by experience and context. And while the benefits may include fewer casualties and war fatalities, might world leaders then be more inclined to approach conflicts with war instead of diplomacy? Are there other pros and/or cons to the autonomous weapons debate that were not identified in the readings? — Marion

SuperDemonic Machines: Philosophical Exercise or Existential Threat?

“As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence.” So writes Oxford philosopher Nick Bostrom in his book Superintelligence: Paths, Dangers, Strategies. In the first two chapters we read, Bostrom argues that HLMI (human-level machine intelligence), and subsequently “superintelligence,” may be quite near, offering a short history of AI, descriptions of existing technologies, and expert opinions.

Bostrom makes the case that just as AI superintelligence may seem impossible right now, so developments like agriculture seemed impossible to the hunter-gatherers of years past. That is an argument I've heard before, and one much less persuasive to me than a claim he makes earlier in the chapter and demonstrates throughout: that there is an ignorance of the timeline of advances. To show this, Bostrom works through the history of AI, moving through what he calls “seasons of hope and despair,” from the early demonstration systems in “microworlds” beginning in the 1950s to the neural networks and genetic algorithms that created excitement in the 1990s. Keenly aware of potential refutations, Bostrom notes what many have used to argue against him: every period of hope in AI has been followed by a period of despair, and the systems have often fallen short of our expectations. Human-level machine intelligence has been “postponed” as we encounter issues of uncertainty, exhaustive searches, and the like. Bostrom does not contradict these assertions, but he follows that comment with a section titled “State of the Art” in which he details what machines can already do, which he notes may seem unimpressive only because our definition of impressive inherently changes as advances continue around us. Remarkably, the expert surveys at the end of the chapter give HLMI a 90% chance of existing by 2100, a 50% chance by 2050, and a 10% chance by 2030. Are those estimates impressive to you?

Chapter Two, “Paths to Superintelligence,” is much more technical than the first and works through a list of conceivable technological paths to superintelligence, including AI, whole brain emulation, and biological cognition. These different possibilities, as Bostrom notes, increase the probability that “the destination [superintelligence] can be reached via at least one of them.” The book asserts that superintelligence is most likely to be achieved via the AI path, though it gives whole brain emulation a fair shot.

As someone unfamiliar with these technologies, I found these first two chapters rather convincing. It helped that Bostrom addressed the difficulties without necessarily making them seem insurmountable; then again, with limited technological knowledge, it's hard to tell whether Bostrom's “will be difficult” means “impossible.” The Geist article questions whether superintelligence is really “an existential threat to humanity.” Geist clearly thinks not. Though quite abrasive – “AI-enhanced technologies might still be extremely dangerous due to their potential for amplifying human stupidity” – he makes a series of good points regarding AI and some of Bostrom's proposed solutions (which are discussed later in the book). Geist notes that as people have begun to research AI, they have discovered fundamental limitations that, while not ruling out HLMI, make “superintelligence” extremely unlikely. He says that in discussions of AI we have often conflated inference with intelligence, citing the General Problem Solver as an example. He discredits Bostrom's idea of dealing with potential superintelligence by giving AI “friendly” goals and keeping it sympathetic to humans, noting that even if superintelligence were an issue, it is unlikely that Bostrom's approach to the “control problem” would work, because of goal mutation. Most convincing, however, is Geist's distinction between reasoning and other elements of human intelligence, which is what I found most absent from Bostrom's account. Specifically, Geist notes that “While [recent] technologies have demonstrated astonishing results in areas at which older techniques failed badly—such as machine vision—they have yet to demonstrate the same sort of reasoning as symbolic AI programs.”

Though I appreciated Bostrom's clarity in many instances, and don't mean to say that his definitions of terms such as ‘superintelligence’ (which he defines as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”) are incorrect, I do think that in our discussions of these issues – especially if we subscribe to Bostrom's rather catastrophic thesis – we must be careful to articulate what exactly it is we are scared of, in order both to estimate how likely that threat really is and to strategize a good way to deal with it. As Geist notes, “Nor does artificial intelligence need to be smarter than humans to threaten our survival—all it needs to do is make the technologies behind familiar 20th-century existential threats faster, cheaper, and more deadly.” Perhaps focusing on more pressing challenges, “…such as the Defense Advanced Research Projects Agency's submarine-hunting drones, which threaten to upend longstanding geostrategic assumptions in the near future,” is more helpful than worrying about these potentially demonic superintelligent machines (thoughts?). While I think a fair analysis of this issue requires more technical information, I'd be interested to hear your thoughts on some of these questions:

Do you have differing opinions regarding the possibility of HLMI and superintelligence, and what do you think of Geist’s point that we sometimes conflate “inference with intelligence”? How far away do you think HLMI is (if possible), and what is the ‘path’ you find most compelling? Supposing superintelligence is possible, do you see it as a threat to humanity, and if so, how serious of a threat? Is HLMI, or even AI, a threat in itself? Why or why not? Consider the first sentence of this post – is the comparison to the relationship between humans and gorillas a good one? Most importantly, if considered a threat, what do you propose we do to deal with it? Do you see a distinction between reasoning and other cognitive capabilities, and does this change what you think about the possibilities for HLMI/superintelligence? — Maria

Reagan’s Strategic Defense Initiative: Worth the Money or Too Little Too Late?

President Ronald Reagan's Address on Defense and National Security, delivered on March 23, 1983, proposes new ideas for U.S. security and defense programs with regard to nuclear weapons. Reagan argues that it is better to save lives than to avenge them, and he emphasizes the importance of continued defense spending. In many ways the address is a quintessential Reagan speech, reiterating the importance of deterrence and trying to sway public opinion with powerful rhetoric, but he goes further to propose new ideas and challenge previous policies. Reagan's proposal at the end of his speech of the Strategic Defense Initiative (SDI), also known as “Star Wars,” disrupts the previous strategy of mutually assured destruction. Ultimately, the address reinforces Reagan's actor persona, seeing that the SDI technology appears unrealistic and creates a false sense of security for the American people.

Reagan attempts to justify government spending on the weapons program with statements like “cuts mean cutting our commitment to allies” and “the United States does not start fights. We will never be an aggressor.” He argues that although he came to Washington “with the intention of lowering government spending,” this spending is absolutely necessary, and he even consulted specialists and other officials to justify the budget. It is clear Reagan is trying to convince the American people to support him, offering arguments that he is bringing “a new hope for our children” and going to “make America strong again” (an interesting foreshadowing of Trump's popular slogan). Reagan was perhaps worried about public opinion and proposed the Strategic Defense Initiative amid growing hysteria created by the filming and eventual release of the movie The Day After and growing fears of a nuclear attack by the Soviets.

Reagan's SDI proposal advocates the development of technology that could, in theory, be directed from satellites, airplanes, or land-based installations to shoot down missiles. He describes potentially using lasers, particle and projectile beams, and other new forms of technology. While the idea was great in theory, and Reagan acknowledged it would take time to develop such technologies, the program appears unrealistic and beyond the capabilities of scientists at the time. Furthermore, the program would have been incredibly expensive, and it may not have even been that effective. This speech reinforces Reagan's actor and storyteller role as president, and seems to be an attempt to help Americans recover from their fears.

Some questions I had after watching this speech, which I hope others can respond to, are as follows:

While Congress eventually decided to end the initiative and not pursue the program Reagan suggested, should it have continued trying to develop technologies actually able to shoot down missiles, or was that not worth the money or time? And is Reagan's devotion to military spending justified, or are the billions of dollars the government has subsequently poured into the defense program too much? — Adrienne

Misplaced Urgency in U.S. Missile Defense

The Grego reading points out a major logical inconsistency in the approach taken to missile defense by the Bush and Obama administrations, and reveals the serious costs of this faulty policy. Both presidents allowed the Missile Defense Agency (MDA) to research and implement Ground-based Midcourse Defense (GMD) systems without the usual levels of supervision or budgetary restraint. For example, rather than outside evaluations, the main body responsible for oversight of the program has been the MDA itself. Furthermore, the MDA has been allowed to use research and development funds for most of its expenses, including building the interceptors themselves; these funds are less restricted and supervised than the procurement funds such expenditures would usually come from. As a result of this loose, unrestricted approach, the MDA has followed a very different timeline than most development processes for new defense technology. Rather than first testing the defense systems and then implementing them, Grego and her co-authors point out that “nearly all of the interceptors of the GMD system were fielded before a single interceptor of their type had been successfully tested.”

The reasoning behind this approach is that, due to the nature of the threat (nuclear attacks from rogue states or non-state actors), time is of the essence and the usual processes for defense research and acquisition must therefore be superseded. However, this logic doesn't really hold. For these missile defense systems to be an urgent need, they must first be effective. To use a preposterous example, no president or military official would argue that the United States should construct huge mirrors to reflect missiles launched against the country: the effectiveness of such mirrors would be highly questionable, so putting them in place would do nothing for the country's current security. While this example is clearly absurd (mirrors would never be an effective missile defense system, while interceptors may be in the future), the general point is the same. As long as these interceptors aren't truly effective, they aren't urgent, as they won't actually help with present-day security. Urgency is a good argument for a large research budget, but it is no reason to eschew the testing and oversight usually required before implementation. In fact, this is especially true in a field like nuclear defense, where anything but complete success is essentially failure. If the new technology were, for instance, a new gun that wasn't yet as accurate as hoped, it could still be useful. In missile defense, however, a single failure to shoot down an incoming missile, resulting in a nuclear strike on the U.S., would be disastrous. Therefore, if anything, the testing requirements before implementation should be stricter for missile defense than for other technologies, not cast aside.
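Some back-of-the-envelope arithmetic shows why effectiveness must precede urgency. The numbers below are illustrative assumptions, not actual GMD test statistics:

    # If a single interceptor succeeds with probability p, and failures
    # were independent, a salvo of n interceptors would stop an incoming
    # missile with probability 1 - (1 - p)**n.
    p = 0.5  # assumed single-shot kill probability
    for n in (1, 2, 4):
        print(n, 1 - (1 - p) ** n)  # 0.5, 0.75, 0.9375

Even under the optimistic independence assumption, a four-interceptor salvo fails about 6% of the time, and untested common-mode flaws (exactly what rigorous testing is meant to find) make real failures correlated, so the true odds are worse. In a domain where one leaked warhead is catastrophic, that residual risk is the whole story.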

Beyond the obvious financial costs of this approach, namely spending taxpayer dollars on a program that has not been proven effective, Grego and her co-authors point out that the policy has serious broader costs as well. With a missile defense system in place, foreign policy figures in the United States may feel a false sense of security and act more aggressively in their dealings with nuclear countries, despite the fact that the system doesn't truly work. It may also lead other nuclear-armed states to build up and further develop their arsenals in response to what they see as a shift in the balance of power with regard to deterrence.

Of course, as the authors discuss, none of this means missile defense isn't worth pursuing at all. Given the extraordinary benefits of an entirely successful missile defense system (complete security from nuclear attack for the whole country), such a system is surely worth developing. However, implementing one with little supervision or budgetary restraint carries serious costs, both financial and in terms of its effects on policy, as it potentially leads to false confidence among our diplomats and military leaders as well as increased weapons development on the part of our foes.

Do you agree that the approaches of the Bush and Obama administrations, justified by ‘urgency’, are logically inconsistent? Do you think a better approach which began implementation only after proper testing could solve the concerns raised above, or do you believe they are inherent in developing a missile defense system no matter what course is taken? — Alan

Rational and Irrational Masculinity

Cohn's journal article makes two main claims about “technostrategic” language. First, many features of the language, such as its abstraction, serve to distance nuclear decision-makers from the human costs of nuclear attacks. Second, although nuclear decision-makers associate their “technostrategic” language with “cool-headed objectivity,” it contains sexual and masculine undertones and can hardly be called objective. I largely agree with Cohn's ideas about the existence of distancing mechanisms in the language and how they can lead to the devaluation of human lives, but I believe these distancing mechanisms have important attributes that cannot be easily dismissed.

President Trump's “my button is bigger than yours” Twitter feud with Kim Jong Un raises concerns about the potentially disastrous effects of masculine impulse. Impulses such as sexual domination could lead countries to take overly aggressive or confrontational stances, and the thirst for power could lead a country to build up far more arms than necessary. I would hardly call that rational behavior, and structural controls have to be put in place to prevent important decisions from being driven by such impulses. One possible control measure is the Markey-Lieu bill, which would prevent Trump, and his rampant masculine impulses, from employing nuclear weapons unless Congress has declared war and authorized their use.

However, I believe there are aspects of nuclear decision-making that demand the exhibition of masculine qualities like emotional detachment, rationality, and the ability to be unswayed by “soft” concerns like human costs. The distancing mechanisms in “technostrategic” language facilitate this by insulating nuclear decision-makers from the human costs of their actions, and are hence, in some ways, necessary. At the levels of government where the incredibly difficult decision of whether or not to initiate a nuclear strike is made, such mechanisms seem almost indispensable. Having to contemplate the thousands of lives that could be lost as a result of a decision would be so emotionally overwhelming that decision-makers could be paralyzed, unable to make any decision at all, or suffer a complete emotional collapse. In nuclear decision-making, impossible choices might have to be made, like killing thousands of people in another country to protect one's own. Distancing mechanisms ensure that a decision can be made while preserving the sanity of the decision-maker.

Moreover, from an international relations perspective, the possession of masculine qualities by nuclear decision-makers can be advantageous. The world would certainly be a more peaceful place if every nuclear decision-maker prioritized human costs, but there would always be an incentive for a country to detach itself from the human costs of nuclear strategy so as to gain a strategic advantage (a situation similar to nuclear proliferation). A country that portrays itself as unswayed by human costs can more easily extract concessions from countries that are swayed by them. One reason North Korea has so much bargaining power with the US is that the US fears the human costs of a nuclear attack far more than North Korea does. Masculine detachment from human costs simply makes it harder for a country to be taken advantage of and improves its diplomatic position.

Do you think the distancing mechanisms in “technostrategic” language, as well as the association of masculine qualities with nuclear decision-making, do more harm than good? In other words, if a country were able to unilaterally reform itself so that human costs became a far more prominent aspect of its nuclear decision-making process, would it be better or worse off in its international relations? — William

Economical with the Truth: Concealed Justifications for Trump’s Nuclear Posture Review

In February 2018, Donald Trump's Department of Defense released the Nuclear Posture Review (NPR), a comprehensive document that describes the role of nuclear weapons in U.S. security policy as seen by the incumbent administration. The previous NPR was released by Barack Obama back in 2010, and it is safe to say that President Trump's version envisions a rather different nuclear future for the United States from that of his predecessor.

The foremost aspect of the 2018 NPR is its push for more nuclear weapons, specifically low-yield ones. The given rationale for this policy is as follows: Russia has a greater number and variety of these low-yield weapons than the U.S., which supposedly creates a Russian advantage at lower levels of conflict. In other words, American weapons are too destructive to credibly deter an attack by smaller nuclear weapons. By this logic, it makes sense for the U.S. to add to its arsenal a variety of low-yield weapons whose potential use would be more credible.

However, the above logic rests on a weak assumption: the notion that acquiring low-yield nuclear weapons is necessary to counter a Russian threat. To make this assumption more convincing, the NPR includes and omits factual information as convenient in order to support its agenda. For example, it claims that Russia has been making “nuclear threats against [American] allies” (p. I) without citing any convincing evidence, but gives no weight to the fact that the U.S. keeps nuclear weapons in five European countries, in close proximity to Russia. Also emphasized is the 85 per cent reduction in U.S. nuclear weapon stockpiles since the Cold War, whereas a similar reduction on Russia's part is ignored. In fact, whilst the policies presented in the NPR are not inadequate per se, the bulk of their true justifications are shrouded in the expected noble rhetoric of “we are responding to threats from Russia/North Korea/China.” Quite how one can argue that North Korea's nuclear proliferation cannot be matched by the existing U.S. nuclear arsenal is inexplicable.

This brings me to my next point: the United States already has over 1,000 nuclear warheads with low-yield options. This fact is omitted from the NPR, a move that raises more questions than it answers. Further, the document argues that adding more low-yield weapons will raise the nuclear threshold, a claim that is understandably controversial, though not necessarily misguided.

Lastly, as we learnt earlier in the course, nuclear weapons of lower yield use fissile material more efficiently than higher-yield weapons. So, whilst it is easy to view low-yield proliferation as less threatening than high-yield advancement, the opposite may well be true. It is also worth remembering that “low-yield” means under 20 kilotons, a definition that would classify Little Boy, the bomb dropped on Hiroshima, as a low-yield weapon. No fewer than 70,000 people died in Hiroshima.

I will not scrutinize the NPR’s policies themselves, but I do have time to blast their justifications. My key question is this: to where should we trace the true motivations for the new, proliferating direction of the 2018 NPR? The following is my best guess (please read with some levity).

In his first year as president, Donald Trump made clear his determination to dramatically increase the U.S. nuclear arsenal. This was unsurprising given his usual bad-boy demeanor. However, when the time came to discuss the plan with Secretary of Defense Jim Mattis, Secretary of State Rex Tillerson, and Joint Chiefs of Staff Chairman Joseph Dunford, Trump's radical ideas fell on deaf ears. Nonetheless, Mattis, Tillerson, and Dunford could not entirely ignore the president's vision, so they agreed on a compromise which eventually materialized in the NPR. The emphasis fell on low-yield weapons because they appear less threatening, but Trump would have been advised of their high efficiency and tactical capabilities. As for policy motivations, the true reasons, maintaining world hegemony and political status, were shrouded by exaggerated but believable threats from Russia, China, and North Korea. In the end, nobody could surely have been surprised.

I would love to hear your versions of the motivations behind Trump’s NPR policies. If you have the time, please do leave a comment. — Sergei

Disarmament Verification: A True Partnership?

In the IPNDV's (International Partnership for Nuclear Disarmament Verification) Deliverable One: A Framework Document with Terms and Definitions, Principles, and Good Practices, Working Group 1 addresses the rationale behind verification principles and identifies proper usages of these principles in real life. The examination of these principles is based on “existing verification mechanisms,” “work already done by previous disarmament verification initiatives,” and “existing research and publications” (2). While providing a good overview of the different principles involved in verification, as a first deliverable the output raises two problems: first, it provides little direction for future advances, and second, it muddles the line between national and multilateral interests and inspection.

Throughout the entire deliverable, the one part in which concrete action plans are laid out is the summary of Principle 3 (Non-Proliferation) and the limitation of transferring proliferation-sensitive knowledge. In this section, the authors argue that the IPNDV should “identify options to prevent the transfer of [sensitive] data for monitoring technologies,” and they present possible methods for doing so (5). This productive analysis of the state of affairs, coupled with a recommendation, identifies problems with the current system and attempts to solve them, which advances the work of the IPNDV. Unfortunately, the other principles only offer somewhat obvious statements that re-confirm the reasons certain rules are in effect, without bringing additional value to the IPNDV. Although the paper does state that its purpose is mainly descriptive, I cannot help but think that such a collection of simple normative statements does little to help the current IPNDV team or to propel their future work. I would love to hear whether you guys agree/disagree!

More importantly, my biggest qualm about this deliverable is the lack of clarity it provides in distinguishing national from multilateral affairs and interests. While Principle 4 states that the level of interference of verification is moderated and capped by the “international legal system, [which is] based on State sovereignty” (6), Principle 7 later states that multilateral verification – such as the kind described in Principle 4 – puts multilateral entities “above” the parties (10). Although this is later followed by the statement that “multilateral agreements are never entirely multilateral,” as there is an element of national verification (the use of NTMs), the deliverable goes back and forth on whether the fundamental basis of multilateral verification rests upon State sovereignty or upon multilateralism itself. Taken together with my earlier point, it seems to me that the deliverable is a not-so-effective document for propelling the IPNDV forward. — Christine

Crowdsourcing Nuclear Arms Verification

“Crowdsourcing” and “nuclear arms verification” are two terms rarely seen together in the same sentence. One invokes the image of ambitious youth in Silicon Valley, while the other seems applicable only to extremely limited realms of D.C., Moscow, and Vienna. Yet, as the readings for this week suggest, incorporating this idea into future nuclear arms control treaties may not be too far-fetched.

The JASON report explores exactly this idea of taking advantage of crowd-sourced data management to effectively verify compliance and detect violations of arms control regimes. The report notes two possible avenues for such a scheme: crowd-sourced data gathering and crowd-sourced data processing. The former refers to schemes in which individuals are encouraged or incentivized to gather and share relevant information, such as measurements, images, etc. In the latter, individuals make use of public data (e.g., satellite imagery) to identify trends or patterns that could aid in detecting treaty violations. In both cases, recent technological advancements have magnified the potential and effectiveness of these schemes. The JASON report notes that the pervasiveness of smartphones among ordinary citizens has created a reservoir of photographs at an unprecedented scale. Similarly, projects and firms such as PLANET illustrate how technological advancements, in this case small-scale satellites, have greatly enriched the information available in public sources, which can then be exploited via crowd-sourced schemes. The authors note that the key to constructing an effective crowd scheme is to provide participants with the means (e.g., public data) and a clearly defined outcome metric. This could thus be applied to nuclear arms control treaties, so long as there is a limited scope of irregularities to look for.
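As a concrete, entirely hypothetical illustration of the data-processing avenue, volunteers might label public satellite tiles and an aggregator might flag only those tiles where enough independent labelers agree. The function, thresholds, and labels below are invented; the JASON report does not prescribe a specific scheme:

    from collections import Counter

    def flag_tiles(labels_by_tile, min_votes=10, agreement=0.8):
        # labels_by_tile maps a tile id to the list of labels
        # ("suspicious" or "normal") submitted by volunteers.
        flagged = []
        for tile_id, labels in labels_by_tile.items():
            counts = Counter(labels)
            total = sum(counts.values())
            if total >= min_votes and counts["suspicious"] / total >= agreement:
                flagged.append(tile_id)
        return flagged  # candidates for expert review, not verdicts

The thresholds play the role of the report's “clearly defined outcome metric”: the crowd narrows the haystack, while treaty experts still make the actual compliance judgment.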

Needless to say, this idea of “crowd-sourcing” nuclear arms verification comes with significant risks and drawbacks. The JASON report makes the point that, unlike earlier crowd-sourced information-gathering schemes aimed at terrorist activities, crowd-sourced nuclear arms verification may entail citizens effectively participating in the verification of their own country. These factors necessitate a strong ethical and legal framework that would protect the individuals participating in the crowd-sourcing and prevent harmful information disclosure. Furthermore, touching upon the Woolf reading, past frameworks for arms control verification devote significant attention to strictly limiting information sharing to what is necessary to detect violations of the treaty. Should this framework be opened up to the public for crowd-sourcing, a monumental challenge would be to keep the public's activities within the bounds relevant to the treaty and to prevent the public from uncovering classified information.

Given the promises and consequences of crowd-sourced arms verification, the main question I would like to pose in this blog post is the following: would the United States (or any other country) be compelled to adopt this framework for future treaties to prevent other nations from gaining a comparative advantage? So far, the START treaty and the New START treaty between the United States and Russia have relied on bilateral verification via NTMs. Following the proverb “if it's not broken, don't fix it,” it makes sense for future regimes to continue to adopt this bilateral verification scheme unless circumstances change. On the other hand, in offensive and defensive capabilities, technological advancements continually compel nations to update their arsenals to prevent their adversaries from gaining an edge in the competition. This leads to the question: could there be a scenario in which the United States is placed at a disadvantage because it has not adopted this new initiative of crowd-sourced verification/surveillance? — Kouta

A New START: Changes in US-Russian Arms Control

In her report on monitoring and verification in arms control for the Congressional Research Service, Amy Woolf analyzes the strategies behind monitoring and verification before doing a deep dive on START (the Strategic Arms Reduction Treaty), comparing the new treaty that went into effect in 2011 with its predecessor from 1991.

One of the major aspects of the relationship between the two countries is the exchange of information so that each party can monitor the other. Both the United States and Russia use national technical means of verification (NTMs), including satellites, radar, and electronic surveillance, to gain information about the other's capabilities, but the treaty provides for data exchanges that yield even more information and create an atmosphere of transparency and understanding. In the 1991 START, one of the main aspects of this data exchange was the sharing of broadcast transmissions from missile tests. Analysts agreed that studying this data gave them a better understanding of both sides' missiles, providing information about their weight, how long their fuel burned, and the number of times reentry vehicles that would have contained nuclear warheads were released. In the negotiations for the 2011 treaty, Russia pushed back on these provisions, arguing that they created an unequal obligation because Russia was developing new missiles while the US was only occasionally testing older ones. Eventually, an agreement was reached to exchange information on an equal number of launches, but no more than five, in a calendar year. Under the original treaty, both sides were uncertain about the number and capabilities of the other's arsenal, but after 15 years of monitoring, each now has a better understanding of both, making this provision somewhat less important.

In terms of verification, the 1991 treaty came at a time when many Americans believed the Soviet Union could have incentives to violate the treaty to gain an advantage over the United States. As a result of this uncertainty, the original treaty contained many provisions designed to detect efforts to hide or deploy extra missiles, particularly given the uncertainty about how many missiles each side actually had. These measures included on-site inspections to verify the number of mobile ICBMs and warheads assigned to missiles, as well as random, short-notice inspections to deter hidden movements. While these were not intended to provide an exact count of the other party's mobile ICBMs and warheads, the goal was to limit its breakout potential. The 1991 START contained provisions for 12 different types of on-site inspections, each with different abilities covering different goals. One of the major changes Woolf discusses is the consolidation in the 2011 treaty into two types of on-site inspections that achieve the same goals as the previous types but trim them down into simpler inspections that can cover many areas. The overall goal of the changes was to reduce the complexity and cost of inspections for both parties, with the understanding that the need for stringent verification and monitoring was not as high as it had been before. Are the changes made in New START smart considering how US-Russian relations have changed since the 1991 treaty, or should the US have pushed harder to keep the treaty stricter on things like missile test data? Is the move to a less complex and costly verification framework smart? I think that, given the familiarity the countries now have, the easing of some of the treaty's provisions shows that the US is willing to cooperate and make concessions to keep the trust of other countries.

I thought this report brought up interesting points about how the increased familiarity between the United States and Russia led to the changes in New START, and some interesting questions about this cooperative effort. The main argument Woolf mentions against the treaty is that the US could be giving itself a strategic disadvantage by giving so much information to Russia when Russia's nuclear arsenal is already aging and no longer poses the immediate threat it used to, nor is there as much concern about Russian incentives to violate the treaty. Some argue that through the use of NTMs the US can already monitor Russian weapons systems, so the treaty is not necessary. Woolf counters by highlighting the value of international cooperation and the continued building of trust between the US and Russia, and also that the treaty shows the United States' commitment to its obligations under the NPT. By cooperating and sharing its information, the US can convince more nations to join it in strengthening the NPT and isolating rogue nations like Iran and North Korea. These benefits would be difficult to measure, and I'm not sure I agree that this is a benefit of continuing with START. Do you think the benefits of an arms control treaty outweigh the potential loss of strategic advantage that could come with sharing our own information with Russia? — Nikhil

The Offense-Defense Balance of Cyber

Cyber has typically been seen as having a very lopsided offense-defense balance, with offense coming out on top. This is partly a function of probability: defense must account for all possible avenues of attack, while offense has to find only a single route to vulnerability. Rebecca Slayton addresses the issue of the offense-defense balance in cyber by conceptualizing it in terms of utility—a shared feature of different modes of offense-defense balancing.
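That asymmetry can be made precise with a toy probability model (the independence assumption is, of course, a simplification):

    # If each of n possible avenues of attack is correctly secured with
    # probability p, and one unsecured avenue suffices for the attacker,
    # the defense holds everywhere with probability p**n.
    p = 0.99  # assume each avenue is 99% likely to be properly secured
    for n in (10, 100, 1000):
        print(n, p ** n)  # ~0.904, ~0.366, ~0.00004

Even near-perfect per-avenue defense erodes quickly as the attack surface grows, which is the intuition behind offense dominance. Slayton's contribution is to show what this picture leaves out: the offense's own costs.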

Several key insights drive her analysis. The cost of cyber operations depends not on the features of the technology alone, but also on the skills and competence of the actors and organizations that create, use, and modify information technology. For example, the ‘ease of use’ or ‘versatility’ of information technology seems to favor offense, but that property arises from interactions between technology and skilled actors. The operation itself might be quick, but the construction and deployment of cyber weapons is a slow, laborious process.

Overall, this implies that the utility of cyber operations differs in some serious ways from that of conventional weapons. For example, the tight coupling of individual skills and information technology makes the economics of producing cyberweapons different from that of physical weapons. The skills of the programmer have a huge effect on the efficacy and construction of the weapon. Software is continuously modified. And code takes the shape of a ‘use and lose’ weapon — once identified, it becomes obsolete. Thus, continued investment and skill are needed to develop the weapons, yet the cost of the programmer is not accounted for in offense-defense balance analysis. The competence of managers also matters: defense failures often have to do with personnel failures or out-of-date software, and the success of offense is often due to poorly managed defense. Attacks also need expensive infrastructure to be put into place — the actual attack itself might be cheap, but the research and the implementation of infrastructure are not. The complexity of the defense target, which increases defense costs, also increases offense costs, since the attacker must understand the complex system. Producing physical effects through cyber is hard to accomplish as well: attacking industrial control systems at a strategic point in time requires persistent communication, something hard to maintain in such a system when deploying the cyber weapon.

A look at Stuxnet shows the high cost of attacking — much higher than the cost of defending — though the goal was considered significant enough not to quibble over the cost. The actual effect was negligible, delaying Iran's nuclear program by three months rather than years, whereas the cost to the US was relatively high.

I think this article raises some very interesting points about the perceived cost of offense. We often conceive of cyber as ‘cheap’ warfare because of the ease with which code is copied, but the constant updating, and the initial conception, carry huge talent costs. I wouldn't necessarily discount the high offense value of cyber, though. Consider the recent situation with cyberwarfare and the 2016 US election. The strategy taken was interesting in that it did not directly target physical domains (like ICS); instead, the focus was on disinformation and social media. Slayton herself acknowledges that the value of a defense target is variable in relation to the social network it is embedded in — but I think even she would pause at how to calculate the cost when it is the social network itself that is the direct target. To be sure, the disinformation cost millions to implement. Yet the defense cost is hard to ascertain, and depending on your point of view it could range from astronomical to relatively benign.

I think this also raises some questions about what constitutes a cyber offense. I have been implicitly assuming that using information technology to disseminate false information counts as an attack; the article itself, however, focused purely on software integrity. Do you think disinformation constitutes a cyber attack? If so, what are other novel ways that cyber can impact society writ large, beyond disrupting software systems? — Kabbas

Norms of Cyber Behavior

In his paper “Deterrence and Dissuasion in Cyberspace,” Joseph Nye covers the challenges of deterrence in cyber warfare. Nye defines deterrence as anything that prevents an action by convincing the actors that its costs outweigh its benefits (Nye 53). Nye argues this broad definition better captures the breadth of options available to states to prevent cyber attack, and he discusses four of these options, including “threat of punishment; denial by defense; entanglement and normative taboos” (Nye 46), in his paper. From these four options, Nye argues there is no “one-size-fits-all” (Nye 71) deterrence strategy for cyber attacks, and that traditional understandings of deterrence theory must adapt to respond to emerging technological threats.

The bulk of Nye's paper is spent explaining these four possible types of cyber warfare deterrence. The first two — “threat of punishment” and “denial by defense” — fall into traditional understandings of deterrence (Nye 55). Punishment for a cyber attack could entail a response in kind, economic sanctions, or physical force (55). Denial by defense could entail heightened monitoring of threats and stronger cybersecurity, intended to convince attackers an attack would be too costly to execute (57). Both strategies are limited by the fact that the originators of cyber attacks are often anonymous (50-51) and “persistent” (57), making it difficult to respond to all potential cyber attacks effectively.

The second two deterrence strategies, “entanglement” and “normative taboos” (46), fall into a broader model of deterrence. Entanglement of modern states' interests reduces the likelihood of attack because an attack could be detrimental to the attacker's state as well (58); it is a particularly strong deterrent between large, economically interdependent states (58). “Normative taboos” (46) reduce the likelihood of attack because an attack damages the prestige and “soft power” of the attacking state (60). Norms against attacks on civilian infrastructure may be particularly strong deterrents (61). Taken together, these four strategies could be used to prevent cyber attacks.

Of all the strategies, I was most interested in the “normative taboo” method of deterrence. Last week, we had an interesting discussion about normative (“humane”/“inhumane”) constraints on bioweapons. To me, creating and enforcing norms for cyberwarfare is even more challenging, because the real-life consequences of virtual actions often feel more remote than those of real-life actions. People are often more willing to pirate a movie than to steal a physical copy; kids are often more willing to bully their peers online than in person. And unlike the case of nuclear bombs or deadly pandemics, we haven't yet seen large-scale destruction from cyber attacks. I am interested to learn more about establishing cyber warfare norms from the other readings — and from all of your exciting replies! — Grace

Nukes and Germs: Comparing Nuclear Weapons and Biological Pathogens

The second half of the readings for this week focuses on new developments in the field of biological warfare. The Letter to the President outlines the emerging threats: cheaper, more effective technologies, a better understanding of how to use them, and the US's inadequate defensive measures. Its authors recommend a network of early warning systems, bolstered domestic public health capacities (especially in identifying and producing responses to pathogens), monitoring of outbreaks in other countries, and cooperation with and aid to countries that lack sophisticated countermeasures to either biological attacks or natural disease outbreaks.

A few things struck me about these recommendations when compared to what we've studied so far with nuclear weapons. Our policy aims and recommendations for dealing with nuclear threats are mostly preemptive: prevent countries from acquiring weapons, and, for those that have them, reduce the chances they'll use them. The recommendations concerning biological weapons, though, are primarily reactive. Aside from some mentions of establishing best practices in research, the countermeasures above focus on preparedness and response.

The Nouri and Chyba reading was unique in that it did recommend trying to preempt proliferation of biological agents through software design. I'm skeptical about that approach, though. Aside from the fact that the paper, dated 2009, doesn't address CRISPR developments, putting absolute faith in software updates seems, from a computer science perspective, sketchy at best. Can we really meaningfully prevent deliberate development of dangerous biological weapons that way? It seems the biggest barrier is simply the expertise it would take to develop them successfully, which the Ledford readings implied was a rapidly shrinking roadblock.
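For readers unfamiliar with the proposal, the software-design idea amounts to screening synthesis orders against known pathogen sequences before a machine will produce them. Here is a toy sketch of that kind of check; the watchlist entry and k-mer length are invented, and real screening relies on curated databases and alignment tools rather than exact string matching:

    # A toy sequence-of-concern screen, in the spirit of (not taken
    # from) Nouri and Chyba's proposal.
    WATCHLIST = {"ATGCGTACGTTAGC"}  # stand-in for a pathogen signature
    K = 14  # k-mer length, matching the watchlist entry

    def screen_order(sequence, watchlist=WATCHLIST, k=K):
        # Flag the order if any k-mer in it matches a sequence of concern.
        kmers = {sequence[i:i + k] for i in range(len(sequence) - k + 1)}
        return bool(kmers & watchlist)  # True -> hold for human review

My skepticism stands: a check like this lives in software that a determined actor can patch out, evade by splitting orders across vendors, or sidestep entirely with unscreened hardware, so it is at best one layer rather than a barrier.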

I think the ultimate driver behind this difference is that, unlike with nuclear weapons, there's no clear bottleneck in the production of biological weapons. Moreover, it seems that legitimate improvements in research technology will necessarily make biological weapons easier to make. In fact, our readings about the recent CRISPR developments seem more concerned about accidents than deliberate attacks, and some scientists, in the Bohannon reading for example, implied that it would be better to figure out what's possible than to risk being caught off guard. Nuclear threats seem totally different from biological ones, then. With nuclear weapons, we face a constant and fairly simple danger: either being blown up or starving in a nuclear winter. There's also no precedent for their use in combat after WWII. Biological weapons, on the other hand, have been used before, as recently as 2001, and they present a mostly unknown and variable threat. Are we more afraid of the unknown they present and the fear they create than of their destructive power? It's unclear to me why else the early prohibitions against them in the 1920s came about. And although most governments abandoned their biological weapons programs, it seems they did so because the weapons weren't as destructive or practical as they'd hoped. Can we do anything more than prepare ourselves for a biological attack or accident, and does one seem inevitable given the decentralization of potent new technologies? — Stew

Ethical Distinctions in Wartime: The Case of Biological Weapons

In the introduction and first chapter of her book Biological Weapons: From the Invention of State-Sponsored Programs to Contemporary Bioterrorism, Jeanne Guillemin traces the history of biological weapons programs from their inception with French research in the 1920s through to the 21st century. To frame key developments in the realm of biological warfare, Guillemin splits this history into three phases: an “offensive phase” when both production and possession of biological weapons were legitimate and widely practiced (roughly 1920-1972), a later period of total prohibition grounded in international law coming out of the Biological Weapons Convention (1972-early 1990s), and a third, defensive stage following the end of the Cold War, characterized by “tension between national and international security objectives.”

In clarifying the significant differences between chemical and biological weapons, Guillemin calls upon the Rosebury-Kabat report of 1942, noting six unique features of biological weapons, among which are their delayed effects, their contagiousness, and their dependence on a mammal host for virulence. Despite these differences, chemical and biological weapons followed a similar trajectory in public perception. Early in their developmental history, both were seen by many advocates as more humane than conventional arms, since they “avoided battlefield blood and gore,” thereby constituting a “higher form of killing.” Public opinion rapidly shifted, however, after horror stories covering the use of chemical weapons in World War I made their way home and influenced the 1925 Geneva Protocol, which banned the use (but not the production or possession) of chemical or biological weapons.

This progression of public opinion toward characterizing some weapons as inhumane and others as totally legitimate raised several questions for me while reading Guillemin. The distinction can at times appear quite arbitrary, particularly in the case of U.S. policy during World War II. FDR himself, according to Guillemin, felt strongly that chemical and biological weapons were “uncivilized and should never be used,” an interesting sentiment coming from the man who would ordain the creation of the most destructive weapon the world had ever seen. I wonder how we are meant to set internally consistent distinctions between “humane” and “inhumane” weapons of war. Is it a matter of scale? Of suffering? Perhaps of physical detachment on the part of the aggressor (as in the current debate over drone use)? Should the 20th-century doctrine of “total war,” which “blurred the lines between enemy soldiers and civilians,” persist into the 21st, or do the complexities of modern warfare merit a clear moral distinction between the two? What truly qualifies as “mass destruction,” and how does that label at once delegitimize some avenues of warfare while solidifying the validity of others? — Wesley