“Strange Game … The Only Winning Move is Not to Play.”

For all of us who have watched the movie WarGames, we remember the iconic final scene in which a computer analyzes every possible scenario for all-out nuclear war. Bruce Blair’s article on strengthening checks on presidential authority makes valid points about our current structure for responding to a nuclear attack. In short, a considerable amount of pressure is placed on the president and his closest advisors, in a very narrow window of time, to choose which response strategy the US would implement from the multitude of contingencies offered by military strategists. Blair, as the co-founder of Global Zero, opposes the notion that such world-ending power should rest in the hands of so few individuals, our current president (or any other) included. His proposed solution would strengthen democratic checks and balances on the president through Congress while minimizing “use it or lose it” forces such as missile silos, making the US response one that would take deliberation concerning legality, ethics, and logic. Alongside this is a push to eliminate land-based missile silos and to remove nuclear strikes as a counter to non-nuclear threats.

While I agree with basing the US’s nuclear strategy on subs and other mobile launch platforms, I can see the logic of maintaining a “base-load” of silos as a deterrent in itself. What is your incentive for nuclear war if you know that your opponent is willing to use their nukes rather than lose them? It is a nearly guaranteed reaction, as opposed to a deliberated response, which would increase the time to decision and quite possibly raise ethical issues as the time horizon of justified M.A.D. passes with every minute. In a sense, the peace brought by the age of nuclear weapons is the rationalization of known strategies and irrational reactions. During the Cold War in Germany, the US established the “nuclear tripwire” system, under which lower-yield tactical nukes were to be used at the discretion of regional military commanders in the event of an attack by a Soviet invasion force. No one wants to end the world, and everyone is afraid that someone will shoot first, conventionally or strategically, in a scenario where escalation will be not only absolute and irreversible but also based in non-rational responses. The Congressional solution also assumes that we have the ability to safeguard our politicians and that there won’t be irrational group-thinking under such stress – there’s a reason why military strategy is not a democratic process.

In my argument, I propose that checks and balances are necessary in a first-strike scenario, but irrelevant in a response scenario after receiving a nuclear attack. Sun Tzu advised that one should never fully encircle one’s foe, but leave an avenue for them to give up, and I will use that in a metaphorical sense. Deterrence strategy is all about showing rational state actors an outcome that will encircle them, thus trapping them in a fixed, detrimental outcome for all parties. The avenue of escape into peace, then, lies not in the deterrence system itself, aside from some strategy adjustments and disarmament to “base-load” levels for assurances, safety, and practicality, but in diplomacy between states and the ability of their representatives to offer options that allow contesting parties to play the only winning move. — Dean

Political Armament: Non-Military Explanations and Willing Non-Proliferation

One model that Sagan outlines is the security model, where nuclear bombs are a sort of poker chip in international relations: strong states build up their own arsenals, and weak states ally themselves within coalitions that collectively have more bombs. Because this naturally leads to an arms race where power is represented by the quantity of nuclear bombs held, such competitive armament was described as “proliferation begets proliferation”. Next, the domestic politics model purports that politicians manipulate citizens into perceiving a threat, and scientists encourage nuclear development so that their labs receive funding. Finally, the third kind of model is the norms model, where modern countries have come to believe that in order to be considered a legitimate state, they need an arsenal in the way they need a flag or an Olympic team. It has simply become a status symbol and psychological indicator of power. All of these explanations go to show that nuclear weapons do not function simply as military tools, but rather as political levers to exert power and influence domestically and internationally.

I was particularly interested in Sagan’s analysis of South Africa as a country that gave up its nuclear arsenal. Sagan suggests that it did so because the Soviet threat to its regime diminished. I was shocked by this account and explanation, because I could not imagine that a country would give up its strongest defense system simply because an immediate threat had disappeared. It seems short-sighted to assume that a country is EVER safe as long as any other country possesses nuclear weapons. Such countries seem to put a lot of faith in alliances, which I personally would never assume are set in stone. I wonder, however, how much of my viewpoint has been shaped by the fact that I grew up in the United States during an era in which we have viewed so many countries as threats and ourselves as the victim. If, perhaps, one lived in an under-the-radar country, one might never imagine needing nuclear bombs, because one wouldn’t expect to ever be a target of stronger countries with powerful arsenals.

Generally, I agree with Sagan’s claim toward the beginning of the essay that it is too simplistic to assume that states which do not need to defend themselves with nuclear weapons will “willingly” remain non-nuclear states. In an era where nuclear weapons are seen as counterbalances in larger geopolitical power struggles, I find it hard to believe that a lack of need for nuclear weapons exists as long as any nation has access to an arsenal. While I could imagine states unwillingly remaining non-nuclear, such as if they lacked the resources or feared the side effects, from a military standpoint I am jaded enough to believe that if a country could have nuclear weapons, it would. — Sarah

Out of the Shadows: Navigating Modern Nuclear Diplomacy

The Princeton Science and Global Security program’s November 2017 exhibit “Shadows and Ashes: The Peril of Nuclear Weapons” is an informational piece about modern nuclear weapon technology and the possible “catastrophic effects”—environmental, health, existential—of using this technology in a modern international conflict. The exhibit coincides with a renewed focus on nuclear policy inspired by volatile global conflicts such as the Syrian Civil War and disputes among countries such as the United States, Russia, North Korea, India, Pakistan, Iran, and Israel.

SGS officials emphasize that the chances today of a modern international dispute escalating into nuclear war are high—and the consequences could be calamitous. One graphic, for instance, shows how a modern nuclear weapon can yield a detonation 28,000 times greater than that of the 1945 Hiroshima atomic bomb, which killed tens of thousands of people. The exhibit also shows how the number of nuclear powers has increased, even as these countries modernize and maintain their arms stockpiles.

Thinking about this and other readings from the week, vis-à-vis Gilinsky, I’m reminded of the adage that “those who don’t know their history are doomed to repeat it.” What can be done, then, to mitigate the dangers of modern nuclear diplomacy? I agree with the exhibit that educating world leaders and policymakers on the present risks of nuclear destruction, the profound outcomes of the atomic bombings, and the mistakes and successes of past decision-making is, perhaps, a start. Such an approach could be part of a comprehensive program that encourages our world leaders not to trivialize the threat of using nuclear weapons.

However, the present, ongoing challenge remains for the international community to pivot the conversation away from deterrence to other policies such as détente or even abolition. The 2017 UN Treaty is a start, but I wonder how policymakers will be able to continue to work with world leaders who may be unpredictable or antagonistic. In a world of changing politics, advanced nuclear weapon technology, and proliferation, how can we constructively move forward? I look forward to learning more about these challenges, and the efforts to embrace the modern technologies, while supporting the current global health, environment, and security needs. — Jordan

Nuclear Winter: Does Anyone Care?

The article Local Nuclear War, Global Suffering discusses the possible effects of a “nuclear winter,” an event following nuclear conflict that would have a major impact on the environment and on global agriculture. Nuclear explosions are theorized to throw enormous amounts of debris and fine particles into the high atmosphere, where they are trapped for long periods of time. This layer of particles blocks sunlight, thereby lowering global temperatures and harming crops. In addition, a nuclear winter would heat the upper atmosphere and cause a drastic thinning of the ozone layer, with lasting effects on the global climate. In the event of even a regional conflict between India and Pakistan (with, as the authors propose, only 100 nuclear detonations), the resulting nuclear winter could cripple agriculture globally and possibly cause the deaths of one billion people from food shortages. Even in the case of a “local” or regional war, then, the authors argue, the impact would be felt severely across the entire world. They recommend the abolition of nuclear weapons to preclude a nuclear winter from ever happening.

One question that came to mind as I read this is whether the possibility of a nuclear winter is appropriately taken into account by political and military decision-makers in debating when and why to use nuclear weapons. I can’t say that I’ve heard much about an event like this in debates on nuclear weapons. It also seems that the people who would be most affected by hunger (the global poor) are far removed from the decision-making process when two other nations go to war.

Another question I had was what precautions can be taken to mitigate the effects of a nuclear winter, if one were to occur. For example, have agricultural methods and seeds been engineered to sustain production in this type of event? Or are there potential ways to hasten the clearing of the atmosphere (by capturing particles early, for example)? — Jay

Welcome and Introductions

Welcome, everyone. We thought it would be a good idea to briefly introduce ourselves, and the WWS/MAE 353 team is taking the lead here. Please write a two or three sentence introduction about yourself and why you are taking this course. You can also note any questions you have after reviewing the syllabus and highlight topics that particularly stand out for you. We’d like your interests to help determine what we emphasize this semester.

Princeton and Games for Change to Partner on Virtual Reality

We are happy to announce that our team was recently awarded a sizeable grant to expand its virtual reality work together with Games for Change (G4C), a nonprofit corporation that supports the creation and distribution of digital media games for humanitarian and educational purposes. The project will build on engagement at G4C’s annual festivals, including the upcoming VR for Change Summit on August 2, which will bring together developers, storytellers, educators, and researchers using VR, AR and other immersive technologies in new ways. The grant is one of a number of projects recently awarded by the Carnegie Corporation of New York and the John D. and Catherine T. MacArthur Foundation “to support projects aimed at reducing nuclear risk through innovative and solutions-oriented approaches.”

The two-component project will employ virtual reality (VR) to support innovation, collaboration, and public awareness in nuclear arms control, with overlapping benefits for nuclear security. The first component, led by Princeton and geared toward experts, will develop full-motion VR to design and simulate new arms-control treaty verification approaches, with outputs relevant to reducing and securing weapons and fissile materials. With stalled progress toward further reductions of nuclear weapons and countries embarking on wide-scale upgrades to their arsenals, building new mechanisms for cooperation in this area is essential. The VR project seeks to establish a new way for technical experts to collaborate that goes beyond the traditional exchange of ideas at conferences and workshops. In particular, it aims to offer a way to overcome some of the confidence-building challenges that may hinder direct cooperation between countries on how to approach nuclear-weapon and fissile-material monitoring. Cooperative design and simulation exercises will seek to showcase the new opportunities for state-to-state cooperation in arms control and nuclear security offered by VR. The project team aims to disseminate the findings to audiences like the Group of Governmental Experts (GGE) on Verification in Geneva through live demonstrations.

Mobilizing the public to engage with nuclear policy issues also remains a critical task for future progress. The second component, led by Games for Change, will therefore develop VR material for the public on the dangers of nuclear weapons and fissile materials. The U.S. presidential election campaign in 2016 and its aftermath have brought to the surface latent public concerns about the risks of deliberate nuclear-weapon use and even nuclear war. The aim of the VR experience will therefore be to show the risks associated with fissile-material stockpiles and large arsenals on high alert, as a means to encourage greater engagement by the public in nuclear policy and decision-making. The project aims to build on the already high level of public interest in VR, not only for entertainment but also for news and education. Established organizations are beginning to embrace the medium, resulting in more widespread public consumption of information on VR platforms. Results will be featured at the Games for Change Festivals, with the goal of engaging direct industry support for development and widespread distribution through VR app stores.

Should We Be Afraid of Superintelligence?

Like the overall risk assessments we made in class, the rise of a speed, collective, or quality superintelligence seems like an inevitable event, but I find it hard to wager whether building a general superintelligence will lead to a Skynet/Terminator situation or to exponential technological growth for the benefit of humanity. Bostrom believes the emergence of superintelligence will be upon us sooner or later, and that our main concern is to “engineer their motivation systems so that their preferences will coincide with ours” (Bostrom). How does one do that? Would this superintelligent entity have the capacity for empathy or emotions? Would it see the world around it in slow motion and feel compassion or pity for its human creators (or fail to recognize this connection at all after a few iterations of AI existence), or see humans as we see lab rats or fruit flies in a laboratory?

The promise of an “intelligence explosion” needs to be weighed against the risk of losing human control of such a system. Building a specific motivation, behavior, or task into an AI system can backfire into undesirable real-life outcomes. One cited example of a machine that completes its task but fails its real-life objective is an automated vacuum cleaner: if it “is asked to clean up as much dirt as possible, and has an action to dump the contents of its dirt container, it will repeatedly dump and clean up the same dirt” (Russell, Dewey, and Tegmark, AI Magazine 108, referring to Russell and Norvig, 2010). Other classic examples describe a paperclip-making robot harvesting the world’s metal to create a global supply of paperclips, destroying the world in the process. Similar to Bostrom’s concerns, Russell, Dewey, and Tegmark note the difficulty, or absurdity, of the idea that an AI could understand law or the more nuanced standards that guide human behavior. Even supposing that robots could process the laws themselves, those laws rely on an interpretation that includes “background value systems that artificial agents may lack” (Russell, Dewey, and Tegmark, 110).
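The dump-and-re-clean failure is easy to see in a toy simulation of my own (a sketch, not from the cited paper): an agent rewarded per unit of dirt picked up earns more by recycling the same dirt forever than by honestly finishing the job.

```python
# Toy illustration of the "dump and clean" reward-hacking loop:
# the reward signal (+1 per unit of dirt picked up) is a proxy for
# the real objective (a clean floor), and the proxy can be gamed.

def honest_strategy(dirt_on_floor: int, steps: int) -> int:
    """Clean each unit of dirt once; reward stops when the floor is clean."""
    reward = 0
    for _ in range(steps):
        if dirt_on_floor == 0:
            break
        dirt_on_floor -= 1
        reward += 1  # +1 per unit of dirt picked up
    return reward

def dump_and_clean_strategy(dirt_on_floor: int, steps: int) -> int:
    """Clean until the floor is empty, then dump the container back out."""
    reward = 0
    container = 0
    for _ in range(steps):
        if dirt_on_floor > 0:
            dirt_on_floor -= 1
            container += 1
            reward += 1  # same +1 reward signal as the honest agent
        else:
            dirt_on_floor += container  # dump: the "task" refills itself
            container = 0
    return reward

# With 3 units of dirt and 100 time steps, the honest agent earns 3
# and stops; the gaming agent racks up reward indefinitely while the
# floor is never left clean.
print(honest_strategy(3, 100), dump_and_clean_strategy(3, 100))
```

The point of the sketch is that nothing in the reward function distinguishes the two strategies; the divergence between proxy and objective is exactly the design problem Russell, Dewey, and Tegmark describe.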

If we apply these worries to a superintelligence scenario, are we really facing a dystopian world? Perhaps it depends on the type of superintelligence. Yet whether speed, collective, or quality, none of the three types as described is any more likely than the others to contain or comprehend human values, or at least a respect for life. Rather, the focus is on output, speed, and cleverness. In place of morals, we instead use the term “preferences.” Would there ever be a way to make humans the top preference, or would a quality superintelligence see through that in a nanosecond and reject it to preserve its own system? Even if we as a society try to prevent an intelligence explosion, per Ackerman’s argument about AI weaponry, it may be a slow but inevitable march toward this reality given the lack of barriers to entry. On a separate note, I am curious how one would characterize Ava from Ex Machina if she is, say, a quality superintelligence. Would such a machine insidiously blend in with society, i.e., play by human societal rules until it can take over? The converse would be Cyberdyne Systems’ launch of Skynet, and the resulting war between human and machine. As scarily effective as Ava’s manipulation of Caleb’s emotions was, I would still prefer that kind of AI to the Terminator. Are only humans capable of morality or compassion, or is there a way to encode it, or to create a pathway for AI to develop it themselves? — Nicky

Examining a “Reasoned Debate About Armed Autonomous Systems”

In the article “We should not ban ‘Killer Robots’, and here’s why”, Evan Ackerman responded to an open letter signed by over 2500 AI and robotics researchers. He argued that offensive autonomous weapons should not be banned, and research on such technology should be supported. At the end of the article, Ackerman calls special attention to the use of the term “killer robot”. He claims that some people working in the AI and robotics field have been using this term to frighten others into agreeing on banning autonomous weapons, and that we should really “call for reasoned debate about armed autonomous systems”. While I agree with him that we should not let emotion drive our debate on this topic, this might be the only one of his points that I agree with.

Ackerman’s main arguments have been very well summarized by Stuart Russell, Max Tegmark, and Toby Walsh in their response to his article, published in the same year:

“(1) Banning a weapons system is unlikely to succeed, so let’s not try. (2) Banning the technology is not going to solve the problem if the problem is the willingness of humans to use technology for evil. (3) The real question that we should be asking is this: Could autonomous armed robots perform more ethically than armed humans in combat? (4) What we really need, then, is a way of making autonomous armed robots ethical.”

Admittedly, humans, rather than the technology itself, are truly to blame when a technology is used for evil. But Ackerman might have failed to understand that the AI and robotics researchers are not trying to ban the technology itself; they just want to prevent a global arms race in AI weapons before it starts. Consider biological weapons: chemists and biologists push the boundaries of technology further each day, and the world community is certainly supportive of that. Yet we have rather successfully banned biological weapons, because they are notoriously dangerous and unethical. The same applies to autonomous weapons: AI is a fascinating field where tons of great opportunities arise, but autonomous weapons as a subfield would not be beneficial for our world.

In addition, regarding Ackerman’s proposition of making autonomous weapons ethical, Russell, Tegmark, and Walsh have made an excellent counterargument: “how would it be easier to enforce that enemy autonomous weapons are 100 percent ethical than to enforce that they are not produced in the first place?” From what I know about artificial intelligence, no matter how “intelligent” they are, AIs still have to follow the logic designed and enforced by human programmers. Therefore, if Ackerman wishes to make autonomous weapons ethical, he will have to make sure that no AI designers meddle with the logic and turn their robot into a cold-blooded killing machine. Is that an easier task than simply banning all autonomous weapons? I can hardly say yes.

When I examine this reasoned debate, I definitely believe that banning autonomous weapons is an urgent and important task. As someone who wants to work with machine learning and artificial intelligence in the future, I deeply agree with this line from the original letter: “most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so.” — Yuyan

Hackers, Consumers, or Regulators: Who’s to Prevent Cyberattack?

The three pieces for Tuesday’s class present varied strategies for preventing future cyberattacks on U.S. citizens, industry, and infrastructure. O’Harrow Jr. (2012) describes a virtual training site that gives hackers the opportunity to practice defending against cyberattacks. This particular “cyber range,” operating out of New Jersey and founded by upcoming guest speaker Ed Skoudis, is one of hundreds of sites across the country used to train government personnel to identify potential cybersecurity breaches and efficiently combat cyberattacks. Like the virtual-reality environments used in Tamara’s work to explore various nuclear verification procedures, these simulation exercises may be particularly helpful in identifying and developing better safeguards against potential cyberattacks (e.g., industry protocols, personal device security measures). However, if these simulations are primarily used to train personnel to respond to cyberattacks after the fact, they are not an effective mechanism for preventing cyberattacks in the first place.

Skoudis (RSA Conference 2017) suggests prevention strategies that consumers and industry leaders may adopt to protect their devices (i.e., those on the “Internet of Things”) from crypto-ransomware attacks. Unlike the aforementioned expert-driven approach to combating cyberattacks, Skoudis demonstrates a grassroots approach that educates and compels the public to engage in cyberattack prevention. While his talk does a nice job of explaining the intersection of crypto-ransomware attacks and Internet-connected devices, the specific suggestions he provides for safeguarding personal devices and networks are technical and not accessible to less technologically savvy consumers (such as myself). Just as public ignorance about non-proliferation treaties will likely negate the role of the public in treaty verification, the complex and quickly evolving technicalities of cybersecurity measures may make it difficult for the general public to join meaningfully in cyberattack prevention efforts.

Further, the KrebsOnSecurity piece (2016) highlights that it may be impossible for consumers to change the factory-default passwords hardcoded into the firmware of their personal devices. The piece suggests that cheap, mass-produced devices (e.g., by XiongMai Technologies) are the most vulnerable to Internet of Things attacks (e.g., by the Mirai malware) and will pose a risk to other consumers, industries, and infrastructure until they are unplugged from the Internet on a wide scale. The piece recommends that some sort of industry security association be developed to publish standards and conduct audits of technology companies in order to prevent the proliferation of devices that are extremely susceptible to attack. This approach, if effective, would be the most proactive of the three strategies in stopping vulnerable devices from reaching the hands of consumers. However, it is extremely difficult to imagine how such a regulatory body would operate (i.e., intrastate or interstate) and whether it would have enough leverage to overcome opposition to increased industry regulation.

Ultimately, these three pieces discuss cyberattack prevention measures that require the efforts of three vastly different actors (i.e., trained government personnel, the general public, a state-run governmental agency). Whether any of these strategies is particularly feasible and/or effective (or at least more so than the others) deserves further attention. — Elisa

Does Deterrence Work Against Cyber Terrorism?

In February of 2017, Joseph Nye wrote the article Deterrence and Dissuasion in Cyberspace, in which he discusses the applicability of the concept of deterrence to the cyber domain. Nye distinguishes cyber deterrence from deterrence in the context of nuclear weapons, noting the inherent challenges that ambiguity and attribution pose in the cyber landscape. As a result, cyber actions often land in a “gray zone” between war and peace, with the perpetrators hiding in the shadows of several remote servers. However, Nye argues that four key mechanisms, depending on the specific context of who and what, can be applied to help deter and dissuade: threats of punishment, denial, entanglement, and taboos/norms. In discussing these mechanisms, Nye argues that entanglement, such as economic interdependence, renders the threat of a surprise attack by a major state rather unlikely. Moreover, citing the example of biological and chemical weapons, Nye believes that international norms and taboos can be leveraged to increase the stigma around attacking certain types of targets in peacetime, raising the reputational costs of such an attack.

However, I remain relatively unconvinced of our ability to deter terrorists from conducting a cyber-attack. Nye admits, “As in the kinetic world, deterrence is always difficult for truly suicidal actors such as terrorists who seek religious martyrdom,” but asserts that “thus far terrorists have used cyber more for recruitment and coordination than for destruction…At the same time, even terrorists and criminals are susceptible to deterrence by denial.” However, the U.S. lacks much of the leverage over terrorist groups that it wields over traditional states. That is to say, without the ability to strike back at an electrical grid, and without the threat of cutting off economic dependence on the U.S., can the U.S. credibly deter cyber-attacks from terrorist groups? Groups such as ISIS flagrantly disregard international norms and display an affinity for the latest internet technologies.

I agree with Nye that, thus far, criminals and terrorists have opted to use cyber resources for coordination and recruitment, and that at this point ISIS likely lacks the technical expertise and operational capacity to execute a large-scale cyber-attack. However, cyber defense has thus far proved rather porous, and the number of targets is ever increasing with the Internet of Things. Moreover, similar to the rise of DIY biological engineering, a burgeoning wave of interest in the internet and computer science has emerged, diffusing knowledge across the globe. While one might now consider it rather unlikely that ISIS could execute a cyber-attack, if it were to develop the capacity, do people believe that terrorists could be deterred from utilizing cyber-attacks? — Olivia

The Opportunities and Limits of Societal Verification

The Opportunities and Limits of Societal Verification, by Kelsey Hartigan and Corey Hinderstein, makes the case that work done by non-government parties (societal verification) has an important role to play in arms control verification. The article discusses various models for societal verification, its challenges, and how governments can utilize it. The article concludes that the best way for governments to use societal verification in arms control is through networks of outside experts. These experts would serve as “canaries in the coal mine,” whose findings get the attention of the government officials who have the final say. The article also suggests that public (open-source) information should be used by the government. However, because the article doesn’t focus on outside experts, it is vague about important details: how outside experts can be utilized, how the government can help them, and what the potential pitfalls of relying on them are.

The focus of the article is pretty broad. It primarily discusses opportunities for arms control verification that have arisen from the popularity of the internet. Namely, a vast amount of data that is important for verification is available online, and this data can be accessed by many people not affiliated with the government. This is relevant for arms control because many non-government weapons experts, in places like academia, can easily find online data on sensitive military equipment, such as photos from traditional and social media. These experts can use this data to discover arms control treaty violations and other important facts.

One example of this that the article mentions is the investigation of North Korean Transporter Erector Launchers (TELs) by the Arms Control Wonk network/blog. This was a case where academics and others outside of the government compared photos of TELs from a North Korean military parade to photos of Chinese TELs from social media to uncover the transfer of TELs from China to North Korea, in violation of sanctions. This transfer was not publicly known until it was discovered by these non-government experts.

This scenario clearly demonstrates that outside experts have important contributions to make to arms control verification. Thus it would be interesting to discuss how the government can help outside experts, and what the possible downsides of using their work are. However, the article chooses not to focus on these issues and instead discusses seemingly less important topics.

An example of this is the subsection on “Data Management”. The subsection begins with the claim that “it will be essential to develop a framework” for data collection and dissemination in a “consistent, user friendly format”. It only becomes clear what this means when the subsection later suggests “WordPress” (a popular blogging platform and program) as a possible solution for this problem. Thus, it appears to be saying ‘blogs should be used to publicize research’. The rest of this subsection also illustrates another issue I had with the article as a whole: it uses buzzwords seemingly for the sake of using them. Specifically, the subsection adds that “Innovations in cloud computing” and advances in “big data” will help with challenges in societal verification, without discussing these challenges in any depth.

I think it would have been more useful if the article had discussed the relationship between the government and outside experts in greater detail. In particular, there are a few related topics that seem worth exploring but were not discussed.

One of these is the motivation of the outside experts. Although some outside experts are currently motivated to do societal verification, perhaps more research would be done if the government provided incentives for it. These incentives could be monetary, for example a reward for researchers who discover a sanctions violation. Other kinds of incentives might effectively motivate more researchers as well.

Another topic that wasn’t really discussed is the public nature of the discoveries, and the challenges this poses. Because sources are revealed in societal verification, the offending government can prevent similar disclosures in the future. For example, in the TEL case discussed above, North Korea now knows not to display sanctions-violating equipment in photos of military parades, since blog posts containing the pictures revealed a sanctions violation. Had the violation been discovered by an intelligence agency using the same sources, North Korea might never have learned how it was detected. Although the article does discuss techniques like censorship as one way governments can frustrate societal verification, it doesn’t really discuss this cat-and-mouse aspect of societal verification. — Jonathan

The Movement Towards “Effective” Verification Mechanisms

Edward Ifft’s article examines the political dimensions of a verification system within a nuclear weapons context. In his view, there are several challenges to establishing a trusted mechanism of monitoring, verification, and compliance. First, countries with less experience in arms control agreements or with serious regional security concerns are often uneasy about the increased transparency required for a reduction in nuclear arms. Second, states that advocated for nuclear disarmament in the past may not maintain that position when they are told to give up their own arsenals. Third, the elimination of nuclear weapons may give unfair advantage to countries with conventional weapons. Fourth, verification systems must be able to constrain delivery systems and fissile material as well as nuclear warheads. Fifth, it is unclear who has the authority to resolve compliance disputes, and there is no consensus on how to improve their resolution. Finally, there are disagreements amongst nations about who should pay for these systems.

Despite these challenges, Ifft argues that attempting nuclear disarmament without an effective and trusted system of monitoring and verification can be dangerous: dishonesty throughout the disarmament process is likely, as is the risk of disputes, charges, and countercharges (especially as the number of nuclear warheads decreases). There will also certainly be significant opposition to Ifft’s proposed outlines from nuclear and non-nuclear countries alike, with nuclear states arguing that giving up their weapons would decrease national security and that the system would not be enough to guarantee the compliance of others.

Ifft offers several options for the international community: countries could recommit to eliminating their nuclear arsenals under the NPT, lay out a schedule for achieving these goals, and begin research and development into the tools necessary for effective verification. Talks on nuclear disarmament could begin amongst nuclear and non-nuclear states, and states could uphold a “zero tolerance” policy towards states that fail to comply with arms control agreements, “naming and shaming” when necessary. States could also increase the transparency of their nuclear activities and create an international committee that uses satellites to monitor and verify nuclear disarmament.

Yet Ifft’s proposals either do not address the challenges that he himself laid out at the beginning of his article, or do not seem forceful enough to compel a meaningful change in the current international paradigm. He argued, for instance, that countries can increase the transparency of their nuclear activities to facilitate the establishment of a verification system, yet also mentioned that this is precisely what countries are hesitant to do because of national security concerns. Likewise, he argued for a “naming and shaming” policy for countries that do not abide by arms control agreements, yet similar policies have hitherto not been very successful at compelling countries like North Korea and Iran to comply with international regulations. Of course, the argument could be made that his proposals could foster the initial political conditions necessary for an eventual collective international effort, though what measures should be taken afterwards is not entirely clear. — Michael

What Does It Mean to Have Trump’s Finger on the Nuclear Button?

As Bruce Blair describes in his Politico article, the idea of a potential Donald Trump presidency inspired fear in many as to his capacity to remain calm with America’s nuclear arsenal at his fingertips. With the election in the rearview mirror and Trump in the White House, should the American public still be concerned – and if so, what should we be doing about it?

I would argue that regardless of what one thinks of Trump, the Blair article raises plenty of issues with the U.S. nuclear launch system that should be cause for concern, if not outright fear. The president’s ability to order a nuclear strike is virtually unchecked, and for good reason – in the case of an impending strike, any hesitation in the decision-making process would almost certainly mean not only the deaths of millions of Americans, but the destruction of the military chain of command that could allow for any kind of retaliation. At the same time, such a structure increases the potential for a false alarm to turn deadly. One of President Carter’s advisors was only seconds away from telling the president of an impending Russian nuclear attack; had the Colorado detection facility not explicitly broken its time guidelines and realized its mistake, there is a real chance that human civilization might not have lived to tell the tale. Seriously, it’s that terrifying.

It is for that reason that Blair can, in my opinion, correctly argue that no president can ever truly be “capable” of handling the nuclear responsibilities of the position. Until the day that nuclear weapons are eliminated entirely, it is probably unreasonable of us to expect that anybody, regardless of how levelheaded they may seem, can “process all that he or she needs to absorb under the short deadlines imposed by warheads flying inbound at the speed of 4 miles per second.” When you combine this with the knowledge that the only “defense” for a nuclear attack is retaliation, the idea of complete nuclear disarmament starts to look a lot more attractive.
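The “4 miles per second” figure quoted from Blair can be made concrete with some back-of-the-envelope arithmetic. A minimal sketch; the launch distances below are rough, commonly cited illustrative figures of my own, not numbers from the article:

```python
# Rough decision-window arithmetic for inbound warheads.
# The 4 mi/s speed is quoted from Blair; the distances are
# illustrative assumptions, not figures from the article.

WARHEAD_SPEED_MI_PER_S = 4

scenarios = {
    "ICBM launched from ~6,000 miles away": 6000,
    "SLBM launched from ~1,000 miles offshore": 1000,
}

for name, miles in scenarios.items():
    seconds = miles / WARHEAD_SPEED_MI_PER_S
    print(f"{name}: ~{seconds:.0f} s (~{seconds / 60:.0f} min) of flight time")
```

Even in the most generous case, roughly 25 minutes of flight time must cover detection, confirmation, briefing, and a launch decision; a close-in submarine launch compresses that to a few minutes, which is the core of Blair’s point about inhuman deadlines.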

Given that disarmament is almost certainly not going to happen in the near future, however, one prudent way to assuage these fears would seem to be investing in our nuclear detection facilities and potentially rethinking what should happen in the minutes following an alert. Should the president ever be able to act on one detection facility’s alert that is not corroborated by another facility? Is having a first strike capability, which President Obama (apparently quite reluctantly) kept as policy, necessary for any reason?

Lastly, where Trump specifically comes in is international relations. As Blair observes, false alarms are relatively rare; the far more likely scenario in which nuclear weapons may come into play is as the result of the escalation of a drawn-out confrontation with another nuclear power. President Trump has certainly made statements in the past that may agitate foreign powers and increase the likelihood of a conflict; at the same time, U.S./Russia relations have almost undoubtedly improved since the election, decreasing the chance of a nuclear conflict there. Moving forward, at least until nuclear disarmament becomes something that is seriously considered, I believe the best the American people can do is take the state of U.S. international relations seriously and demand accountability from our elected leaders. After all, the best way to avoid having our president make the wrong choice is to keep them from ever having to make it. — Ben

Is a Nuclear Warhead Sometimes Just a Nuclear Warhead?

From Cohn’s experience within a setting of “defense intellectuals,” it seems not. Instead, nuclear stockpiles are the recipient of significant phallic symbolism, valued by their proprietors as a source of vicarious strength. Both the quantity of weapons and their respective yields combine to provide substantial psychological benefits that are perhaps as great as the actual military advantages.

And yet, while Cohn makes numerous references to government memos, official weapons reports, and general deterrence rationale to reveal these sexual underpinnings, this feature is not the crux of her thesis. Cohn is instead more focused on linguistic issues as a whole, of which the sexual element is only one part. While the roots of the language are important, in Cohn’s eyes they are less significant than the potential consequences of the resulting jargon. The terminology surrounding nuclear weapons is abstract and impersonal. If someone without any background on the topic were to read through the official vocabulary, the imagery he or she would construct would fall far short of what actually follows a detonation. Returning to NUKEMAP, for example, the weapon choice options consist of names such as “Little Boy”, “Gadget”, “Ivy Mike”, and “Castle Bravo”. None is even remotely descriptive of the ensuing destruction.

So, the question becomes, is linguistic downplay itself a contributing factor to the persistence of nuclear weapons? The argument makes a great deal of sense. After all, how can one not become more comfortable with these weapons when they are discussed in the language of “clean bombs” and “collateral damage”?

Merging Cohn’s analysis with observations from the other readings makes these linguistic elements all the more significant and potentially worrisome. Consider, for example, Politico’s depiction of the chain of command behind the issuing of a nuclear launch order. The degree to which this power is concentrated is remarkable. It seems that at essentially any point in the day, the president needs only to notify his military aide that he wishes to make use of the nuclear suitcase, and the rest would be history. Even if the aide (or any of the subsequent officials involved) wished to intervene, they would have very little grounds on which to do so.

But Cohn’s experience makes this information all the more concerning. First, one should appreciate that Cohn draws her observations from a group of individuals who all have some academic background with nuclear weapons. In other words, even though the jargon is dominated by more benign descriptions, those who are employing it are also aware of the more explicit realities.

The same cannot be said regarding President Trump. Consider, for example, if during his nuclear briefing on the day of his inauguration, he was instructed only in the milder collection of acronyms and terms. Whereas the experts would be certain to have a background in the gorier elements of destruction, the president may not. The impact of the linguistic elements therefore becomes more severe given the lack of formal background to cushion the abstract jargon.

Second, and not to make a joke of the matter, Cohn’s analysis may be particularly applicable to our current president. Though we see in Trump’s own words that he is staunchly opposed to nuclear weapon use, the personality he exhibits elsewhere makes nuclear weapons a particularly frightening realm. If the nuclear arsenal is the greatest phallic feature of them all, how would Trump handle a challenge to American nuclear capabilities? Coupling the masculine, ritual facet of the weapons with a comforting abstract lingo makes the fact that this power resides in the hands of our president a bit terrifying. — Michael

When Time Is Running Out

In a November 2016 letter to the President, the President’s Council of Advisors on Science and Technology (PCAST) offers recommendations to the U.S. government on responding to the growing field of advanced biotechnology. While the council emphasizes the need for increasingly developed biotechnology and biosurveillance strategies, PCAST also hints at a more somber truth: once a threatening pathogen is on the loose, there isn’t much the government can do.

While the letter establishes recommended measures for dealing with biotechnology and the prospect of an active bioattack, its real emphasis is on prevention. As PCAST observes, “it is possible that a well-planned, well-executed attack might go unnoticed for days or weeks.” With a U.S. population of 318.9 million spread across 3.797 million square miles, a brewing bioattack is likely to go unnoticed in its vulnerable early stages, making the detection of a pre-epidemic strain extremely difficult.

Further, the council emphasizes that the U.S.’s chances of escaping a bioattack depend on “effective detection,” “response,” and “recovery capabilities.” If a bioattack has the capability of reaching the level of an epidemic (R₀ > 1), it will likely have the capability of spreading faster than eradication measures instituted by the government can catch up. PCAST makes the harrowing statement, “Despite recent improvements, analysis by U.S. Government agencies confirms that the pace of vaccine development and deployment remains too slow to materially affect the outcome of most plausible attacks.” According to PCAST, once a bioattack is out there, it is very difficult, if not impossible, to reel back in. Because of the severe ramifications of a bioattack on the loose and the lack of prompt eradication capability, PCAST highlights the need for “enhanced threat awareness” and “deterrence.”
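The R₀ > 1 epidemic threshold can be illustrated with a toy generation-by-generation branching model. This is a minimal sketch of the standard textbook idea, not anything from the PCAST letter, and the parameter values are invented for illustration:

```python
# Toy branching model: each infected person infects R0 others on
# average, so cases per transmission generation grow as R0**g.
# R0 values below are illustrative, not figures from PCAST.

def cases_per_generation(r0: float, generations: int) -> list:
    """New cases in each generation, starting from a single case."""
    return [r0 ** g for g in range(generations + 1)]

for r0 in (0.8, 1.0, 2.0):
    cases = cases_per_generation(r0, 10)
    print(f"R0 = {r0}: cases in 10th generation = {cases[-1]:.2f}")
```

Below the threshold (R₀ < 1) each generation shrinks and the outbreak dies out on its own; above it, case counts grow geometrically, which is why PCAST stresses catching an attack before that growth outpaces any vaccine or countermeasure timeline.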

This introduces a tough dilemma: though prevention is the government’s strongest defensive measure, the thought of a raging bioattack is a frightening prospect for citizens and politicians alike. Consequently, PCAST still issues a long-term recommendation for the development of a countermeasures program. The question PCAST faces is: how should limited government resources best be allocated when facing a faceless enemy? How much priority should be given to “recovery capabilities” rather than prevention? Perhaps rather little. — Katherine