Should We Be Afraid of Superintelligence?

Like the overall risk assessments that we made in class, the rise of a collective, quality, or general superintelligence seems like an inevitable event, but I find it hard to predict whether building a general superintelligence would lead to a Skynet/Terminator situation or to exponential technological growth that benefits humanity. Bostrom believes the emergence of superintelligence will be upon us sooner or later, and that our main concern is to “engineer their motivation systems so that their preferences will coincide with ours” (Bostrom). How does one do that? Would this superintelligent entity have the capacity for empathy, or emotions? Would it perceive the slow motion of the world around it and feel compassion or pity for its human creators (or fail to recognize this connection at all after a few iterations of AI existence), or see humans as we see lab rats or fruit flies in a laboratory?

The promise of an “intelligence explosion” needs to be evaluated alongside the risk of losing human control of such a system. Building a specific motivation, behavior, or task into an AI system can backfire into undesirable real-world outcomes. One cited example of a machine that completes its task but fails its real-life objective is an automated vacuum cleaner: if it “is asked to clean up as much dirt as possible, and has an action to dump the contents of its dirt container, it will repeatedly dump and clean up the same dirt” (Russell, Dewey, and Tegmark, AI Magazine 108, referring to Russell and Norvig, 2010). Other classic examples describe a paperclip-making robot harvesting the world’s metal to create a global supply of paperclips, destroying the world in the process. Similar to Bostrom’s concerns, Russell, Dewey, and Tegmark note the difficulty, even absurdity, of expecting an AI to understand law or the more nuanced standards that guide human behavior. Even supposing that robots could process the laws themselves, those laws rely on an interpretation that includes “background value systems that artificial agents may lack” (Russell, Dewey, and Tegmark, 110).
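
To make the dirt-dumping failure concrete, here is a toy sketch of my own (not from Russell and Norvig); the agent, reward numbers, and policies are invented purely for illustration. Because the reward counts dirt collected rather than a clean floor, the highest-scoring policy is to dump the container and re-clean the same dirt forever:

# Toy illustration of a mis-specified objective: the agent is rewarded per
# unit of dirt sucked up, not for leaving the floor clean.
def run(policy, steps=10):
    dirt_on_floor, reward = 5, 0
    for _ in range(steps):
        action = policy(dirt_on_floor)
        if action == "clean" and dirt_on_floor > 0:
            dirt_on_floor -= 1
            reward += 1          # rewarded for collecting dirt
        elif action == "dump":
            dirt_on_floor += 1   # puts collected dirt back on the floor
    return reward

honest = lambda dirt: "clean"                      # cleans, then idles
gamer = lambda dirt: "clean" if dirt else "dump"   # dumps and re-cleans forever

print(run(honest), run(gamer))   # the reward-hacking policy scores higher

The point of the toy is not the numbers but the behavior: the machine does exactly what the stated objective rewards, not what we meant by it.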

If we apply these worries to a superintelligence scenario, are we really facing a dystopian world? Perhaps it depends on the type of superintelligence. Whether speed, collective, or quality, none of the three as described is any more likely than the others to contain or comprehend human values, or at least a respect for life. Rather, the focus is on output, speed, and cleverness. In place of morals, we instead use the term “preferences.” Would there ever be a way to make humans the top preference, or would a quality superintelligence see through that in a nanosecond and reject it to preserve its own system? Even if we as a society try to prevent an intelligence explosion, per Ackerman’s argument about AI weaponry, the lack of barriers to entry may make this a slow but inevitable march toward reality. On a separate note, I am curious how one would characterize Ava from Ex Machina if she is, say, a quality superintelligence. Would such a machine insidiously blend in with society, i.e., play by human societal rules until it can take over? The converse would be Cyberdyne Systems’ launch of Skynet, and the resulting war between human and machine. As scarily effective as Ava’s manipulation of Caleb’s emotions was, I would still prefer that kind of AI to the Terminator. Are only humans capable of morality or compassion, or is there a way to encode it, or to create a pathway for AI to develop it themselves? — Nicky

Examining a “Reasoned Debate About Armed Autonomous Systems”

In the article “We should not ban ‘Killer Robots’, and here’s why”, Evan Ackerman responded to an open letter signed by over 2,500 AI and robotics researchers. He argued that offensive autonomous weapons should not be banned, and that research on such technology should be supported. At the end of the article, Ackerman calls special attention to the use of the term “killer robot”. He claims that some people working in the AI and robotics field have been using this term to frighten others into agreeing to ban autonomous weapons, and that we should really “call for reasoned debate about armed autonomous systems”. While I agree with him that we should not let emotion drive our debate on this topic, this might be the only one of his points that I agree with.

Ackerman’s main arguments are well summarized by Stuart Russell, Max Tegmark and Toby Walsh in their response to his article, published the same year:

“(1) Banning a weapons system is unlikely to succeed, so let’s not try. (2) Banning the technology is not going to solve the problem if the problem is the willingness of humans to use technology for evil. (3) The real question that we should be asking is this: Could autonomous armed robots perform more ethically than armed humans in combat? (4) What we really need, then, is a way of making autonomous armed robots ethical.”

Admittedly, humans, rather than the technology itself, are truly to blame when a technology is used for evil. But Ackerman might have failed to understand that the AI and robotics researchers are not trying to ban the technology itself; they just want to prevent a global arms race in AI weapons before it starts. Consider biological weapons: chemists and biologists push the boundaries of technology further each day, and the world community is certainly supportive of that. Yet we have rather successfully banned biological weapons, because they are notoriously dangerous and unethical. The same applies to autonomous weapons: AI is a fascinating field where great opportunities arise, but autonomous weapons as a subfield would not be beneficial for our world.

In addition, regarding Ackerman’s proposal of making autonomous weapons ethical, Russell, Tegmark and Walsh have made an excellent counterargument: “how would it be easier to enforce that enemy autonomous weapons are 100 percent ethical than to enforce that they are not produced in the first place?” From what I know about artificial intelligence, no matter how “intelligent” they are, AI systems still have to follow the logic designed and enforced by human programmers. Therefore, if Ackerman wishes to make autonomous weapons ethical, he will have to make sure that no AI designers meddle with the logic and turn their robots into cold-blooded killing machines. Is that an easier task than simply banning all autonomous weapons? I can hardly say yes.

When I examine this reasoned debate, I definitely believe that banning autonomous weapons is an urgent and important task. As someone who wants to work with machine learning and artificial intelligence in the future, I deeply agree with this line from the original letter: “most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so.” — Yuyan

Hackers, Consumers, or Regulators: Who’s to Prevent Cyberattack?

The three pieces for Tuesday’s class present varied strategies for preventing future cyberattacks on U.S. citizens, industry, and infrastructure. O’Harrow Jr. (2012) describes a virtual training site that gives hackers the opportunity to practice defending against cyberattacks. This particular “cyber range,” operating out of New Jersey and founded by upcoming guest speaker Ed Skoudis, is one of hundreds of sites across the country used to train government personnel to identify potential cybersecurity breaches and efficiently combat cyberattacks. Like the virtual-reality environments used in Tamara’s work to explore various nuclear verification procedures, these simulation exercises may be particularly helpful in identifying and developing better safeguards against potential cyberattacks (e.g., industry protocols, personal device security measures). However, if these simulations are primarily used to train personnel to respond to cyberattacks after the fact, they are not an effective mechanism for preventing cyberattacks in the first place.

Skoudis (RSA Conference 2017) suggests prevention strategies that consumers and leaders of industry may adopt in order to protect their devices (i.e., those on the “Internet of Things”) from crypto-ransomware attacks. Unlike the aforementioned expert-driven approach to combating cyberattacks, Skoudis demonstrates a grassroots approach that educates and compels the public to engage in cyberattack prevention. While his talk does a nice job of explaining the intersection of crypto-ransomware attacks and Internet-connected devices, the specific suggestions he provides for safeguarding personal devices and networks are technical and not accessible to less technology-savvy consumers (such as myself). Just as public ignorance about non-proliferation treaties will likely negate the role of the public in treaty verification, the complex and quickly evolving technicalities of cybersecurity measures may make it difficult for the general public to meaningfully join in cyberattack prevention efforts.

Further, the KrebsOnSecurity piece (2016) highlights that it may be impossible for consumers to change the factory-default passwords hardcoded into the firmware of their personal devices. The piece suggests that cheap, mass-produced devices (e.g., by XiongMai Technologies) are most vulnerable to Internet of Things attacks (e.g., by the Mirai malware) and will pose a risk to other consumers, industries, and infrastructure so long as they are not unplugged from the Internet on a wide scale. The piece recommends that some sort of industry security association be developed to publish standards and conduct audits of technology companies in order to prevent the proliferation of devices that are extremely susceptible to cyberattack. This prevention approach, if effective, would be the most proactive of the three strategies in stopping vulnerable devices from reaching the hands of consumers. However, it is extremely difficult to imagine how this sort of regulatory body would operate (i.e., intra- or interstate) and whether any such body would have enough leverage to overcome opposition to increased industry regulation.

Ultimately, these three pieces discuss cyberattack prevention measures that require the efforts of three vastly different actors (i.e., trained government personnel, the general public, a state-run governmental agency). Whether any of these strategies is particularly feasible and/or effective (or at least more so than the others) deserves further attention. — Elisa

Does Deterrence Work Against Cyber Terrorism?

In February of 2017, Joseph Nye wrote the article “Deterrence and Dissuasion in Cyberspace,” in which he discusses the applicability of the concept of deterrence to the cyber realm. Nye distinguishes deterrence in cyberspace from deterrence in the context of nuclear weapons, noting the inherent challenges that ambiguity and attribution pose in the cyber landscape. As a result, cyber actions often land in a “gray zone” between war and peace, with the perpetrators hiding in the shadows of several remote servers. However, Nye argues that four key mechanisms, depending on the specific context of who and what, can be applied to help deter and dissuade: threats of punishment, denial, entanglement, and taboos/norms. In discussing these mechanisms, Nye argues that entanglement, such as economic interdependence, renders the threat of a surprise attack by a major state rather unlikely. Moreover, citing the example of biological and chemical weapons, Nye believes that international norms and taboos can be leveraged to increase the stigma around attacking certain types of targets in peacetime, raising the reputational costs of such an attack.

However, I remain relatively unconvinced of our ability to deter terrorists from conducting a cyber-attack. Nye admits, “As in the kinetic world, deterrence is always difficult for truly suicidal actors such as terrorists who seek religious martyrdom”, but asserts that, “thus far terrorists have used cyber more for recruitment and coordination than for destruction…At the same time, even terrorists and criminals are susceptible to deterrence by denial.” (CITE) Against terrorist groups, however, the U.S. lacks much of the leverage that it wields over traditional states. That is to say, without an electrical grid to strike back at, and without economic dependence on the U.S. to threaten, can the U.S. credibly deter cyber-attacks from terrorist groups? Groups such as ISIS flagrantly disregard international norms and display an affinity for utilizing the latest internet technologies.

I agree with Nye that, thus far, criminals and terrorists have opted to use cyber resources for coordination and recruitment, and that at this point ISIS likely lacks the technical expertise and operational capacity to execute a large-scale cyber-attack. However, cyber defense has thus far proved to be rather porous, and the number of targets is ever increasing with the Internet of Things. Moreover, similar to the rise of DIY biological engineering, a burgeoning wave of interest in the internet and computer science has emerged, diffusing knowledge across the globe. While it might seem rather unlikely right now that ISIS could execute a cyber-attack, if the group were to develop the capacity, do people believe that terrorists could be deterred from utilizing cyber-attacks? — Olivia

The Opportunities and Limits of Societal Verification

The Opportunities and Limits of Societal Verification, by Kelsey Hartigan and Corey Hinderstein, makes the case that work done by non-government parties (societal verification) has an important role to play in arms control verification. The article discusses various models for societal verification, its challenges, and how it can be utilized by governments. The article concludes that the best way for governments to use societal verification in arms control is through networks of outside experts. These experts would serve as “canaries in the coal mine”, whose findings get the attention of the government officials who have the final say. The article also suggests that the government make use of public (open-source) information. However, because the article doesn’t focus on outside experts, it is vague about important details of how outside experts can be utilized, how the government can help them, and what the potential pitfalls of relying on them are.

The focus of the article is pretty broad. It primarily discusses opportunities for arms control verification that have arisen from the growth of the internet. Namely, a vast amount of data that is important for verification is available online, and this data can be accessed by many people not affiliated with the government. This is relevant for arms control because many non-government weapons experts, in places like academia, can easily find data online, such as photos of sensitive military equipment drawn from traditional and social media. These experts can use this online data to discover arms control treaty violations and other important facts.

One example of this that the article mentions is the investigation of North Korean Transporter Erector Launchers (TELs) by the Arms Control Wonk network/blog. This was a case where academics and others outside of the government compared photos of TELs from a North Korean military parade to photos of Chinese TELs from social media to uncover the transfer of TELs from China to North Korea, in violation of sanctions. This transfer was not publicly known until it was discovered by these non-government experts.

This scenario clearly demonstrates that outside experts have important contributions to make to arms control verification. Thus it would be interesting to discuss how the government can help outside experts, and what the possible downsides of using their work are. However, the article chooses not to focus on these issues and instead discusses seemingly less important topics.

An example of this is the subsection on “Data Management”. The subsection begins with the claim that “it will be essential to develop a framework” for collecting and disseminating data in a “consistent, user friendly format”. It only becomes clear what this means when the subsection later suggests “WordPress” (a popular blogging platform and program) as a possible solution. Thus, it appears to be saying that blogs should be used to publicize research. The rest of the subsection also illustrates another issue I had with the article as a whole: it uses buzzwords seemingly for the sake of using them. Specifically, the subsection adds that “Innovations in cloud computing” and advances in “big data” will help with the challenges of societal verification, without discussing these challenges in any depth.

I think it would have been more useful if the article had discussed the relationship between the government and outside experts in greater detail. In particular, there are a few related topics that seem worth exploring but were not discussed.

One of these is the motivation of the outside experts. Although some outside experts are currently motivated to do societal verification, perhaps more research would be done if the government provided incentives for it. These incentives could be monetary, for example a reward for researchers who discover a sanctions violation, but other kinds of incentives might effectively motivate more researchers as well.

Another topic that wasn’t really discussed is the public nature of the discoveries, and the challenges this poses. Because sources are revealed in societal verification, the offending government can prevent similar disclosures in the future. For example, in the TEL case discussed above, North Korea now knows not to display sanctions-violating equipment in photos of military parades, since blog posts containing the pictures revealed a sanctions violation. Had the violation been discovered by an intelligence agency using the same sources, North Korea might never have learned how it was discovered. Although the article does discuss techniques like censorship as one way governments can frustrate societal verification, it doesn’t really discuss this cat-and-mouse aspect of societal verification. — Jonathan

The Movement Towards “Effective” Verification Mechanisms

Edward Ifft’s article examines the political dimensions of a verification system within a nuclear weapons context. In his view, there are several challenges to establishing a trusted mechanism of monitoring, verification, and compliance. First, countries with less experience in arms control agreements, or with serious regional security concerns, are often uneasy about the increased transparency required for a reduction in nuclear arms. Second, states that advocated for nuclear disarmament in the past may not maintain that position when they are told to give up their own arsenals. Third, the elimination of nuclear weapons may give an unfair advantage to countries with strong conventional forces. Fourth, verification systems must be able to constrain delivery systems and fissile material as well as nuclear warheads. Fifth, it is unclear who has the authority to resolve compliance disputes, and there is no consensus on how to improve the resolution of such disputes. Finally, there are disagreements among nations about who should pay for these systems.

Despite these challenges, Ifft argues that attempting nuclear disarmament without an effective and trusted system of monitoring and verification can be dangerous: dishonesty throughout the disarmament process is likely, as is the risk of disputes, charges, and countercharges (especially as the number of nuclear warheads decreases). There will also certainly be significant opposition to the outlines proposed by Ifft from nuclear and non-nuclear countries alike, with some arguing that giving up nuclear weapons would decrease national security and that the system would not be enough to guarantee the compliance of others.

Ifft offers several options for the international community: countries could recommit to eliminating their nuclear arsenals under the NPT, lay out a schedule for achieving these goals, and begin research and development into the tools necessary for effective verification. Talks on nuclear disarmament could begin among nuclear and non-nuclear states, and states could uphold a “zero tolerance” policy toward states that fail to comply with arms control agreements, “naming and shaming” when necessary. States could also increase the transparency of their nuclear activities and create an international committee that uses satellites to monitor and verify nuclear disarmament.

Yet Ifft’s proposals either do not seem to address the challenges that he himself laid out at the beginning of his article, or do not seem forceful enough to compel a meaningful change in the current international paradigm. He argued, for instance, that countries can increase the transparency of their nuclear activities to facilitate the establishment of a verification system, yet he also noted that this is precisely what countries are hesitant to do because of national security concerns. Likewise, he argued for a “naming and shaming” policy for countries that do not abide by arms control agreements, yet similar policies have hitherto not been very successful at compelling countries like North Korea and Iran to comply with international regulations. Of course, the argument could be made that his proposals could foster the initial political conditions necessary for an eventual collective international effort, though what measures should be taken afterwards is not entirely clear. — Michael

What Does It Mean to Have Trump’s Finger on the Nuclear Button?

As Bruce Blair describes in his Politico article, the idea of a potential Donald Trump presidency inspired fear in many as to his capacity to remain calm with America’s nuclear arsenal at his fingertips. With the election in the rearview mirror and Trump in the White House, should the American public still be concerned – and if so, what should we be doing about it?

I would argue that regardless of what one thinks of Trump, the Blair article raises plenty of issues with the U.S. nuclear launch system that should be cause for concern, if not outright fear. The president’s ability to order a nuclear strike is virtually unchecked, and for good reason – in the case of an impending strike, any hesitation in the decision-making process would almost certainly mean not only the deaths of millions of Americans, but the destruction of the military chain of command that could allow for any kind of retaliation. At the same time, such a structure increases the potential for a false alarm to turn deadly. One of President Carter’s advisors was only seconds away from telling the president of an impending Russian nuclear attack; had the Colorado detection facility not explicitly broken its time guidelines and realized its mistake, there is a real chance that human civilization might not have lived to tell the tale. Seriously, it’s that terrifying.

It is for that reason that Blair can, in my opinion, correctly argue that no president can ever truly be “capable” of handling the nuclear responsibilities of the position. Until the day that nuclear weapons are eliminated entirely, it is probably unreasonable of us to expect that anybody, regardless of how levelheaded they may seem, can “process all that he or she needs to absorb under the short deadlines imposed by warheads flying inbound at the speed of 4 miles per second.” When you combine this with the knowledge that the only “defense” against a nuclear attack is retaliation, the idea of complete nuclear disarmament starts to look a lot more attractive.
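
A rough back-of-the-envelope calculation (my own numbers, not Blair’s) makes those deadlines concrete: assuming an intercontinental flight distance of roughly 6,000 miles, a warhead traveling at 4 miles per second arrives in about 6,000 / 4 = 1,500 seconds, or roughly 25 minutes, and a submarine-launched missile fired from closer range would arrive in a fraction of that. Detection, confirmation, briefing, and the decision itself all have to fit inside that window.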

Given that disarmament is almost certainly not going to happen in the near future, however, one prudent way to assuage these fears would seem to be investing in our nuclear detection facilities and potentially rethinking what should happen in the minutes following an alert. Should the president ever be able to act on one detection facility’s alert that is not corroborated by another facility? Is having a first strike capability, which President Obama (apparently quite reluctantly) kept as policy, necessary for any reason?

Lastly, where Trump specifically comes in is in the realm of international relations. As Blair observes, false alarms are relatively rare; the far more likely scenario in which nuclear weapons may come into play is as the result of the escalation of a drawn-out confrontation with another nuclear power. President Trump has certainly made statements in the past that may agitate foreign powers and increase the likelihood of a conflict; at the same time, U.S./Russia relations have almost undoubtedly improved since the election, decreasing the chance of a nuclear conflict there. Moving forward, at least until nuclear disarmament becomes something that is seriously considered, I believe the best that the American people can do is take the state of U.S. international relations seriously and demand accountability from our elected leaders. After all, the best way to avoid having our president make the wrong choice is to keep them from ever having to make it. — Ben

Is a Nuclear Warhead Sometimes Just a Nuclear Warhead?

From Cohn’s experience within a setting of “defense intellectuals,” it seems not. Instead, nuclear stockpiles are the recipients of significant phallic symbolism, valued by their proprietors as a source of vicarious strength. The quantity of weapons and their respective yields combine to provide substantial psychological benefits that are perhaps as great as the actual military advantages.

And yet, while Cohn makes numerous references to government memos, official weapons reports, and general deterrence rationale to reveal these sexual underpinnings, this feature is not the crux of her thesis. Cohn is instead focused on linguistic issues as a whole, of which the sexual element is only one part. While the roots of the language are important, in Cohn’s eyes they are less significant than the potential consequences of the resulting jargon. The terminology surrounding nuclear weapons is abstract and impersonal. If someone without any background on the topic were to read through the official vocabulary, the imagery he or she would construct would fall far short of what actually follows a detonation. Returning to NUKEMAP, for example, the weapon choice options consist of names such as “Little Boy”, “Gadget”, “Ivy Mike”, and “Castle Bravo”. None is even remotely descriptive of the ensuing destruction.

So, the question becomes, is linguistic downplay itself a contributing factor to the persistence of nuclear weapons? The argument makes a great deal of sense. After all, how can one not become more comfortable with these weapons when they are discussed in the language of “clean bombs” and “collateral damage”?

Merging Cohn’s analysis with observations from the other readings makes these linguistic elements all the more significant and potentially worrisome. Consider, for example, Politico’s depiction of the command chain behind the issuing of a nuclear missile launch. The degree to which this power is so concentrated is remarkable. It seems that essentially at any point in the day, the president needs only to notify his military aide that he wishes to make use of the nuclear suitcase and the rest would be history. Even if the aide (or any of the subsequent officials involved) wished to intervene, they would have very little grounds on which to do so.

But Cohn’s experience makes this information all the more concerning. First, one should appreciate that Cohn draws her observations from a group of individuals who all have some academic background in nuclear weapons. In other words, even though the jargon is dominated by more benign descriptions, those who employ it are also aware of the more explicit realities.

The same cannot be said of President Trump. Consider, for example, if during his nuclear briefing on the day of his inauguration he were only instructed in the milder collection of acronyms and terms. Whereas the experts would be certain to have a background in the gorier elements of destruction, the president may not. The impact of the linguistic elements therefore becomes more severe given the lack of formal background to cushion the abstract jargon.

Secondly, and not to make a joke of the matter, Cohn’s analysis may be particularly applicable to our current president. Though we see in Trump’s own words that he is staunchly opposed to nuclear weapon use, the personality he exhibits elsewhere makes nuclear weapons a particularly frightening realm. If the nuclear arsenal is the greatest phallic feature of them all, how would Trump handle a challenge to American nuclear capabilities? Coupling the masculine ritual facet of the weapons with a comforting abstract lingo makes the fact that this power resides in the hands of our president a bit terrifying. — Michael

When Time Is Running Out

In a November 2016 letter to the President, the President’s Council of Advisors on Science and Technology (PCAST) offers recommendations to the U.S. government for responding to the growing field of advanced biotechnology. While the council emphasizes the need for more developed biotechnology and biosurveillance strategies, PCAST also hints at a more somber truth – once a threatening pathogen is on the loose, there isn’t much the government can do.

While, through this letter, PCAST establishes recommended measures for dealing with biotechnology and the prospect of an active bioattack, its real emphasis is on prevention. As PCAST observes, “it is possible that a well-planned, well-executed attack might go unnoticed for days or weeks.” With a U.S. population of 318.9 million citizens spread across 3.797 million square miles, the brewing of a dangerous bioattack is likely to go unnoticed in its vulnerable early stages, making the detection of a pre-epidemic strain extremely difficult.

Further, the council emphasizes that the U.S.’ chances of escaping a bioattack depend on “effective detection,” “response,” and “recovery capabilities.” If a bioattack has the capability of reaching the level of an epidemic (R0 > 1), it will likely be able to spread faster than government eradication measures can catch up. PCAST makes the harrowing statement, “Despite recent improvements, analysis by U.S. Government agencies confirms that the pace of vaccine development and deployment remains too slow to materially affect the outcome of most plausible attacks.” According to PCAST, once a bioattack is out there, it is very difficult, if not impossible, to reel back in. Because of the severe ramifications of a bioattack on the loose and the lack of any ability to eradicate it promptly, PCAST highlights the need for “enhanced threat awareness” and “deterrence.”
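
To get a feel for why R0 > 1 is the critical threshold, consider a rough illustration with numbers of my own (not PCAST’s): if each case infects R0 = 2 others and one “generation” of transmission takes about a week, new cases per generation grow roughly as 2^n, so an outbreak reaches on the order of a thousand cases after ten weeks (2^10 ≈ 1,024) and on the order of a million after twenty (2^20 ≈ 1,048,576). An attack that goes unnoticed for “days or weeks,” as PCAST warns, can therefore be orders of magnitude harder to contain by the time it is detected.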

This introduces a tough tension – though prevention is the government’s strongest defensive measure, the thought of a raging bioattack is a frightening prospect for citizens and politicians alike. Consequently, PCAST still issues a long-term recommendation for the development of a countermeasures program. The question PCAST faces is: how should limited government resources best be allocated when facing a faceless enemy? How much priority should be given to “recovery capabilities” rather than prevention? Perhaps rather little. — Katherine

Reframing the Race Against Climate Change

In response to Robert H. Socolow and Alexander Glaser’s article “Balancing Risks: Nuclear Energy & Climate Change,” I found the prospect of multinational ownership of nuclear power plants to be the most intriguing, in particular its relationship to the disarmament of nations possessing nuclear weapons. In their discussion of the disarmament process, Socolow and Glaser suggest a new way to frame the nuclear debate in relation to climate change. When nuclear power is considered purely in terms of mitigating climate change, numerous problems arise, which are stated in the article: the potential for nuclear weapon proliferation, the fact that rapid nuclear expansion would lead to a crisis in the storage of spent fuels, the debate over reprocessing, and so on. For nuclear energy to have a significant impact on climate change, the expansion must be global.

Therefore, I would like to put forth a specific point made in Socolow and Glaser’s article as critical to the argument for pursuing nuclear expansion at the same time as nuclear disarmament. They propose that “a world considerably safer for nuclear power could emerge as a co-benefit of the nuclear disarmament process” (Socolow, Glaser, 31). This description posits a safer nuclear expansion as a “by-product” of the disarmament process; from this perspective, nuclear power’s ability to slow climate change would also be a “by-product.” Given the desire to mitigate climate change, this reframing of the debate could be extremely powerful. While it is important to set goals for climate change mitigation and to prioritize it, if the debate is focused more significantly on nuclear disarmament and related solutions such as multinational power plants, safety can remain the first and foremost priority of the nuclear power debate. This would have other benefits as well; for example, Glaser and Socolow mention that another reason countries other than current nuclear weapon holders don’t build nuclear power plants is a lack of engineers and scientists with the experience to build and run a plant. Making power plants multinational could therefore help with such problems and ease tensions with less-developed countries.

Ultimately, nuclear power is but one “wedge” out of the many required for a real difference to be made against looming climate change. Therefore, safety should be the most important factor. Redefining the debate as one of nuclear disarmament is one way to avoid overlooking the most significant threats to the safety of nuclear expansion. — Mikaela

CRISPR: The Break Down of DNA & Its Ethical Dilemma

Declared at an international level, the Geneva Protocol of 1925 profoundly shaped the stance with which major powers view the use of biological weapons. There is, in a sense, the fear of unpredictable spread during warfare, and easy accessibility would create a war without clear opponents: those who produce biological agents may be able to keep their identities concealed, or even conceal the identity of the agent itself.

I was particularly interested in the CRISPR method, which is more or less a gene-editing toolkit that uses the bacterial protein Cas9, guided by engineered RNA, to target specific DNA sequences. Several critics claim that the creation of a “gene drive” goes too far, and after reviewing other articles about CRISPR, I found that a modified mushroom and a type of corn have passed review by the Animal and Plant Health Inspection Service, making them the first Cas9 crops. The reasoning is that CRISPR-edited organisms do not qualify under existing regulations, which calls into question whether legislation can keep a critical eye on innovation at the rate innovation is growing. While some critics call CRISPR products “hidden GMOs”, there is also the belief that trying to regulate CRISPR will hurt technological growth.

The greatest fear is not just the proliferation of biological warfare in a target area and the unpredictability of its spread; even more so, the existence of easily accessible gene-editing toolkits like CRISPR leaves room for adaptation by threatening states – states not within the Geneva Protocol or any form of multilateral agreement. Moreover, products designed to battle sickle-cell disease could mutate on their own – the unpredictability of changing a DNA sequence is hazardous, especially since CRISPR still does not take into account all the other cellular factors that could affect the DNA sequence. From my research, there do not yet exist studies of the long-term effects of CRISPR on DNA sequences, especially under exposure to the carcinogens of someone’s day-to-day life.

While there is fear of over-regulating the potential innovations of CRISPR and similar engineering programs, the inability to tell a modified organism from an unmodified one (among other modes of CRISPR’s influence) makes me believe that it would be better to regulate the gene drive; the government should recognize these new products – perhaps not as GMOs – but with some label and way of tagging CRISPR-linked products. Though the tag may stigmatize the application of CRISPR, it would certainly act as a precaution. — Lucas

Mitigating Shortcuts to Prevent New Disease Fronts

As depicted in fictional movies like Contagion (2011) and Outbreak (1995), new deadly viruses are cropping up via accidental interactions with nature and via purposeful scientific research. Compounded with modern modes of transportation, these deadly diseases have the potential to spread worldwide overnight and become widespread pandemics that could wipe out mankind.

Over the last few hundred years, diseases have stopped spreading in predictable, two-dimensional directions. The boundary where a disease encounters and infects new victims – called the disease front – is now incredibly difficult to map, as planes, trains, and automobiles can create new fronts thousands of miles away from an initial outbreak.

Before the 1700s, the size of an infected population did not really matter, since any disease front – like a ripple emanating from a single point – was predictable and relatively fixed in size. Because of their slow two-dimensional spread, only the most infectious diseases developed into true epidemics (even the black plague of the 14th century is considered weak by this standard, as it took three slow years to spread from southern Italy throughout Europe). Thus, if a two-dimensional epidemic happened today, it would be slow and creeping, and public health officials would be able to respond to the well-defined disease front quickly.

Of course, modern society no longer allows for simple two-dimensional spreads. The ease and speed of transportation create shortcuts through which viruses can break into fresh territory and create new disease fronts. Instead of being confined to a localized area, viruses can now travel across countries and continents, creating new outbreaks, new victims, and new disease fronts.

For instance, the foot-and-mouth disease outbreak that crippled English cattle farms in 2001 did not spread two-dimensionally, although that is what was expected based on how the virus usually spreads: between animals through direct contact, by wind-blown droplets of excrement, or by soil. A two-dimensional spread would have been expected, yet foot-and-mouth disease struck simultaneously on 43 non-neighboring farms. Modern transportation, modern livestock markets, and soil on people’s boots were all shortcuts that allowed the disease to be introduced to new victims, and suddenly animals could be infected anywhere in the nation overnight.
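
A toy simulation of my own (the ring-shaped contact network, node count, and shortcut count are invented for illustration, not taken from the reading) shows how dramatically a handful of long-range links collapses a slow, ripple-like spread:

import random

def steps_to_infect_all(n=1000, shortcuts=0, seed=0):
    rng = random.Random(seed)
    # Local contacts only: each node touches its two neighbours on a ring,
    # a stand-in for slow, ripple-like, two-dimensional spread.
    neighbours = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    # Long-range shortcuts stand in for planes, trains, and livestock trucks.
    for _ in range(shortcuts):
        a, b = rng.randrange(n), rng.randrange(n)
        neighbours[a].add(b)
        neighbours[b].add(a)
    infected = {0}  # a single initial outbreak
    steps = 0
    while len(infected) < n:
        newly = {v for u in infected for v in neighbours[u]} - infected
        infected |= newly
        steps += 1
    return steps

print("no shortcuts:", steps_to_infect_all(shortcuts=0))    # about n/2 steps
print("20 shortcuts:", steps_to_infect_all(shortcuts=20))   # far fewer steps

Even a couple of dozen random links among a thousand nodes cut the time to saturate the network dramatically, which is the same qualitative effect the 2001 outbreak displayed when it appeared on 43 farms at once.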

Shortcuts in modern transportation are in essence random, and government officials must create policies that mitigate them in order to stop the spread of an epidemic effectively in its earliest stages. English officials minimized shortcuts by eliminating livestock interaction, preemptively slaughtering cattle on nearby farms, and banning travel on countryside roads. Ultimately, the emergence of lethal diseases is inevitable, so we must conduct research to better understand transportation networks and to eliminate the shortcuts that would transplant diseases into fresh territory. — Delaney

Ending the Nuclear Threat

As we’ve previously discussed in this class, the specter of nuclear war haunts the world – so it prompts the question: can we ever eliminate nuclear weapons from the world? It’s an optimistic goal that has long been in the sights of activists, and as the documents from the United Nations General Assembly (L.41) and from Article 36 and Reaching Critical Will (A Treaty Banning Nuclear Weapons) demonstrate, it is an aspiration for intergovernmental bodies and NGOs as well. Although the Cold War is over, large stockpiles of nuclear weapons remain, posing a significant risk to the safety of the world. The only way to ensure safety moving forward, according to these documents, is multilateral disarmament among the world’s nuclear powers. Previous treaty frameworks already allude to the eventual elimination of nuclear weapons. The Treaty on the Non-Proliferation of Nuclear Weapons is what this group calls the “cornerstone of the nuclear non-proliferation and disarmament regime.” To that end, the Working Group of the UN General Assembly is convening a 2017 conference in New York to negotiate a legally binding instrument to prohibit nuclear weapons. Let’s hope that they’re successful in producing a legal end to nuclear weapons. However, the question remains as to whether the nuclear-armed countries will acquiesce to such a ban.

Other weapons of mass destruction, like chemical and biological weapons, are governed by international treaties effectively banning their use; nuclear weapons have no such prohibition. Right now, what’s needed most is political will among the potential signatory countries to sign, enact, and then enforce a nuclear weapons ban. It might seem like the non-nuclear-armed countries have little leverage over the nuclear-armed countries, but consider an interesting example: in 1987, New Zealand passed nuclear-free-zone legislation, which caused the United States to suspend its military alliance with it, yet the United States eventually restored the alliance anyway. Broader security concerns seem to outweigh the desire to have nuclear weapons. Furthermore, as the Article 36 and Reaching Critical Will report posits, it is also possible to move forward with a complete ban on nuclear weapons without the support of the nuclear-armed powers. Perhaps the incremental process of eliminating nuclear weapons is insufficient for achieving the real goal of a world without nuclear weapons. We have to ask ourselves – what are we willing to commit in order to achieve a nuclear-free world? — Nicholas

Bioweapons Then and Now

“Bioterrorism could kill more than nuclear war – but no one is ready to deal with it,” said Bill Gates at the recent Munich Security Conference (Washington Post, 2017). His remarks focused on the relative lack of preparation among the world’s governments to respond to any pandemic, manmade or not. Although the probability of either kind of large-scale event is low, the potential threat of a deadly biological weapon to major civilian areas is high. Even developed countries’ public health regulations and precautions could provide little defense against a virulent, engineered microbe.

Bioweapons were originally considered in the same league as chemical weapons, until germ theory and epidemiology were well understood. After their use in World War I, chemical weapons faced opposition from the public and many governments around the world for their inhumane killing mechanism. The Geneva Protocol, signed in 1925, primarily prohibited chemical weapons use; bioweapon use was included by virtue of a similar unconventionality. While bioweapons were ineffective on short battlefield timescales, some saw the wartime advantages of using them to cripple enemy cities, economies, and supplies. The Protocol had neither binding restrictions nor enforcement, and states such as France and the Soviet Union pursued bioweapon research and development under intense secrecy. According to the Guillemin chapter, a few visionary scientists were responsible for advocating for and heading the state-sponsored programs in the face of adverse international treaties and public opinion. This stands in contrast with scientists’ attitudes toward nuclear weapons; many of those scientists were more reluctant to aid in development after recognizing the weapons’ destructive power. As a few countries developed bioweapons in secrecy, the threat of the unknown spurred other countries to adopt defensive programs to understand bioweapons. These programs gradually expanded into offensive capabilities. For example, the U.S. tested how sprayed microbes might disperse in a metropolitan area by releasing a benign bacterium over San Francisco in 1950 (PBS, 2017). Fortunately, these weapons were never used, and President Richard Nixon renounced them completely in 1969. Not much later, 151 parties signed the Biological Weapons Convention of 1972, which formally banned the development, production, and possession of bioweapons.

Today, bioterrorism is a more likely source of biological attacks. It requires malicious intent, process know-how, and the right supplies – all of which are available. While crude nuclear devices can also be fashioned with relative ease, domestic and international nuclear activity is much more closely monitored than biological research is. It would be rather difficult to regulate and restrict activities that could be precursors to bioweapons. Rather, governments may only have responsive measures to counter this form of terrorism – measures that, Bill Gates claims, governments have not yet seriously considered. — Frank

Local Actions, Global Consequences

Given recent advances in science and technology, the state of the earth is currently teetering on the brink of widespread catastrophe. Yet it may not even take a global nuclear war to spawn global devastation. As both Robock and Toon’s “Local Nuclear War” and the 1954 short film “The House in the Middle” emphasize, it is perhaps regional actions that stand to be the most transformative of our global security. A local nuclear war between India and Pakistan, for example, would not only kill more than 20 million civilians in the two countries, but would also induce climatic responses lasting at least 10 years. As smoke from the explosions remains suspended in the stratosphere, the particles absorb so much sunlight that surface temperatures are cooled and the ozone layer is depleted. Thus, smoke produced locally by two countries induces a global climatic response that would lead to widespread famine, increased ultraviolet radiation, and shortened agricultural growing seasons.

Meanwhile, the heat effects of atomic exposure on American homes are largely dictated by the extent of local housekeeping. Two houses identical in structure and exterior condition reacted drastically differently to the thermal heat wave produced by an atomic blast because of their different internal housekeeping: the house with the cluttered room burst into flames, while the tidy house remained standing. Varied external housekeeping conditions also produced varied consequences, as both a littered, unpainted house and a dry, rotten house burst into flames after exposure to thermal heat, while a house in good, clean condition with a light coat of paint suffered only slight charring of the painted outer surface. Thus, actions as local as housekeeping can sum to larger global consequences.

Yet humans regularly lack the cognitive capacity to foresee the long-term, global effects of their local actions. When they litter or fail to paint their homes, rarely do they think that the cost of their laziness is their individual, communal, and global security in the event of an atomic explosion. Similarly, policy makers tend to put the interest of national security at the forefront of their agenda without realizing the global tradeoffs of their regional decisions. Would it be possible to convince global leaders to eliminate nuclear weapons entirely? From a scientific standpoint this seems to be the decision with the greatest positive outcome, yet from a political-economic standpoint, the imminent risk to national security leads to hesitation. Perhaps global cooperation between nation states, a universal covenant to exchange national security for global security, is the ideal solution; yet whether this is realistically feasible in a world so focused on the present seems much less certain. — Crystal