CRISPR: The Breakdown of DNA & Its Ethical Dilemma

The Geneva Protocol, declared at the international level in 1925, profoundly shaped the stance with which major powers view the use of biological weapons. There is, in a sense, the fear of unpredictable spread during warfare, and easy accessibility would create a war without clear opponents – those who produce biological agents may be able to keep their identities concealed, or even conceal the identity of the agent itself.

I was particularly interested in the CRISPR method, which is more or less a gene-editing toolkit that uses the bacterial protein Cas9, directed by an engineered guide RNA, to target and cut specific DNA sequences. Several critics claim that the creation of a “gene drive” goes too far, and after reviewing other articles about CRISPR, I found that a modified mushroom and a type of corn have passed review by the Animal and Plant Health Inspection Service, making them the first Cas9 crops. The reasoning is that CRISPR products do not qualify under existing regulations, which calls into question the rate at which innovation grows versus the rate at which legislation passes to eye that innovation critically. While some critics call CRISPR products “hidden GMOs,” there is also the belief that trying to regulate CRISPR will hurt technological growth.

The greatest fear is not just the proliferation of bio-warfare in a target area and the unpredictability of its spread; even more so, if gene-editing toolkits as accessible as CRISPR exist, this leaves room for adaptation by threatening states – states not party to the Geneva Protocol or any form of multilateral agreement. And even more so, products engineered to battle sickle-cell disease could mutate on their own – the unpredictability of changing a DNA sequence is hazardous, especially since CRISPR editing can still produce off-target effects, and interactions with other cellular components could alter the edited sequence. From my research, there are no studies of the long-term effects of CRISPR on DNA sequences, especially under day-to-day exposure to carcinogens.

While there is fear of over-regulating the potential innovations of CRISPR and similar engineering programs, the inability to distinguish a modified organism from an unmodified one (among other modes of CRISPR’s influence) makes me believe that it would be better to regulate the gene drive; the government should recognize these new products – perhaps not as GMOs – but with some label and way of tagging CRISPR-linked products. Though the tag may stigmatize the application of CRISPR, it would certainly act as a precaution. — Lucas

Mitigating Shortcuts to Prevent New Disease Fronts

As depicted in fictional movies like Contagion (2011) and Outbreak (1995), new deadly viruses crop up through accidental interactions with nature and through purposeful scientific research. Compounded by modern modes of transportation, these diseases have the potential to spread worldwide overnight and become pandemics that could wipe out mankind.

Over the last few hundred years, diseases have stopped spreading in predictable two-dimensional directions. The boundary where a disease encounters and infects new victims – called the disease front – is now incredibly difficult to map, as planes, trains, and automobiles can create new fronts thousands of miles away from an initial outbreak.

Before the 1700s, the size of an infected population did not really matter, since any disease front – like a ripple emanating from a single point – was predictable and relatively fixed in extent. Because of their slow two-dimensional spread, only the most infectious diseases developed into true epidemics (even the Black Death of the 14th century is considered weak by this measure, as it took three slow years to spread from southern Italy throughout Europe). Thus, if a two-dimensional epidemic happened today, it would be slow and creeping, and public health officials would be able to respond to the well-defined disease front quickly.

Modern society, however, no longer allows for simple two-dimensional spread. The ease and speed of transportation create shortcuts through which viruses can break into fresh territory and establish new disease fronts. Instead of confronting an outbreak in one localized area, officials now face viruses that travel across countries and continents, creating new outbreaks, new victims, and new disease fronts.
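The difference between a pure two-dimensional front and one punctured by shortcuts is easy to see in a toy simulation (a rough sketch only, not an epidemiological model; the grid size and shortcut count are arbitrary assumptions):

```python
import random

def spread_time(n=30, shortcuts=0, seed=1):
    """Number of steps for an infection starting in one corner of an
    n x n grid to reach every cell, where each step infects all of a
    cell's neighbors. `shortcuts` adds random long-range links,
    standing in for planes, trains, and livestock markets."""
    rng = random.Random(seed)
    # 4-connected lattice: each cell touches its grid neighbors
    links = {(x, y): [] for x in range(n) for y in range(n)}
    for x in range(n):
        for y in range(n):
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= x + dx < n and 0 <= y + dy < n:
                    links[(x, y)].append((x + dx, y + dy))
    cells = list(links)
    for _ in range(shortcuts):
        a, b = rng.sample(cells, 2)  # one random long-range shortcut
        links[a].append(b)
        links[b].append(a)
    infected, front, steps = {(0, 0)}, {(0, 0)}, 0
    while len(infected) < n * n:
        front = {v for u in front for v in links[u] if v not in infected}
        infected |= front
        steps += 1
    return steps

print(spread_time(shortcuts=0))    # a pure two-dimensional ripple
print(spread_time(shortcuts=20))   # shortcuts can seed fronts far away
```

With no shortcuts the ripple needs 2(n−1) steps to cross the grid; adding links can only shorten that, usually dramatically, which mirrors the pattern of many simultaneous, non-neighboring outbreaks described below.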

For instance, the foot-and-mouth disease outbreak that crippled English cattle farms in 2001 did not spread two-dimensionally, although that is what was expected based on how the virus usually travels: between animals through direct contact, by wind-blown droplets of excrement, or by soil. A two-dimensional spread would have been expected, yet foot-and-mouth disease struck 43 non-neighboring farms simultaneously. Modern transportation, modern livestock markets, and soil on people’s boots were all shortcuts that introduced the disease to new victims, and suddenly animals could be infected anywhere in the nation overnight.

Shortcuts in modern transportation are in essence random, and government officials must create policies that mitigate them in order to stop the spread of an epidemic in its earliest stages. English officials minimized shortcuts by eliminating livestock interaction, preemptively slaughtering livestock on nearby farms, and banning travel on countryside roads. Ultimately, the emergence of lethal diseases is inevitable, so we must conduct research to better understand transportation networks and to eliminate the shortcuts that transplant diseases into fresh territory. — Delaney

Ending the Nuclear Threat

As we’ve previously discussed in this class, the specter of nuclear war haunts the world – which prompts the question: can we ever eliminate nuclear weapons? It’s an optimistic goal that has long been in the sights of activists, and as the documents from the United Nations General Assembly (L.41) and from Article 36 and Reaching Critical Will (A Treaty Banning Nuclear Weapons) demonstrate, it is an aspiration for intergovernmental bodies and NGOs as well. Although the Cold War is over, large stockpiles of nuclear weapons remain, posing a significant risk to the safety of the world. The only way to ensure safety moving forward, according to these documents, is multilateral disarmament among the world’s nuclear powers. Previous treaty frameworks already allude to eventual disarmament. The Treaty on the Non-Proliferation of Nuclear Weapons is what this group calls the “cornerstone of the nuclear non-proliferation and disarmament regime.” To that end, the Working Group of the UN General Assembly is convening a 2017 conference in New York to negotiate a legally binding instrument to prohibit nuclear weapons. Let’s hope that they’re successful in producing a legal end to nuclear weapons. However, the question remains whether the nuclear-armed countries will acquiesce to such a ban.

Other weapons of mass destruction, like chemical and biological weapons, are governed by international treaties effectively banning their use; nuclear weapons have no such prohibition. Right now, what’s needed most is political will among the potential signatory countries to sign, enact, and then enforce a nuclear weapons ban. It might seem like the non-nuclear-armed countries have little leverage over the nuclear-armed countries, but consider an interesting example: in 1987, New Zealand passed nuclear-free-zone legislation, which caused the United States to suspend its military alliance with it – yet the United States eventually restored the alliance anyway. Broader security concerns seem to outweigh the insistence on nuclear weapons. Furthermore, as the Article 36 and Reaching Critical Will report posits, it is possible to move forward with a complete ban on nuclear weapons even without the support of the nuclear-armed powers. Perhaps the incremental process of eliminating nuclear weapons is insufficient for achieving the real goal of a world without them. We have to ask ourselves – what are we willing to commit in order to achieve a nuclear-free world? — Nicholas

Bioweapons Then and Now

“Bioterrorism could kill more than nuclear war – but no one is ready to deal with it,” said Bill Gates at the recent Munich Security Conference (Washington Post, 2017). His remarks focused on the relative lack of preparation among the world’s governments to respond to any pandemic, manmade or not. Although the probability of either kind of large-scale event is low, the potential threat of a deadly biological weapon to major civilian areas is high. Even developed countries’ public health regulations and precautions could provide little defense against a virulent, engineered microbe.

Bioweapons were originally considered in the same league as chemical weapons, until germ theory and epidemiology were well understood. After their use in World War I, chemical weapons faced opposition from the public and many governments around the world for their inhumane killing mechanism. The Geneva Protocol, signed in 1925, prohibited chemical weapon use primarily – bioweapon use was included by virtue of similar unconventionality. While bioweapons were ineffective on short battlefield timescales, some saw the wartime advantages of using them to cripple enemy cities, economies, and supplies. The Protocol had neither binding restrictions nor enforcement, and states such as France and the Soviet Union pursued bioweapon research and development under intense secrecy. According to the Guillemin chapter, a few visionary scientists were responsible for advocating for and heading the state-sponsored programs in the face of adverse international treaties and public opinion. This stands in contrast to the attitudes of nuclear scientists, who were more reluctant to aid in development after recognizing the weapon’s destructive power. As a few countries developed bioweapons in secret, the threat of the unknown spurred other countries to adopt defensive programs to understand bioweapons. These programs gradually expanded into offensive capabilities. For example, the U.S. tested how sprayed microbes might disperse in a metropolitan area by releasing a benign bacterium over San Francisco in 1950 (PBS, 2017). Fortunately, these weapons were never used, and President Richard Nixon renounced them completely in 1969. Not much later, 151 parties signed the Biological Weapons Convention of 1972, which formally banned the development, production, and possession of bioweapons.

Today, bioterrorism is the more likely source of biological attacks. It requires malicious intent, process know-how, and the right supplies – all of which are available. While crude nuclear devices can also be fashioned with relative ease, domestic and international nuclear activity is much more closely monitored than biological research is. It would be rather difficult to regulate and restrict activities that could be precursors to bioweapons. Rather, governments may only have responsive measures to counter this form of terrorism – measures which, Bill Gates claims, governments have not yet seriously considered. — Frank

Local Actions, Global Consequences

Given recent advances in science and technology, the earth is currently teetering on the brink of widespread catastrophe. Yet it may not even take a global nuclear war to spawn global devastation. As both Robock and Toon’s “Local Nuclear War” and the 1954 short film “The House in the Middle” emphasize, it is perhaps regional actions that stand to be most transformative of our global security. A local nuclear war between India and Pakistan, for example, would not only kill more than 20 million civilians in the two countries but would also induce climatic responses lasting at least ten years. As smoke from the explosions remains suspended in the stratosphere, the particles absorb so much sunlight that surface temperatures cool and the ozone layer is depleted. Thus smoke produced locally in two countries induces a global climatic response that would lead to widespread famines, increased ultraviolet radiation, and shortened agricultural growing seasons.

Meanwhile, the effect of an atomic blast’s thermal heat on American homes is largely dictated by the state of local housekeeping. In the film, two houses identical in structure and exterior condition reacted drastically differently to the thermal heat wave produced by an atomic blast because of different internal housekeeping: the house with the cluttered room burst into flames, while the tidied house remained standing. Varied external housekeeping conditions also produced varied consequences, as both a littered, unpainted house and a dry, rotten house burst into flames after exposure to thermal heat, while a house in good, clean condition with a light coat of paint suffered only slight charring of the painted outer surface. Thus actions as local as housekeeping can sum to larger global consequences.

Yet humans regularly lack the cognitive capacity to foresee the long-term and global effects of their local actions. When they litter or fail to paint their homes, rarely do they think that the cost of their laziness is their individual, communal, and global security in the event of an atomic explosion. Similarly, policy makers tend to put national security at the forefront of their agenda without realizing the global tradeoffs of their regional decisions. Would it be possible to convince global leaders to eliminate nuclear weapons entirely? From a scientific standpoint this seems to be the decision with the greatest positive outcome, yet from a political-economic standpoint, imminent risks to national security lead to hesitation. Perhaps global cooperation between nation-states – a universal covenant to exchange national security for global security – is the ideal solution; yet whether this is realistically feasible in a world so focused on the present seems much less certain. — Crystal

Secrecy for Security?

The work at Los Alamos was marked by an extreme level of secrecy. The town was fenced in by a barbed-wire barricade, and mail was censored (Brode, Tales of Los Alamos, 1997). Bohr discussed this philosophy with Oppenheimer. It was Bohr’s belief that the results of the Trinity Test should be shared – so that nations would understand the power of the atomic bomb and, through open communication, come to the conclusion that the production of atomic weapons is foolish. He advocated against secrets.

This stands in stark contrast to the view that Truman expressed in a statement immediately after the bombing of Hiroshima. He emphasized that, while contrary to the principles of research, scientific knowledge regarding the production and applications of atomic bombs must be kept secret, for security purposes (Statement by the President of the United States, White House Press Release, August 6, 1945).

With regards to modern-day national security, it is hard to say which view is proper. Relying on the “reasoning of men” to prevent the proliferation of weapons of mass destruction may be a naïve view of the rationality of, for example, terrorist groups. Yet, information will inevitably spread. Perhaps shared information and open discussion may be the best way to ensure the proper use of dual-use technology, and, as Bohr would assert, foster the kind of respect that emerges from open communication (Fetter-Vorm, 2012). — Mary Helen

A New Virtual Reality Presence at StudioLab

The Nuclear Futures Lab has recently established a presence at StudioLab — a new 2,500 sq. ft. space on campus developed by the Council on Science and Technology to bring together students, faculty, and staff, independent of area of concentration, to explore intersections and shared creativity across STEM, the arts, humanities, and social sciences. Programmatic initiatives within the StudioLab will include courses, labs, studios, research, projects, workshops, and events.

The NFL recently installed its Full Motion Virtual Reality (FMVR) system in the space, which will give students and faculty new opportunities to conduct research through virtual reality. The NFL is currently using the system to design and examine new treaty verification systems and architectures for nuclear arms control. Notional facilities and weapons are built as 3D models, and when these are brought to life in FMVR, researchers are able to conduct live, immersive simulations that will help to hone effective verification options for future treaties.


On Autonomous Weapons

Our readings on autonomous weapons featured some very direct back and forth on the idea of banning “killer robots.” I think the issue can be split into three broad categories, focusing on the ethics of the development and use of autonomous weapons, the issues they face in international law, and the practicality of their use and prohibition.

Ethics. Gubrud raises the idea that it is contrary to the principles of a shared humanity to allow machines to decide to end human lives. There is some value in humans making the decision to kill. Opponents of this idea believe that humans killing other humans is no more ethical than robots killing humans, and that the substantive questions here are matters of practicality. Is it more ethical for a human to be the decision-maker, and if so, is that enough reason to oppose the development of these weapons?

International Law. Gubrud also presents the argument that autonomous weapons should already be illegal under international law. He argues that robots cannot satisfy the principles of distinction and proportionality which determine just conduct in war; AI can neither reliably distinguish combatants from noncombatants nor weigh collateral damage against military gain. Ackerman opposes this view in his article, claiming that the codified Rules of Engagement are something an AI can certainly understand and base decisions upon; Gubrud mentions the US’s “collateral damage estimation methodology,” which could serve as a base for a robot to determine proportionality. Neither side claims that the data-gathering and decision-making abilities of the technology are yet adequate to meet legal requirements; in your opinion, will they ever be? What advantages would robots have in this regard, and what challenges would you anticipate for those working on this technology?

On a different note legally, Gubrud also brings up the Martens Clause, supporting the idea that the strong public consensus against autonomous weapons can also determine the standing of autonomous weapons in international law. What role should public opinion play in this legal question, and what should be considered along with public opinion?

Practicality. There are a number of issues related to the practical implications of the development or ban of autonomous weapons.

First, would a ban even be effective? Gubrud points to an already developing international consensus for caution with the technology as a sign that a ban could develop and work, and he, Russell, Tegmark, and Walsh point to successes in banning other types of weapons. Ackerman counters by claiming that robots offer too much of a technological advantage for a state to resist and that the technology is too accessible, even to regular citizens, to effectively control. Trying to ban the tech would be a waste of effort better devoted to preventing abuse. We’ve studied weapons bans as they relate to nuclear, chemical, and biological weapons; is the issue of controlling autonomous weapons fundamentally different? What effects would a ban have on the use of robots for domestic suppression? Terrorism? Are there alternate means to prevent abuses?

Another aspect to consider will be the effect on international stability. With no emotional attachment to these robots, and little political cost for their loss, will they lead to riskier, more aggressive, and more frequent military actions? What are the prospects for an arms race featuring dozens of countries, similar to the broad interest and investment in drone technology today?

What will be the effects on consumer technology? The open letter opposing the development of autonomous weapons argues that public backlash against killer robots will hurt support for the entire fields of robotics and AI. Ackerman alludes to the idea that military research is a key driver of progress in consumer technology.

Finally, is there any aspect of the debate that these authors failed to address? — Trevor

On Superintelligence

First, for anyone who is a little lost, wants a simpler explanation, or is really interested in the topic, I found a funny, detailed blog post with graphics and examples that explain AI and superintelligence pretty well (from what I can tell).

waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

It also has this graphic which I think articulates some of the ideas from Bostrom’s article in a visual way.

[Graphic: exponential growth of computing]

In his article, Bostrom describes the coming of a moment when artificial intelligence will surpass the intelligence of the human mind. This moment, Bostrom stresses, is both closer than we think and incredibly dangerous. At that point, AI will be able to improve and replicate itself, and an intelligence explosion will occur. The biggest question is whether the goals of the AI will coincide with the goals of the human race. Bostrom hopes that such an AI will share our goals, but fears what would happen if it doesn’t.

I have several questions. First, do you buy it? Do you believe that by the time our generation is nearing death (2060-2080) AI will have become superintelligent? If so, what would the implications of such a world be? If AI is capable of performing all work, would human beings serve any real function at all?

Also, how do we make policy regarding AI? Should the government draw the line at superintelligence and only allow AI systems up to that point? Or do we encourage the responsible development of AI to any level? — Kennedy

Cyberwarfare: On Whose Authority?

So far, most covert cyber operations have come from the White House in coordination with the Pentagon – most notably the Olympic Games program, started under G.W. Bush and culminating in the infamous Stuxnet attack. Constitutionally, only Congress has the power to declare war. So, what constitutes an “act of war”?

According to Farwell and Rohozinski, we should look to the UN Charter. Article 2(4) prohibits the “threat or use of force against the territorial integrity or independence of any state,” and Article 51 states that nothing “in the present Charter shall impair the inherent right of individual or collective self-defense if an armed attack occurs against a Member of the United Nations” (111). Based on this logic, an act of war would occur with the use of force.

Does software code qualify as “use of force”? Farwell and Rohozinski (111-116) suggest a few elements to consider: pre-emptive/coercive action; uniformed combatants as coders; a pattern of employing cyberweapons; intent of the cyberweapon (regardless of actual impact); an evolving technological arms race; unsettling the confidence of the adversary; and disrupting momentum for an adversary’s offense. McGraw contends that a cyberattack is anything with a “kinetic” effect—that is, anything with a physical, real-world impact (112). Rid would disagree, since cyberwarfare has never caused the loss of human life (11). Could cyberattacks constitute a “use of force”? Which authority should regulate outgoing cyber engagement: Congress, the White House, the Pentagon, or the CIA?

On the receiving end, who should run the response when a private company is attacked? Private industry owns and operates 90% of US civilian critical infrastructure (Farwell and Rohozinski 110), such as financial services, public transportation, and power grids. Should a cyberattack be dealt with in domestic criminal courts, or should a higher power determine the cutoff for civilian impact? — Molly

Ever Evolving and Ever Changing: Where Do We Stand in the World of Cyber

From this week’s readings, it is evident that there is no single unified lens through which to view or understand cyber. Cyberwarfare, cyberespionage, cyberattack, cyberdefense – the list is endless. Throughout the last few months we have been evaluating current issues with a framework to guide how we see each topic. In the world of cyber, all bets are off, and this makes it difficult to wrap one’s head around the realm.

To kick off the discussion, I think it makes the most sense to talk about three topics in cyberspace: (1) who we should be scared of, (2) how the US could prepare to stop cyber threats, and (3) whether cyberwar is realistic.

1. Who should we actually be scared of?

A big topic in cyber is the ability of non-state actors to get involved in cyberwarfare easily and cheaply. Without needing to spend billions creating bombs and nuclear weapons, non-state actors can readily wage attacks in cyberspace. In “Cyberwar Is Inevitable,” McGraw says that most modern control systems are so poorly designed that they’re vulnerable to attacks devised over 15 years ago. Cited in Gartzke’s “Myth of Cyberwar,” Joseph Nye claims that non-state actors are a scary and real threat. Do you buy this? A non-state actor can certainly dislocate a country’s systems temporarily, but Gartzke counters that this might not create a lasting shift in the balance of power. To this end, is this cyberwar? Is this an effective attack? Look to the authors mentioned in Gartzke’s footnotes – Arquilla and Ronfeldt – for more nuance. Should the U.S. perceive a cyber threat as credible if it cannot be backed up with military force, as Russia’s was against Georgia in 2008?

2. How could the US prepare to stop cyber threats?

In “Cyberwar Is Inevitable,” McGraw offers a sobering assessment of the lack of technological expertise and the insecurity of the legacy systems supporting our nation’s critical infrastructure. I personally worked in a technology capacity for the US government this past summer and was likewise dismayed by the lack of technical understanding among government employees. Employees themselves present one of the largest points of vulnerability for cyberattacks (look up “phishing,” in which an attack succeeds when a government employee accidentally clicks on a sneaky malicious link). What were your thoughts on McGraw – are his arguments apt, or is he overhyping the weakness of US cyber defense?

In “The New Reality of Cyber War,” Farwell discusses the need for firewalls, cyber hygiene (training people), detection technology, honey pots, and secure, resilient networks. He claims that these methods serve obviously defensive purposes, but all of these mechanisms could be portrayed to our adversaries as building offensive capabilities – will this make countries like China and Russia build up their offensive capabilities in response? Will the US simply be causing an escalation and a “cyber arms race”?

3. Is cyber war realistic?

Finally, it is important to ask whether cyberwar is even something to be concerned about. In “There Will Never Be A Cyber War,” Rid claims that warfare rests on three criteria – it is violent, it is a means to an end, and it is politically motivated. He notes that in cyberspace, “no cyber offense has ever caused the loss of human life. No cyber offense has ever injured a person. No cyber attack has ever damaged a building.” Now contrast this with McGraw in “Cyber War Is Inevitable.” He speaks to the technical vulnerabilities in our power grids and financial services systems, including exploitable “zero days” that could realistically knock out an entire system for weeks. What damage would be done to the US economy if one or more of these systems were taken out? Gartzke cites a former secretary of defense saying that there will soon be a cyber Pearl Harbor. To contrast these points of view, I recommend looking at past examples of cyberattacks – Stuxnet and the Estonia botnet attacks. Each is different – does either constitute war under Rid’s criteria? Is cyberwar realistic? — Max

Societal Verification in the Connected Age

In Six Degrees: The Science of a Connected Age, Duncan Watts discusses the capabilities and limits of predicting and utilizing both individual and group behavior trends. Unraveling the conceptual bases of some commonly known studies (such as the small-world problem and the strength of weak ties/balance theory), Watts explores the introductory premises of aggregations and networks. In analyzing the domino effect of the Keller-Allston line failure on August 10, 1996, he opens the conversation about how individual behavior can be aggregated into collective behavior. He claims that although individual behavior is often well interpreted, collective behavior sometimes cannot be determined through aggregation: “although genes, like people, exist as identifiably individual units, they function by interacting, and the corresponding patterns of interactions can display almost unlimited complexity” (26). Do these claims challenge or extend your perspective on previous topics from this semester, such as nuclear deterrence or the prisoner’s dilemma exercise? Contextualizing these ideas with this week’s readings, how do networks and group dynamics play into the U.S.’s application of new media and crowdsourcing in its nonproliferation strategy?

Extending Watts’ ideas into the discussion of societal verification, which application examples seem most appropriate for implementation (considering the potential benefits, effectiveness, possible consequences, and vulnerability pitfalls)? In “Societal Verification: Leveraging the Information Revolution for Arms Control Verification,” Hinderstein and Hartigan state that “‘societal verification’ refers to the concept of incorporating non-traditional stakeholders into verification and transparency regimes to increase the likelihood that violations of international commitments are detected” (1). They note several State Department recommendations such as giving citizens the ability to detect radiation spikes with the use of sensors, employing the use of quick response codes, etc (5). How would you compare these examples of societal verification to the China/North Korea example in the Lee/Lewis/Hanham piece? Are there certain uses of data analytics that are prone to be more valuable or more misleading? Do some examples jeopardize the vulnerability of citizen privacy and anonymity more than others? — Zoë

Strong and Weak Ties in the International Context

In “Small Change: Why the Revolution Will Not Be Tweeted,” Gladwell discusses the importance of strong ties in making radical changes and the ultimate shortcomings of weak ties. This is an interesting view, as Granovetter’s original study on strong and weak ties found that many weak ties, or casual acquaintances, prove much more useful in gaining information or pursuing opportunities, most famously in job hunting (as discussed on p. 49 of Watts’s book). However, Gladwell points to the student protesters and shows that without their strong connection and ability to talk “in a way that works only with people who talk late into the night with one another,” they never would have had the courage to begin the protest at the Woolworth’s in Greensboro. Though other people with weaker connections did join the protest, the protesters needed that initial commitment to seed their beginning, similar to how Ivanna relied on her friend Evan to begin the search for her Sidekick, and only after initial friends helped did others join in with email support.

This seems like a very conclusive argument that only strong ties can be trusted to start important movements. However, in an international context, having such strong ties is not always possible when attempting to verify an opposing state. Here there is often reliance on information provided by weak ties through social media, and trust in that information, as it is easier to discover. Yet this information can often be flawed, as Gladwell points out with Twitter’s reaction to the protests in Moldova and Tehran. These reactions may have largely overstated the people’s involvement. This is not surprising, since if only a few people post on a subject, it can lead to a quick cascade in which thousands not at the event proclaim their support, regardless of the numbers actually present.
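How a handful of posters can snowball into apparent mass support can be sketched with a toy threshold model (a rough illustration only, not a claim about Moldova or Tehran; the network size, tie counts, and thresholds are arbitrary assumptions): each person reposts once a given fraction of their ties have, so low-cost weak-tie participation cascades while high-cost commitment does not.

```python
import random

def cascade_size(n=1000, ties=8, threshold=0.1, seeds=5, seed=2):
    """Fraction of a random network that ends up reposting, when each
    person reposts once at least `threshold` of their ties have.
    A toy Granovetter-style threshold model of an online cascade."""
    rng = random.Random(seed)
    net = [rng.sample(range(n), ties) for _ in range(n)]
    for i in range(n):                  # make ties symmetric
        for j in net[i]:
            if i not in net[j]:
                net[j].append(i)
    active = set(rng.sample(range(n), seeds))  # the few actually present
    changed = True
    while changed:                      # spread until no one else joins
        changed = False
        for i in range(n):
            if i in active:
                continue
            if sum(f in active for f in net[i]) / len(net[i]) >= threshold:
                active.add(i)
                changed = True
    return len(active) / n

print(cascade_size(threshold=0.05))  # cheap action: cascade spreads widely
print(cascade_size(threshold=0.5))   # costly commitment: cascade stalls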

As both strong and weak ties have their pros and cons in international scenarios, which do you think is best for governments to pursue? In this context, strong ties would likely be the classic domain of intelligence agencies, which have a high entrance barrier but also high trust, while weak ties are posts on social media, which offer an extremely low entrance barrier and almost no trust at all. Of course, this question also extends to beginning or monitoring new social revolutions, in addition to verifying weapon systems or other governmental actions. In any of these scenarios, is it best to trust weak ties or to try to cultivate strong ones? Would there be any practical way to combine the two for maximum advantage? Has either become entirely worthless in the modern world? — Ben

Deterrence in the 21st Century

In “Nuclear Strategy, 1950-1990: The Search for Meaning,” Thomas Nichols discusses the evolution of U.S. nuclear strategy from the beginning of the nuclear age through 1990. While initially he describes a country unsure of how to wield its new power that defaults to a “Massive Retaliation” strategy, this soon evolves into a much more complex “Flexible Response” strategy, focused on making escalation an inevitability in response to particular aggressions by Russia. After the 1970s, however, both powers had accumulated effectively equal arsenals, complete with quick response protocols, which led to a situation of “parity.” In such a situation, the key was actually a mutual understanding of each other’s arsenals and response protocols that allowed for global stability, with a strategy of Mutually Assured Destruction (MAD) as its linchpin. This led to a search on both sides for a way out of such a situation, through research into missile defense as well as continued build-up of nuclear arsenals. President Bush finally established a U.S. Strategic Command (STRATCOM) that has left us in a more or less stable nuclear position with regard to Russia.

Was the build-up of nuclear weapons and the evolution of nuclear strategy a unique product of post-WWII history, though? Nichols notes a number of times the ways in which U.S. leaders saw the Soviets as willing to risk their own people for military victories. How has the bipolar world in which nuclear strategy developed shaped the way we continue to conduct deterrence strategy? Now that nuclear weapons have become somewhat (if very limitedly) more widespread, what new challenges in deterrence do we face? Or does it always boil down to the two powers with overwhelmingly superior arsenals (which continue to be the U.S. and Russia)? I also found Nichols’s discussion of the protection of allies very interesting. Although an invasion of Europe today by any power is probably not particularly imminent, as a nuclear power what responsibility does the United States have in terms of retaliation against any country that launches nuclear weapons? The instability of many regions today makes a flexible response solution much more complex and difficult to accurately predict. But what response and deterrence, if any, does the U.S. owe the world, in light of its status as both a nuclear power and a global leader? — Michelle

Sex and Death in the Rational World of Defense Intellectuals

Cohn’s article on the technostrategic language of nuclear deterrence apologists is definitely one of the most intriguing articles I have read on the subject. Cohn criticizes the defense analysts that she worked with at “the Center” as being just as irrational and unrealistic as the “idealistic activists” that they are so opposed to. The very language that these defense analysts use shows “currents of homoerotic excitement, heterosexual domination, the drive toward competency and mastery, the pleasures of membership in an elite and privileged group, the ultimate importance and meaning of membership in the priesthood, and the thrilling power of becoming Death, shatterer of worlds” (717).

More compelling than Cohn’s descriptions of the content and nature of this technostrategic language, however, is her denunciation of the complete unreliability of the “abstract conceptual system” that is created by the use of this type of language (709). Cohn argues that “limited nuclear war” can only exist in an abstract system where we assume completely rational actors uninfluenced by emotions, political pressures, madness, or despair. Saying that “the aggressor ends up worse off than the aggressed” can only be understood in a world where people are more concerned with the possession of nuclear weapons than with the destruction and mass murder of entire cities of people. Neither of these situations, however, accurately describes the global political and social structures that exist today.

But does this mean that there is no merit to nuclear deterrence theory at all? Nichols’s account shows that nuclear strategy during the Cold War shifted from “Massive Retaliation,” to struggling to define extended deterrence, to finally settling into Mutually Assured Destruction (MAD). There seems to be something inherently irrational about trying to calculate limited nuclear war using mathematical models not based in reality, but what about deterrence theory and MAD? Even a political leader facing inordinate amounts of domestic pressure to start a nuclear war would hesitate to do so if he or she knew that both sides would do “unavoidable and permanent damage” to each other (27). Is there a certain threshold above which deterrence theory’s abstract models make sense and below which they don’t?

And if the mathematical models inherent to technostrategic language are inapplicable, are there any other practical ways to speak about deterrence theory and nuclear warfare? Given that scientists speak in technostrategic language, do we want to involve academics and professionals from less mathematically strict disciplines to refocus the reference point on damage done to human lives rather than damage done to weapons? — Jessica