Iran 2014: Aiming for the Best of Imperfect Outcomes

Throughout his report, “Preventing a Nuclear-Armed Iran,” Robert Einhorn consistently reminds us of the necessity of compromise and trade-offs in the ongoing 2014 negotiations toward a comprehensive nuclear agreement with Iran.

Einhorn begins by outlining the components of an ideal comprehensive agreement, highlighting three goals:

  1. Ensuring and enabling early detection of breakout.
  2. Lengthening the breakout timeline by reducing Iran’s capability (its enrichment levels and centrifuge numbers) so that Iran could not produce sufficient fissile material before outside intervention (a rough sketch of this arithmetic follows below).
  3. Outlining a strong international response to breakout, which would provide effective deterrence.
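
To make goal 2 concrete, here is a back-of-the-envelope sketch of how enrichment capacity maps to a breakout timeline. It uses the standard separative-work (SWU) formula; the centrifuge count and per-machine output below are illustrative assumptions of mine, not Einhorn’s figures.

```python
import math

def separative_potential(x):
    """Value function V(x) for uranium at U-235 assay x."""
    return (2 * x - 1) * math.log(x / (1 - x))

def swu_required(product_kg, xp, xf, xw):
    """SWU to produce product_kg at assay xp from feed at xf with tails at xw."""
    feed_kg = product_kg * (xp - xw) / (xf - xw)
    waste_kg = feed_kg - product_kg
    return (product_kg * separative_potential(xp)
            + waste_kg * separative_potential(xw)
            - feed_kg * separative_potential(xf))

# One IAEA "significant quantity" (25 kg) of 90% HEU from natural uranium.
swu = swu_required(product_kg=25, xp=0.90, xf=0.00711, xw=0.004)

# Assumed fleet: 9,000 IR-1-class centrifuges at ~0.9 SWU/year each.
capacity_per_year = 9_000 * 0.9
months = 12 * swu / capacity_per_year
print(f"{swu:,.0f} SWU needed -> roughly {months:.0f} months to break out")
```

Starting from an existing stockpile of 3.5% or 20% enriched material instead of natural feed cuts the requirement sharply, which is why both enrichment levels and centrifuge numbers appear in goal 2.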

Einhorn then discusses the feasibility of reaching such an ideal agreement, and he acknowledges several reasons for skepticism about the talks’ success. First, he clarifies that the possibility of Iran breaking out is a real danger, pointing out that the debate over preventing Iran from acquiring a nuclear weapon versus preventing it from acquiring the capability to build one rests on a myth: Iran already has the capability. Given Iran’s persistent claims of a peaceful, scientific purpose, Supreme Leader Khamenei’s 2005 fatwa against possessing nuclear weapons, and a lack of complete inspection information, Einhorn admits that we likely cannot expect to eliminate Iran’s capability or dismantle its enrichment facilities, which would be the ideal U.S. outcome. He also recognizes that the U.S. will likely have to take a more conciliatory stance during the talks, compromising on some of its goals and easing sanctions in exchange for reductions in Iran’s nuclear program. Inevitably, the negotiations will also be stalled by strong and conflicting demands from the various parties; some, like Israel, will likely be appeased only by unrealistically substantial concessions from Iran. Negative press and backlash against any final agreement thus seem inevitable.

However, there are also reasons to be hopeful. Beginning with the negotiations for the November 2013 Joint Plan of Action (JPA), Iran has shown more willingness to come to the negotiating table than it has in many years. In contrast to its previous non-compliance with IAEA safeguards and inspections, Iran has, the IAEA confirms, complied with the JPA’s reduction requirements, which lengthen the breakout timeline by a few months, as well as with its monitoring requirements. Iran’s newfound willingness to join the talks was perhaps ushered in by the more moderate leadership of President Rouhani, who belongs to the camp of Iranian politicians who believe that the U.S. and Iran do share some overlapping interests, and who has prioritized Iran’s economic recovery (starting with the lifting of sanctions). The JPA, while a temporary and incomplete solution, demonstrates an unprecedented level of diplomacy between Iran and the P5+1. Examining the progress made through the JPA, Einhorn seems to suggest that appealing to Iran’s domestic priorities, such as its economy and its emphasis on developing a civil nuclear program (for which it would need only a portion of its current capacity), may be an effective diplomatic strategy.

Given these factors, I pose to you a few questions:

  1. What are your predictions for the outcome of these 2014 talks (e.g., the length of the talks, an extension of the JPA vs. a comprehensive agreement, which parties will be dissenters or compromisers, public opinion and reaction)?
  2. Which of the three goals listed above (in the second paragraph) do you think the U.S. and P5+1 should prioritize and refuse to yield on for the final, comprehensive agreement? Which element is most important in preventing a nuclear-armed Iran? The most feasible to accomplish?
  3. Do you consider recent developments in negotiations with Iran, specifically the 2013 JPA, a success or failure?
  4. How should the U.S. and P5+1 go about appealing to Iran’s domestic agenda? What are Iran’s largest domestic considerations?

Ella

Credible Commitment: In Iran and in the NPT

One of the largest problems in working toward international non-proliferation is the lack of credible commitment: the concern that other nations will act outside of agreements and skirt verification. This fear tends to undermine the efforts of the Nuclear Non-Proliferation Treaty (NPT), and it certainly surfaces when discussing Iran’s nuclear research and enrichment programs.

When we examined the IAEA report on Iran for one of our problem sets, we saw that Iran had been operating an underground enrichment plant that had previously gone unnoticed. Given that this plant was enriching uranium up to 20%, the threshold of highly enriched uranium (HEU), it caused a significant ripple of concern at the time of the report. Could this present an obstacle to future agreements concerning nuclear enrichment in Iran? As Einhorn writes in his report, Iran already has the technological know-how and hands-on experience to produce weapons-grade uranium and, theoretically, nuclear weapons. At this time, the United States Intelligence Community, or IC as Einhorn calls it, is uncertain whether Iran intends to pursue nuclear weapons as part of its nuclear program. This could present a problem when examining the practical needs of Iran from multiple nations’ perspectives. A nation like Israel likely does not believe that Iran needs a nuclear weapon (as it strongly opposes Iran’s possession of one), and thus might limit its view of Iran’s practical needs to nuclear power, or even less nuclear activity. Conversely, Iran might include the eventual potential development of a nuclear weapon among its practical needs. Even if an agreement were reached preventing Iran from developing a nuclear weapon for the moment, would credible commitment problems arise from Iran’s past operation of an undeclared nuclear facility? What kinds of verification techniques would the European Union and the P5+1 states require of Iran, and what would Iran consent to?

Additionally, if this credible commitment problem indeed exists with Iran, how would it affect the state and stability of the NPT? It was mentioned in class that, after North Korea’s exit from the NPT, a stable agreement with Iran would contribute to the treaty’s strength. Would a failure to complete an agreement with Iran before the JPA expires signal a decline in the NPT’s power? Might other states decide to leave the NPT and pursue nuclear programs? What kind of commitment problem exists within the NPT itself, and how might it be fixed? — Nicole

Offensive Use of Cyberweapons — Yes, No?

We love victimizing ourselves in our discussions of cyberwarfare. The Chinese are attacking us. The North Koreans are attacking us. The Russians are attacking us. And we had better shore up our defenses.

But in Farwell and Rohozinski’s “The New Reality of Cyber War,” we get a different story. It was the United States that preemptively launched an offensive operation known as “Olympic Games,” shelling Iran with “weaponized computer codes.” These codes, which included the infamous Stuxnet worm discovered in 2010, succeeded in crippling Iran’s nuclear development capacity and significantly stalled the progress of the Iranian nuclear program. In short, this virtual ‘bombardment’ of Iran with computer viruses proved that we no longer need to send drones or detonate real bombs over Iranian skies to successfully undermine the capacity of Iran’s nuclear institutions.

But that’s only the beginning. Farwell tells us that in September 2011 and in May 2012, additional cyberattacks, presumably launched by the US, struck Iran. One particular worm, known as Flame, infected computers in Lebanon, the UAE, the West Bank, and Iran, gathering intelligence by recording conversations, taking screenshots, erasing information on hard disks, logging keystrokes, and more.

These preemptive US cyberattacks raise a few important questions.

  1. How do you feel about using cyber weapons offensively against another country with which we may not even be at war?
  2. Does a cyberattack constitute an “act of war”? — Factors to consider: 1. The UN Charter prohibits the “threat or use of force against the territorial integrity or independence of any state.” 2. You might argue that the virtual world is clearly separate from real life and that no one actually dies in a cyberattack, but what if Iran, in response to a cyberattack, had responded “kinetically,” perhaps even justifiably, by declaring (real) war in retaliation?
  3. On a related note, what do you think is the difference between dropping an actual bomb on Iran—which would be a clear act of war under UN definitions—and bombarding Iran with virtual “bombs” (i.e., a cyberattack) that induce physical damage to machines?
  4. Did you know about American involvement in offensive cyberattacks against foreign nations? Do you think that the media’s portrayal of “cyberwarfare” and “cyberattacks” in the US is fair?

Brian

Is Cyberwar Really Inevitable?

In his work “Cyber War is Inevitable (Unless We Build Security In),” Gary McGraw uses a few examples of how easily malware attacks have succeeded, then extends that idea to conclude that our entire cyber infrastructure is vulnerable and that we must therefore have software security built in at the base level. McGraw believes that every new piece of hardware, once constructed, must have a software package installed that focuses on securing it from external attacks.

However, his viewpoint is not one I find myself agreeing with. The Stuxnet example that McGraw is fond of using was a targeted attack on Iran’s nuclear centrifuges that exploited multiple zero-day bugs. McGraw also picks up on the zero-day idea, presenting McQueen et al.’s conclusion that, on average, about 2,500 zero-day vulnerabilities exist on any given day. This count includes vulnerabilities that companies have found in their own software, so it does not mean that all of them are discovered by malicious programmers. And since likely targets practice some form of defense in depth, the vulnerabilities an attacker does find may not be enough, on their own, to gain access.
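
As a sanity check on that figure, a steady-state count of this kind follows from Little’s law: the number of zero-days alive at once equals the discovery rate times the average time until a fix ships. The rates below are my own invented inputs, chosen only so the total echoes the McQueen et al. number; they are not from the paper.

```python
# Little's law: items in the system = arrival rate x average time in system.
discovery_rate_per_day = 10     # new zero-days appearing per day (assumed)
avg_days_until_patched = 250    # average lifetime before a fix ships (assumed)

alive_today = discovery_rate_per_day * avg_days_until_patched
print(f"~{alive_today} zero-day vulnerabilities in existence on a given day")

# As argued above, only a fraction are in hostile hands at any moment;
# many are found by the vendors themselves or by defensive researchers.
fraction_hostile = 0.05         # assumed
print(f"~{alive_today * fraction_hostile:.0f} of them usable by attackers")
```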

One point I have to contest in McGraw’s work is his claim that “What sometimes passes for cyber defense today – actively watching for intrusions, blocking attacks with network technologies such as firewalls, law enforcement activities, and protecting against malicious software with anti-virus technology – is little more than a cardboard shield”. I will admit that the defenses he lists can be maneuvered around: with the right vulnerabilities, one can slip past firewalls and code that restricts entry to a system; law enforcement is extremely difficult because attackers can spoof the origin of a program; and most anti-virus technology amounts to matching a virus’s signature against a database, which can be defeated with freely available rootkit tools that alter the signature and prevent detection. However, these are not the only protections available. FireEye, for one, sells specialized hardware and software that integrates into large company servers to scan the whole system, detect malware, isolate it, and, if needed, delete it. The FireEye system places the program to be scanned on a virtual machine and lets it run while watching for malicious actions, something completely different from what McGraw describes in his fatalistic view of cyber defense. I believe we are already developing the cyber defense McGraw calls for in his work, just not as specialized as what he demands.
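
To make the contrast concrete, here is a minimal sketch, entirely my own and not FireEye’s actual product, of the difference between the signature matching McGraw dismisses and the behavioral, run-it-in-a-sandbox approach described above. The hash and action names are hypothetical.

```python
import hashlib

# Signature approach: flag a file only if its hash is in a known-bad database.
KNOWN_BAD_HASHES = {"9e107d9d372bb6826bd81d3542a419d6"}  # hypothetical entry

def signature_scan(binary: bytes) -> bool:
    """Misses any variant whose bytes differ, even by one repacked byte."""
    return hashlib.md5(binary).hexdigest() in KNOWN_BAD_HASHES

# Behavioral approach: run the sample in a sandbox and judge its actions.
SUSPICIOUS_ACTIONS = {"log_keystrokes", "exfiltrate_data", "overwrite_boot"}

def behavioral_scan(observed_actions: set[str]) -> bool:
    """Flags a sample by what it does, regardless of what it hashes to."""
    return bool(observed_actions & SUSPICIOUS_ACTIONS)

# A repacked variant slips past the signature check but not the sandbox.
variant = b"repacked-malware-v2"
print(signature_scan(variant))                                 # False
print(behavioral_scan({"log_keystrokes", "exfiltrate_data"}))  # True
```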

Some questions I would like to pose to you:

  1. Do you believe that what we consider software security in the US is already sufficient against attacks by foreign nations, i.e., cyber war?
  2. Which do you believe to be more effective in the short and long term for security: software tailored to both functionality and security, as McGraw proposes, or the development of other methods that use security software to protect ourselves, as FireEye does?
  3. Do you agree with McGraw’s idea that, if cyber warfare is inevitable, the best offense would be a good defense?
  4. Considering that Iran’s alleged response to Stuxnet was reportedly to take control of a drone, is tracing the origin of a program as certain as McGraw claims when he discounts the possibility of an effective first strike in cyberspace?

Peter

Arms Control through Societal Verification: Invaluable or Ineffective?

In “Societal Verification: Leveraging the Information Revolution for Arms Control Verification,” authors Hinderstein and Hartigan propose a rather exciting idea: that arms control verification, like telecommunications or online shopping, could be transformed by the advent of the “Information Age.” Certainly it’s not an entirely novel concept; H&H mention the example of Internet users assisting in the analysis of vast amounts of satellite imagery for various purposes. Nor is their proposal ill-timed. The authors cite the transition to fewer individual warheads as well as the need for multilateral verification as factors that will drive a greater need for verification.

Yet upon closer inspection, such an approach may not be quite as effective as it appears. The authors lay out a number of potential uses for societal verification, which consist primarily of “defining patterns,” “looking for shifts,” “identifying outliers,” “filling in blind spots,” and “detecting signals.” Of these, the ones dealing with outliers and signals would appear to be most easily applied to societal verification; informing a large group of people to be on the lookout for a specific item or activity (such as in the DARPA red balloon challenge) could be highly effective. However, establishing patterns, especially around a heavily guarded facility such as an enrichment plant, could be considerably more difficult. If our societal “informants” are to be employed in the very casual way that this approach necessitates (otherwise, we’re simply hiring less-skilled inspectors), they’re unlikely to be willing to spend the time or effort required to map out specific goings-on or movements over a long period of time. Detecting changes in these patterns would present similar problems, as well as requiring that such patterns be supplied to the informants, introducing information leakage/confidentiality concerns.
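
As a toy illustration of the “identifying outliers” use case, and this is my own sketch rather than anything from the paper, imagine aggregating the daily count of crowd reports mentioning a facility and flagging days that deviate sharply from the baseline:

```python
import statistics

def flag_outlier_days(daily_report_counts, threshold=2.5):
    """Return indices of days whose report volume is anomalous."""
    mean = statistics.mean(daily_report_counts)
    stdev = statistics.stdev(daily_report_counts)
    return [day for day, n in enumerate(daily_report_counts)
            if stdev > 0 and abs(n - mean) / stdev > threshold]

# Simulated counts of reports near a site; the spike on day 9 might mean
# unusual activity, or a coordinated disinformation campaign.
counts = [4, 5, 3, 6, 4, 5, 4, 3, 5, 42]
print(flag_outlier_days(counts))  # [9]
```

Note that such a detector cannot tell a genuine anomaly from a handful of planted reports, which is exactly the validation problem discussed below.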

The authors’ own list of challenges that such programs would face offers yet more discouragement. Validation is a particularly worrying issue; putting one’s trust in a single report of an inconsistency or treaty violation is a dicey proposition indeed when nuclear issues hang in the balance, and while overlapping reports can partially mitigate this, the potential for “disinformation campaigns,” as the authors term them, seems overwhelmingly high. Indeed, the monitored country need only hire a few of its citizens to relay cross-corroborating false reports to ruin such a system. Interference is also a concern; the authors mention that in some countries Internet access could be temporarily restricted, thwarting continual verification efforts, while in others (such as China) governments may closely track users’ online activities and punish those who break the law. This concern in particular receives far too little consideration: if reporting on nuclear activity could be considered treason by a particularly authoritarian regime (and really, is relaying information on your own country’s military activity to a rival government in exchange for compensation not tantamount to espionage?), who is going to be willing to risk imprisonment or even death for what would inevitably be a very small reward? Additionally, some of the countries that the Western world is most concerned about having or acquiring nuclear weapons are so restrictive that very, very few of their citizens have the sort of open Internet access this method requires (North Korea is a good example of such a country).

The above concerns should not be interpreted as a total dismissal of societal verification. Simple crowd-sourced analysis of satellite imagery has the potential to be of great value for arms verification, as does the “outlier” spotting method (provided that the aforementioned interference concerns are overcome). Certainly, with the increased demands for verification, and the vast resources required for traditional verification approaches to meet this need, we cannot afford to overlook any potential solution. I believe that societal verification has significant potential, but we must not overlook its weaknesses.

My questions to you:

  1. Do you believe that societal verification can overcome its many challenges and become a trusted verification method?
  2. Are there novel approaches to arms control that societal verification offers that were not discussed in this paper?
  3. Would you be willing to participate in a societal verification program in your own country? Another country?
  4. Do you believe that the recent spate of online privacy concerns endangers societal verification?

Elliot

Networks of Individuals or Individual Networks?

I read the first two chapters of Duncan Watts’ book, Six Degrees, which cover many of the concepts we discussed in class. One thing Watts brings up a couple of times in these chapters, however, is the unreliability of the data. For example, he notes that real social situations don’t actually reflect random social connections: most of the people we know are not random people from around the world, but people who live near us. Because our connections are not random, modeling them is more difficult.

Another point that I thought Watts made really well was the difference between studying the network and studying the individuals. I was wondering which you all thought was the better way to assess networks. Personally, I think the best way is a combination of both methods. This was best shown through the story Watts tells about power surges in the British electrical grid caused by people putting their kettles on at halftime during soccer matches: the behavior of the people is individual, but they are all part of a network.
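
To see Watts’s point about non-random connections in action, here is a short sketch (my own, using the networkx library) of the Watts–Strogatz small-world model from the book: start from a ring lattice where everyone knows only their nearest neighbors, then rewire a small fraction of ties at random.

```python
import networkx as nx

n, k = 1000, 10  # 1,000 people, each initially tied to their 10 nearest neighbors
for p, label in [(0.0, "ring lattice (all local ties)"),
                 (0.01, "small world (1% rewired)"),
                 (1.0, "fully random graph")]:
    G = nx.connected_watts_strogatz_graph(n, k, p, tries=200, seed=42)
    print(f"{label:30s} avg path={nx.average_shortest_path_length(G):6.2f} "
          f"clustering={nx.average_clustering(G):.3f}")
```

Rewiring just 1% of the ties collapses the average path length to near that of a random graph while the clustering stays lattice-like, which is Watts’s explanation for how “six degrees” can coexist with mostly local acquaintances.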

Also, on a (slightly related) note: In COS 126, we were discussing this same concept and the professor showed us this website: www.oracleofbacon.org. It tells you the degrees of separation between Kevin Bacon and any other actor. Enjoy! — Cara

Biosecurity: Can We Protect Ourselves?

I would like to bring up one of the readings that was not talked about last week, “Biotechnology and Biosecurity.” Although we began to talk about ways of censoring biotechnology and potentially halting some research, I think the topic of biosecurity is one worth revisiting. In a world that is quickly developing new technologies and new ways to manipulate biologics, how can we protect ourselves?

In this reading, the authors start out by noting that the world of biotechnology is developing as fast as, if not faster than, the computing world. The computing world gets far more recognition for its advances, and many people in the general population fail to realize that biotechnology is developing at a comparable rate. One question I have for you all: do you think there is a dearth of knowledge about biotechnology and its potential benefits and threats among the general population?

The authors point out two major challenges to regulating biotechnology development: (1) biotechnology develops much faster than any treaty could be negotiated, and (2) it is difficult to impose inspections on a technology that keeps getting smaller. Unlike the nuclear weapons we discussed earlier in the course, biotechnology and bioweapons can be much smaller and can sometimes leave no trace. Another major threat is that the technology is becoming something anyone can replicate, making it an increasingly prevalent danger. Bioweapons can be made far more cheaply, and with far less technical expertise, than other weapons.

The authors note several ways we can address biotechnology risks, but each plan has its own challenges. (1) The authors suggest censoring the publication of biotechnology research (we talked about this on last week’s blog, so I will not go into detail). (2) Another method calls for international negotiations over restrictions on biotechnology; however, the current international climate makes this very unlikely. (3) Another possible control mechanism relies on scientists to self-censor their work; however, this puts undue pressure on scientists, does not guarantee that any regulation will actually take place, and would likely stifle the flow of knowledge in the scientific community. (4) Proper disease control relies on countries sharing knowledge and disease samples with one another. (5) We must work on disease detection and response in order to prepare ourselves for the possibility of bioterrorism.

The challenges of controlling bioterrorism are much greater than those of controlling nuclear terrorism, since nuclear weapons are more difficult to make and easier to regulate. Bioterrorism is just as serious a threat, however, and policy makers are increasingly looking for ways to deal with the problem.

Some final questions I would like to ask are:

  1. Do you think any one of these strategies is better than the others?
  2. Is a combination of strategies more likely to be effective?
  3. Will none of these strategies work? Are we fighting a losing battle?
  4. Do you have any ideas for how to go about controlling bioterrorism or the spread of potentially harmful biotechnology?
  5. Do you think that bioterrorism can cause “mutually assured destruction,” as is the case with nuclear warfare?

Samantha

Is Bioterrorism a Likely Threat?

As we have read, with an appropriate bioagent and an appropriate dispersal mechanism, biological weapons have the potential to be very dangerous. We have also learned that the technology involved isn’t that complicated and, due to its dual-use nature, is already within our reach. Yet there have been very few examples of biological weapons use throughout history.

The 1995 Aum Shinrikyo sarin gas attack in the Tokyo subway system was technically a chemical attack, but the case offers insight into the biological weapons question. The group spent a great deal of effort and energy trying to produce a viable biological weapon, and it failed. Its experience illustrates many of the important challenges non-state actors face when they try to produce biological weapons.

First, there is a difference between explicit and tacit knowledge. The biological weapons “recipe” might look easy on a page, but it requires extensive expertise and know-how. Second, once you have an appropriate agent, you have to figure out how to effectively disseminate it. Third, resources must be efficiently allocated. This is especially challenging for a non-state actor with limited resources. These are just a few of the challenges bioweapons pose.

However, this example also shows the determination of some terrorist organizations. Aum Shinrikyo spent years on the project, and while it wasn’t able to create a viable bioagent, it did manage to create a chemical weapon. That isn’t something that should be ignored. The group also had an entire biological weapons program in place; it just never managed to create a viable pathogen. Chyba cites the fact that biological synthesis capabilities are increasing at least as fast as, if not faster than, Moore’s Law. As biotechnologies become cheaper and more accessible, there is no guarantee they will remain out of the hands of terrorists.

In a previous blog post we discussed the probability and danger of a nuclear terrorist threat. How does the biological weapons case compare? Does the fast pace of scientific advancement make this something we should worry about? Or are bioweapons so difficult to produce that terrorists will fail as Aum Shinrikyo did, or won’t even attempt them? — Liz

On Biotechnology Research

The National Research Council’s report, Biotechnology Research in an Age of Terrorism, provides basic guidelines for the future of safe biotechnology research. It recommends educating researchers on the potential misuse of their research (see the footnote below), creating international organizations and standards (i.e., an IAEA of biotechnology), requiring oversight of “experiments of concern,” and reviewing scientists’ publications. Since biotechnology research is “dual-use,” the report acknowledges that any regulations must not hinder the advances that benefit society, particularly in relation to health.

However, while the report acknowledges the impact biotechnology has on society, it forgets the converse. As we discussed with MacKenzie’s work on missile accuracy, societal factors influence the way research is conducted, and biotechnology is no exception: its research requires a team working through trial and error within an institutional framework. I therefore think that treating the publication of sensitive information as the central security threat, without fully considering what actually went on in the lab, rests on some incorrect assumptions.

For instance, when providing examples of potentially dangerous publications, the report references the synthesis of the poliovirus genome in the Wimmer lab, as well as the mousepox study whose scientists, it asserts, used “standard and quite simple procedures for incorporating the IL-4 gene into the mousepox genome.” Because the methods were so standard, the report argues, publishing them provides a “blueprint for terrorists.”

Kathleen Vogel’s article, “Framing biosecurity: an alternative to the biotech revolution model?”, pushes back against the supposedly straightforward nature of these procedures. She notes that the results hinged on knowledge gained from years of research and on practices the lab itself had developed. She concludes that the Wimmer experiment was “not based on cutting edge technologies, but was rooted in more evolutionary and well established laboratory practices and techniques” (Vogel). In other words, synthesizing a poliovirus is not as cut and dried as the report suggests; the technological breakthrough was a product of sustained research and years of experience.

In my opinion, this undermines the report’s recommendation to limit scientific publications, even at the level of self-governance. Because of the institutional knowledge required, we should worry less about what specific information is made public. For similar reasons, it’s hard for me to imagine ‘amateurs’ reading a paper that discusses how to synthesize a poliovirus and producing a lethal garage-made virus the next day. In terms of bioterrorism, I would be more worried about researchers taking their experience and “going rogue.” To curb this in the future, I would not be surprised if researchers were eventually required to hold credentials or clearances to work with certain materials.

Do you think the omission (censorship?) of certain methods is an effective tool to manage dual-use research in relation to the other recommendations? If so, who decides how the methods should be edited? The scientists? The publishers? The government? Furthermore, is there anything that you would change about the report’s recommendations? — Tori

Footnote: I believe these discussions about the potential hazards of research should also include the accidental release of biotechnology. For those interested: in the field of synthetic biology, suggested safeguards include working with auxotrophic organisms and gene-flow barriers.

Biological Weapons: From the Invention of State-Sponsored Programs to Contemporary Bioterrorism

Jeanne Guillemin brings up important ethical questions about biological weapons in light of other weapons programs in history. She states, “the lack of use of biological weapons [is] an unsolved puzzle in the military history” (8). In order to explain this “unsolved puzzle,” she surveys a variety of factors that distinguish biological/chemical weapons from other types of weapons and concludes, “for the most of the last century […] the law and custom supported by an empowered public, technological drawbacks, widespread military disinterest, government leadership, and the reckoning of the consequences of use – have over the years reduced the risks of biological weapons, with much left to chance” (10).

In describing factors that contribute to restraints on the use of biological weapons, Guillemin emphasizes key policy makers’ and military commanders’ “aversion to chemical and biological weapons” (9). President Franklin Roosevelt believed that both chemical and biological weapons were “uncivilized and should never be used” (9). She also notes, “Strangely enough, Adolf Hitler, who did not hesitate at mass murder by poison, was also averse to chemical and biological weapons” (9).

This fact raises a set of questions. What makes biological weapons more “morally repulsive” and “inhumane” than nuclear weapons? Is it rational that many policy makers and military commanders perceived biological weapons as somehow “less ethical” than nuclear weapons? Is it just a psychological repulsion attached to easily visualizable effects of people slowly suffering and dying from germs over an extended period of time?

On the other side of the debate, early advocates of biological weapons argued that chemical and biological weapons could actually be “a higher form of killing” and “a humane alternative to high explosives because they avoided battlefield blood and gore” (6). Compared to nuclear weapons, biological weapons were also in a way “advantageous because they did not destroy buildings or bridges” (6).

I think one compelling argument for making an ethical distinction between biological and nuclear weapons is that the former are, by design, intended to kill the civilian population of an enemy country. This is inconsistent with the “just war tradition,” which mandates that combatants be distinguished from non-combatants and that the latter not be targets of military attack. Guillemin also notes that dehumanizing enemy civilians as objects to be “efficiently and predictably infected with disease” is inhumane (7). Do you think these reasons provide sufficient grounds to deem biological weapons an ethically worse option than nuclear weapons? What about the historical precedent of using nuclear weapons against civilians during WWII?

One can also consider the distinction between biological and nuclear weapons from the perspective of the scientists involved. Guillemin asks, “how could biologists and physicians devote their energies to weapons patently aimed at civilians, with no other purpose than to kill life?” (11). As we mentioned in class, there is arguably a clear dividing line between nuclear weapons research and general scholarship for physicists, but the line is blurrier for biologists: the very medical knowledge and biotechnology required to improve human welfare, such as illuminating the prognosis of diseases and understanding the mechanisms of pathogens, is intrinsically linked to developing effective biological weapons.

Taken together, what do you think are possible ethical distinctions between biological and nuclear weapons? Do you think there are any? — Jean

Contagion

First of all, I found Contagion a very entertaining movie. The director did a great job balancing multiple plot lines while giving the audience an illuminating account of the fallout from an unprecedented epidemic. The way the movie weaves the human experience throughout the plot makes the film all the more realistic.

But enough of the film review. What I want to discuss in this blog article is the global severity of potential future epidemics. *Spoiler alert*: at the end of the movie, we find out that the virus originated with a presumably infected bat that flew into a pig pen and dropped a piece of food that was eaten by a pig; the pig was then brought to market, and the infection spread from there. Although the movie is just a movie, it doesn’t seem out of the question that diseases already existing in nature could mutate into dangerous microbes capable of infecting humans. After all, that is essentially what occurred with the H5N1 avian influenza virus. According to the CDC, the HPAI H5N1 viruses circulating among birds have evolved, and continue to evolve, into different subgroups called “clades.” What if a pathogen similar to H5N1 mutated into something much more contagious, affecting many more humans? With the severity of the disease so great, are we prepared to handle such a crisis?

According to Contagion, we are not. The virus easily spread all over the world as the main character, the original case, travelled from Hong Kong to Chicago and then Minnesota, spreading the disease as she went. Because of the interconnectedness of the world, the speed at which the pathogen spread far outpaced the reaction of the Centers for Disease Control and Prevention and the World Health Organization. The film also highlighted the inefficiency of government oversight: a private doctor had to go against the CDC’s orders in order to grow the virus and construct a vaccine. Who knows how long it would have taken had the CDC successfully kept him from further testing on the virus.
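
The film’s central dynamic, spread outpacing response, drops out of even the simplest epidemic math. Below is a basic SIR (susceptible-infected-recovered) model; the five-day infectious period and the reproduction number of 4 are assumptions I chose for illustration, not figures from the movie or the CDC.

```python
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    """Classic SIR model: susceptible -> infected -> recovered."""
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

gamma = 1 / 5.0          # recovery rate: ~5-day infectious period (assumed)
R0 = 4.0                 # each case infects ~4 others (assumed)
beta = R0 * gamma

t = np.linspace(0, 120, 121)        # days
y0 = [1 - 1e-6, 1e-6, 0.0]          # one case in a million people
S, I, R = odeint(sir, y0, t, args=(beta, gamma)).T

print(f"Peak: {I.max():.0%} of the population infected at once, "
      f"around day {t[I.argmax()]:.0f}")
```

With those numbers the modeled infection peaks within a few weeks, while the film’s vaccine takes more than three months to develop and distribute; the gap between those two clocks is the crisis the movie dramatizes.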

Another potential risk the film describes is the relationship between government and private pharmaceutical companies in such a situation. There is an obvious conflict of interest between a government hoping to maintain societal order and a private industry pursuing profit, which in this circumstance would be quite large. Jude Law’s character, a self-proclaimed conspiracy theorist, is the best example of this conflict. He pretends to be sick in order to show that a remedy called forsythia helps cure the disease. In the end, however, we find out that his plot was mainly to help investors make money as demand for forsythia skyrocketed.

The movie also shows that the process of creating and distributing vaccines to the public is quite slow, taking more than three months and costing millions of lives. Because of government-issued quarantines and a dearth of essential supplies, the rule of law erodes into nothing, creating anarchy everywhere and leading to looting as well as murder. How does the threat of novel pathogens compare to the global problems we have discussed in the first half of this semester? Are there links between these hazards? How can we mitigate the possibility of future pandemics? Should we be doing more to strengthen our infrastructure and safety precautions against novel diseases? — Myles

Reading Between the Iranian Lines

In a recent interview meant to reassure the international community, Dr. Ali Akbar Salehi, the head of the Atomic Energy Organization of Iran in Tehran, managed to accomplish exactly the opposite. He claimed that the recent partial interruptions to nuclear activity had been entirely voluntary and predetermined internally. Not only that, he explicitly downplayed the role that economic sanctions and negotiations, specifically the Geneva interim agreement, played in achieving these results. Salehi went on to extol the millennia-old achievements and virtues of the Iranian nation and effectively challenged the United States to violate the Geneva deal, which temporarily lifts a set of economic sanctions, arguing that if that were to happen, Iran would restart producing 20% enriched uranium. Finally, he accused the IAEA (and, indirectly, Israel and the U.S.) of not expressing genuine concerns and of using Iran’s nuclear activity merely as an excuse to put pressure on the Middle Eastern country.

The tone of the interview is without a doubt very worrying. The vigor of this “us vs. the rest of the world” motif in particular is cause for concern. Many political pundits forecast that with the election of Hassan Rouhani and, perhaps more importantly, the quiet exit of Ahmadinejad, official Iranian declarations would change drastically. However, while great strides have been made in other respects (the Joint Plan of Action, visa concessions, etc.), very little has changed in the rhetoric surrounding Iran’s nuclear activity. Why do you think that is? Might it be that the Iranian administration still wishes to pander to local extremist factions, or that the sanctions ultimately did have a crippling effect on the economy? Or is there some other underlying reason? — Tommaso

The Incredible Economics of Geoengineering

Barrett’s focus is the different incentives states face with respect to adopting geoengineering programs unilaterally rather than incorporating geoengineering into existing climate change policy. He takes as given that conventional policies centered on reducing concentrations of greenhouse gases are both expensive and hindered by a free-rider problem; moreover, the incentives for countries to reduce emissions are weaker than the incentives to develop and deploy geoengineering unilaterally. He treats geoengineering and emission reduction as substitutes, yet calls for a policy framework that includes emission reduction, funding for R&D into new energy technologies, and geoengineering with adaptation assistance for poorer countries. However, a lack of commitment from states that stand to benefit from climate change (in the short run) makes verification of such a three-pronged climate policy regime difficult. In general, the low costs of geoengineering make it difficult to secure commitments from states not to pursue it unilaterally. And since one country can offset more than its own greenhouse emissions through a unilateral policy, verifying compliance or non-compliance becomes much more challenging.

The central question of Barrett’s article is how we can deter states from adopting unilateral geoengineering programs when the costs of doing so are so low. It is a question to which I don’t think he offers a convincing answer. He mentions the possibility of temporary uses of geoengineering to “buy time,” effectively smoothing humps in concentrations until an international policy for stabilizing them is agreed upon. However, even temporary uses of geoengineering erode the credibility of emission reduction policies. The question is further complicated by countries like China, which have benefited from climate change and whose continued growth requires at least current levels of greenhouse emissions. The countries most susceptible to climate change happen to be the ones least able to pursue and develop geoengineering programs. Barrett views the effective curbing of climate change as a global public good, and, as such, there is a question of whether the same countries financing geoengineering projects should have sole decision-making power. This problem of governance is the greatest danger facing geoengineering policy decisions, since acting unilaterally carries a greater incentive than acting within an institutional framework (a toy calculation of this free-rider logic follows below).
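
To illustrate the free-rider logic Barrett invokes, here is a toy public-goods comparison; all the payoff numbers are invented for illustration and are not Barrett’s.

```python
# Emission cuts: each cutter pays a private cost while the benefit is shared
# by all countries, so a lone cutter loses and everyone else free-rides.
n_countries = 10
cost_of_cutting = 5.0      # private cost of reducing emissions (assumed)
benefit_per_cutter = 1.0   # benefit to EACH country per country that cuts (assumed)

alone = benefit_per_cutter * 1 - cost_of_cutting                # -4.0
together = benefit_per_cutter * n_countries - cost_of_cutting   # +5.0
print(f"cut emissions alone: {alone:+.1f}   all cut together: {together:+.1f}")

# Geoengineering: low cost, large private benefit, no cooperation required,
# which is why unilateral deployment is so hard to deter.
geo_cost = 2.0                  # Barrett stresses this cost is low (value assumed)
geo_benefit_to_deployer = 6.0   # assumed
print(f"deploy geoengineering unilaterally: {geo_benefit_to_deployer - geo_cost:+.1f}")
```

Cutting alone is a net loss even though universal cuts leave everyone better off, while unilateral geoengineering pays for the deployer all by itself; that asymmetry is the deterrence problem Barrett cannot quite resolve.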

Barrett’s proposed next steps call for two international institutions, the Intergovernmental Panel on Climate Change and the Framework Convention on Climate Change, to further examine geoengineering as a viable addition to the climate change regime. He suggests mandating how and when geoengineering may be used and how its costs should be shared. A question he leaves unanswered, and one in which I am particularly interested, is how these institutions would prevent a country’s misuse of geoengineering programs when the incentives in place do not encourage compliance. In other words, defection seems highly likely, at the expense of countries that lack the means to develop geoengineering programs to offset the negative externalities of high-growth economies like China. — Tyler

Climate Change: A Race Against Time? Or Already Too Late?

It’s clear to me, after reading the World Bank’s report on climate change, that humanity faces a serious problem. Not that I wasn’t aware of it before; I’d known about it ever since An Inconvenient Truth hit theaters. Rising sea levels flooding coastal areas, causing mass emigration, competition for resources, and ethnic conflict. Crop failures. Extreme weather events becoming commonplace. Extinction of species and collapses of entire ecosystems due to acidified oceans and drought. And all it takes is a 4-degree-Celsius rise in the global average temperature, a warming already underway as we speak.

Climate change doesn’t have the Hollywood-esque scare factor of, say, global nuclear war (unless you count the atrocity that was The Day After Tomorrow); it’s a lot easier to get worked up about the mass extermination of humanity and the collapse of civilization than about droughts in far-off countries and people’s timeshares getting wiped out by hurricanes. Arguably, that makes climate change all the more dangerous. It’s slow and insidious, manifesting its effects over time; the generations that began the process will, in all likelihood, never live to see its full effects. People worry about the immediate future: it’s human nature to neglect long-term risks in order to attain short-term goals, especially when we’re not even fully sure what those risks entail. We see this demonstrated in the USA and other countries that refuse to place restrictions on emissions on the grounds that doing so would hurt economic productivity, or drive up fuel costs, or any of a myriad other reasons. Reinforced by a small army of fossil fuel industry lobbyists and spokespeople, the idea persists that climate change is a myth, a scam by scientists for unknown purposes, and that even if it is real, doing anything about it is unthinkable. This may seem hard to believe in the rarefied intellectual atmosphere of Princeton University, but polls, cable news channels, and the actions of governments and politicians attest to how widespread the belief is. (For anecdotal evidence: in my small, rural hometown, it’s quite common for people to remark that “global warming” can’t be real because there’s snow on the ground, as if that meant it were not warmer anywhere else on the planet.)

It’s also clear, from reading the scientific literature provided, that it may be too late to prevent some of the change from occurring, even if we were to stop contributing carbon to the atmosphere completely. Various solutions have been suggested, ranging from the practical (cap and trade) to the fantastic (geoengineering, with its potential to create a whole new set of problems), but they have yet to be implemented. Even the actual efforts of participating countries to cut emissions have been lackluster. And the truth of the matter is that, as things currently stand, restricting emissions would have a negative impact on the economy: any costs incurred would be passed on to consumers. But does that justify potentially destroying the future for generations to come?

What should be done about climate change? Can anything be done? More appropriately, will anything be done? Is it too late to fix the problem, and humanity will suffer due to its own apathy and ignorance? Or will mankind pull together at the last second in this particular drama, and seek to collectively reduce emissions in the same manner now done with nuclear weapons – with an eye towards the future? What do you think? — Reed