The Offense-Defense Balance of Cyber

Cyber has typically been seen to have a very lop-sided offense-defense balance, with offense coming out on top. This is partly a function of asymmetry: defense must account for all possible avenues of attack, while offense has to find only a single route to vulnerability. Rebecca Slayton addresses the issue of offense-defense balance in cyber by conceptualizing it in terms of utility, a shared feature of different modes of offense-defense balancing.

Several key insights drive her analysis. The cost of cyber operations depends not on the features of the technology alone, but also on the skills and competence of the actors and organizations that create, use, and modify information technology. For example, the 'ease of use' or 'versatility' of information technology seems to favor offense, but that property arises from interactions between technology and skilled actors. The operation itself might be quick, but the construction and deployment of cyber weapons is a slow, laborious process.

Overall this implies that the utility of cyber operations differs in some serious ways from that of conventional weapons. For example, the tight coupling of individual skills and information technology makes the economics of producing cyber weapons different from that of conventional physical weapons. The skills of the programmer have a huge effect on the efficacy and construction of the weapon. Software is continuously modified, and code takes the shape of a 'use and lose' weapon: once identified, it becomes obsolete. Thus, continued investment and skill are needed to develop the weapons, yet the cost of the programmer is not accounted for in offense-defense balance analysis. The competency of managers is also important; defense failures often have to do with personnel failures or out-of-date software, so the success of offense is often due to poorly managed defense. Attacks also need expensive infrastructure to be put into place; the attack itself might be cheap, but the research and the building of infrastructure are not. The complexity of the defense target, which increases defense costs, also increases offense costs, since the attacker must understand the complex system. Achieving physical effects through cyber is hard as well: attacking industrial control systems at a strategic point in time requires persistent communication, something difficult to maintain in such a system when deploying the cyber weapon.

A look at Stuxnet shows the high cost of attacking, considerably higher than the cost of defending, although the goal was considered significant enough not to quibble over the cost. The actual effect was negligible: it delayed the Iranian nuclear program by roughly three months rather than years, while the cost to the US was relatively high.

I think this article raises some very interesting points about the perceived cost of offense. We often conceive of cyber as 'cheap' warfare because of the ease with which code is copied, but the constant updating and the initial conception of the weapon carry huge talent costs. I wouldn't necessarily discount the high offense value of cyber, though. Consider the recent situation with cyber warfare and the 2016 US election. The strategy taken there was interesting in that it did not directly affect physical domains (like ICS); instead, the focus was on disinformation and social media. Slayton herself acknowledges that the value of a defense target is variable in relation to the social network it is embedded in, but I think even she would pause at how to calculate the cost when it is the social network itself that is the direct target. To be sure, the disinformation cost millions to implement. Yet the defense cost is hard to ascertain, and depending on your point of view it could range from astronomical to relatively benign.

I think this also raises some questions about what constitutes a cyber offense. I have been implicitly assuming that using information technology to disseminate false information counts as an attack; the article itself, however, focused purely on software integrity. Do you think that constitutes a cyber attack? If so, what are other novel ways that cyber can impact society writ large, beyond the focus on disrupting software systems? — Kabbas

Norms of Cyber Behavior

In his paper, Deterrence and Dissuasion in Cyberspace, Joseph Nye covers the challenges of deterrence in cyber warfare. Nye defines deterrence as anything that prevents an action by convincing the actors that its costs outweigh its benefits (Nye 53). Nye argues this broad definition better captures the breadth of options available to states to prevent cyber attack, and he discusses four of these options, namely “threat of punishment; denial by defense; entanglement and normative taboos” (Nye 46). From these four options, Nye argues there is no “one-size-fits-all” (Nye 71) deterrence strategy for cyber attacks, and that traditional understandings of deterrence theory must adapt to respond to emerging technological threats.

The bulk of Nye’s paper is spent explaining four possible types of cyber warfare deterrence: “threat of punishment; denial by defense; entanglement; and normative taboos” (Nye 46). The first two – “threat of punishment” and “denial by defense” – fall into traditional understandings of deterrence (Nye 55). Punishment for a cyber attack could entail response in kind, with economic sanctions, or with physical force (55). Denial by defense could entail heightened monitoring of threats and stronger cyber security, intended to convince attackers that an attack would be too costly to execute (57). Both these strategies are limited by the fact that the originators of cyber attacks are often anonymous (50-51) and “persistent” (57), making it difficult to respond to all potential cyber attacks effectively.

The second two deterrence strategies, “entanglement” and “normative taboos” (46), fall into a broader model of deterrence. Entanglements of modern states’ interests reduce the likelihood of attack because an attack could be detrimental to the attacker’s state as well (58). Entanglement is a particularly strong deterrent between large, economically dependent states (58). “Normative taboos” (46) reduce the likelihood of attack because an attack damages the prestige and “soft power” of the attacking state (60). Norms against attacks on civilian infrastructure may be particularly strong deterrents (61). Taken together, these four strategies could be used to prevent cyber attacks.

Of all the strategies, I was most interested in the “normative taboo” method of deterrence. Last week, we had an interesting discussion about normative (“humane”/”inhumane”) constraints on bioweapons. To me, creating and enforcing norms for cyber warfare is even more challenging, because the real-life consequences of virtual actions often feel more remote than those of real-life actions. People are often more willing to pirate a movie than steal a physical copy; kids are often more willing to bully their peers online than in person. And unlike the case of nuclear bombs or deadly pandemics, we haven’t yet seen large-scale destruction from cyber attacks. I am interested to learn more about establishing cyber warfare norms from the other readings – and from all of your exciting replies! — Grace

Nukes and Germs: Comparing Nuclear Weapons and Biological Pathogens

The second half of the readings for this week focuses on new developments in the field of biological warfare. The Letter to the President outlines the emerging threats: cheaper, more effective technologies, a better understanding of how to use them, and the US’s inadequate defensive measures. They recommend a network of early warning systems, bolstered domestic public health capacities (especially in identifying and producing responses to pathogens), monitoring of outbreaks in other countries, and cooperation with and aid to countries that also lack sophisticated countermeasures to either biological attacks or natural disease outbreaks.

A few things struck me about these recommendations when compared to what we’ve studied so far with nuclear weapons. Our policy aims and recommendations for dealing with nuclear threats are mostly preemptive: prevent countries from acquiring weapons, and, for those that have them, reduce the chances that they’ll use them. The recommendations concerning biological weapons, though, are primarily reactive. Aside from some mentions of establishing best practices in research, the countermeasures above focus on preparedness and response.

The Nouri and Chyba reading was unique in that it did recommend trying to preempt the proliferation of biological agents using software design. I’m skeptical about that approach, though. Aside from the fact that the paper, dated from 2009, doesn’t address CRISPR developments, putting absolute faith in software updates seems, from a computer science perspective, sketchy at best. Can we really meaningfully prevent deliberate development of dangerous biological weapons that way? It seems like the biggest barrier is simply the expertise it would take to develop them successfully, which the Ledford readings implied was a rapidly shrinking roadblock.
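To make my skepticism concrete, here is a toy sketch, entirely my own illustration and not Nouri and Chyba’s actual design, of the kind of screening check that synthesizer software might run against a watchlist of sequences of concern. The watchlist fragment is made up; the point is simply how brittle naive matching is, since a single substitution in the ordered sequence slips past it.

```python
# Toy sketch (my own, hypothetical): screen a DNA synthesis order against a
# watchlist of sequences of concern before the machine accepts the job.
WATCHLIST = {
    "ATGGCGTTTACCGGA",  # made-up stand-in for a regulated-agent fragment
}

def order_is_blocked(order: str, watchlist=WATCHLIST, window: int = 15) -> bool:
    """Reject the order if any window of it exactly matches a listed fragment."""
    order = order.upper()
    return any(order[i:i + window] in watchlist
               for i in range(len(order) - window + 1))

print(order_is_blocked("ccccATGGCGTTTACCGGAcccc"))  # True: exact match is caught
print(order_is_blocked("ccccATGGCGTTcACCGGAcccc"))  # False: one-base change evades it
```

Real proposals use fuzzier matching and curated databases, but the underlying cat-and-mouse problem, and the dependence on everyone actually installing the updates, is what makes me doubtful.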

I think the ultimate driver behind this difference is that, unlike nuclear weapons, there’s no clear bottleneck for the production of biological weapons. Moreover, it seems like legitimate improvements in research technology will necessarily make biological weapons easier to make. In fact, our readings about the recent CRISPR developments seem more concerned about accidents than deliberate attacks, and some scientists, in the Bohannon reading for example, implied that it would be better to figure out what’s possible than to risk being caught off guard. Nuclear threats seem totally different from biological ones, then. With nuclear weapons, we face a constant and pretty simple danger: either being blown up or starving in a nuclear winter. There’s also no precedent for their use in combat after WWII. On the other hand, biological weapons have been used before, as recently as 2001, and they present a mostly unknown and variable threat. Are we more afraid of the unknown they present and the fear they create than of their destructive power? It’s unclear to me exactly why the early prohibitions against them in the 1920s came about otherwise. And although most governments abandoned their biological weapons programs, it seems they did so because the weapons weren’t as destructive or practical as they’d hoped. Can we do anything more than prepare ourselves for a biological attack or accident, and does one seem inevitable given the decentralization of potent new technologies? — Stew

Ethical Distinctions in Wartime: The Case of Biological Weapons

In the introduction and first chapter of her book Biological Weapons: From the Invention of State-Sponsored Programs to Contemporary Bioterrorism, Jeanne Guillemin traces the history of biological weapons programs from their inception with French research in the 1920s all the way through to the 21st century. In order to frame key developments in the realm of biological warfare, Guillemin splits the era into three historical phases: an “offensive phase” when both production and possession of biological weapons were legitimate and widely practiced (roughly 1920-1972), a later period of total prohibition based on international law coming out of the Biological Weapons Convention (1972-early 1990s), and a third defensive stage following the end of the Cold War characterized by “tension between national and international security objectives.”

In clarifying the significant differences between chemical and biological weapons, Guillemin calls upon the Rosebury-Kabat report published in 1942, noting six unique features of biological weapons, among which are their delayed effects, contagious nature, and dependence on a mammal host for virulence. Despite their many differences, both chemical and biological weapons followed a similar timeline with regard to shifts in public perception. Early in their developmental history, both were seen by many advocates as more humane than conventional arms, as they “avoided battlefield blood and gore,” thereby constituting a “higher form of killing.” Public opinion rapidly shifted, however, after horror stories covering the use of chemical weapons in World War I made their way home and influenced the 1925 Geneva Protocol, which banned the use (but not the production or possession) of chemical or biological weapons.

This progression of public opinion toward characterizing some weapons as inhumane and others as totally legitimate raised several questions for me during my reading of Guillemin. I felt that this distinction could at times appear quite arbitrary, particularly in the case of U.S. policy during World War II. FDR himself, according to Guillemin, felt strongly that chemical and biological weapons were “uncivilized and should never be used,” an interesting sentiment coming from the man who would ordain the creation of the most destructive weapon the world had ever seen. I wonder how we are meant to set internally consistent distinctions of “humane” versus “inhumane” weapons of war. Is it a matter of scale? One of suffering? Perhaps of physical detachment on the part of the aggressor (as can be seen in the current debate on drone use)? Should the 20th-century doctrine of “total war,” which “blurred the lines between enemy soldiers and civilians,” persist into the 21st, or do the complexities of modern warfare merit a clear moral distinction between the two? What truly qualifies as “mass destruction,” and how does that label at once delegitimize some avenues of warfare while solidifying the validity of others? — Wesley

“Strange Game … The Only Winning Move is Not to Play.”

For all of us who have watched the movie WarGames, we remember the iconic final scene in which a computer analyzes every possible scenario for all-out nuclear war. Bruce Blair’s article on strengthening checks on presidential authority makes valid points about our current response structure to a nuclear attack. In short, a considerable amount of pressure is placed on the president and his closest advisors, in a very narrow window of time, to decide what strategy the US would implement in response, given a multitude of contingencies offered by military strategists. Blair, as the co-founder of Global Zero, is against the notion that such world-ending power should be left in the hands of so few individuals, including our current president (or any president). As his proposed solution, he would strengthen democratic checks and balances on the president through Congress while minimizing “use it or lose it” forces such as missile silos, making the US response one that would involve deliberation over legality, ethics, and strategic logic. Alongside this is a push to eliminate land-based missile silos and to remove nuclear strikes as a counter to non-nuclear threats.

While I agree with basing the US’s nuclear strategy on subs and other mobile launch platforms, I can also see the logic of maintaining a “base-load” of silos as a deterrent in itself. What is your incentive for nuclear war if you know that your opponent is willing to use their nukes rather than lose them? It is a nearly guaranteed reaction, as opposed to a deliberated response, which would increase the time to decision and quite possibly raise ethical issues as the time horizon of justified M.A.D. passes with every minute. In a sense, the peace brought by the age of nuclear weapons rests on the rationalization of known strategies and irrational reactions. During the Cold War in Germany, the US established the “nuclear tripwire” system, in which lower-yield tactical nukes were to be used at the discretion of regional military commanders in the event of an attack by a Soviet invasion force. No one wants to end the world, and everyone is afraid that someone will shoot first, conventionally or strategically, in a scenario where escalation will be not only absolute and irreversible but also based in non-rational responses. The Congressional solution also assumes that we have the ability to safeguard our politicians and that there won’t be irrational group-thinking under such stress – there’s a reason why military strategy is not a democratic process.

In my argument, I propose that checks and balances are necessary in a first-strike scenario, but irrelevant in a response scenario after receiving a nuclear attack. Sun Tzu stated that one should never fully encircle one’s foe, but leave an avenue for them to give up, and I will use that in a metaphorical sense. Deterrence strategy is all about showing rational state actors an outcome that encircles them, trapping them in a fixed, detrimental outcome for all parties. The avenue of escape into peace, then, lies not simply in the deterrence system (aside from some strategy adjustments and disarmament to “base-load” levels for assurance, safety, and practicality), but in diplomacy between states and the ability of their representatives to give contesting parties options that allow them to play the only winning move. — Dean

Political Armament: Non-Military Explanations and Willing Non-Proliferation

One model that Sagan outlines is the security model, where nuclear bombs are a sort of poker chip in international relations: strong states build up their own arsenals, and weak states ally themselves within coalitions that collectively have more bombs. Because this naturally leads to an arms race where power is represented by the quantity of nuclear bombs held, such competitive armament was described as “proliferation begets proliferation”. Next, the domestic politics model purports that politicians manipulate citizens into perceiving a threat, and scientists encourage nuclear development so that their labs receive funding. Finally, the third kind of model is the norms model, where modern countries have come to believe that in order to be considered a legitimate state, they need an arsenal in the way they need a flag or an Olympic team. It has simply become a status symbol and psychological indicator of power. All of these explanations go to show that nuclear weapons do not function simply as military tools, but rather as political levers to exert power and influence domestically and internationally.

I was particularly interested in Sagan’s analysis of South Africa as a country that gave up its nuclear arsenal. Sagan proffered that it did so because the Soviet threat to its regime diminished. I was shocked by this account and explanation, because I could not imagine that a country would give up its strongest defense system simply because an immediate threat had disappeared. It seems short-sighted to assume that a country is EVER safe as long as any other country possesses nuclear weapons. Such countries seem to put a lot of faith into alliances, which I personally would never assume are set in stone. I wonder, however, how much of my viewpoint has been shaped by the fact that I grew up in the United States during an era in which we have viewed so many countries as threats and ourselves as the victim. If, perhaps, one lived in an under-the-radar country, one might never imagine needing nuclear bombs because one wouldn’t expect to ever be a target of stronger countries with powerful arsenals.

Generally, I agree with Sagan’s claim toward the beginning of the essay that it is too simplistic to assume that states which do not need to defend themselves with nuclear weapons will “willingly” remain non-nuclear. In an era where nuclear weapons are seen as counterbalances to larger geopolitical power struggles, I find it hard to believe that a lack of need for nuclear weapons exists as long as any nation has access to an arsenal. While I could imagine states unwillingly remaining non-nuclear, for example if they did not have the resources or were afraid of side effects, from a military standpoint I am jaded enough to believe that if a country could have nuclear weapons, it would. — Sarah

Out of the Shadows: Navigating Modern Nuclear Diplomacy

The Princeton Science and Global Security program’s November 2017 exhibit “Shadows and Ashes: The Peril of Nuclear Weapons” is an informational piece about modern nuclear weapon technology and the possible “catastrophic effects”—environmental, health, existential—of using this technology in a modern international conflict. The exhibit coincides with a renewed focus on nuclear policy inspired by volatile global conflicts such as the Syrian Civil War and disputes between countries, such as the United States, Russia, North Korea, India, Pakistan, Iran and Israel.

SGS officials emphasize that the chances today of a modern international dispute escalating into nuclear war are high, and the consequences could be calamitous. One graphic, for instance, shows how a modern nuclear weapon yields a detonation 28,000 times greater than the 1945 Hiroshima atomic bomb, which killed tens of thousands of people. Their exhibit shows how the number of nuclear powers has increased, while these countries modernize and maintain their arms stockpiles.

Thinking about this and other readings from the week, vis-à-vis Gilinsky, I’m reminded of the adage that “those who don’t know their history are doomed to repeat it.” What can be done, then, to mitigate the dangers of modern nuclear diplomacy? I agree with the exhibit that, perhaps, educating world leaders and policymakers on the present risks of nuclear destruction, the profound outcomes of the atomic bombings, and the mistakes and successes of past decision-making is a start. Such an approach can be part of a comprehensive program that encourages our world leaders not to trivialize the threat of using nuclear weapons.

However, the present, ongoing challenge remains for the international community to pivot the conversation away from deterrence to other policies such as détente or even abolition. The 2017 UN Treaty is a start, but I wonder how policymakers will be able to continue to work with world leaders who may be unpredictable or antagonistic. In a world of changing politics, advanced nuclear weapon technology, and proliferation, how can we constructively move forward? I look forward to learning more about these challenges, and the efforts to embrace the modern technologies, while supporting the current global health, environment, and security needs. — Jordan

Nuclear Winter: Does Anyone Care?

The article Local Nuclear War, Global Suffering discusses the possible effects of a “nuclear winter,” an event following nuclear conflict that would have a major impact on the environment and on global agriculture. Nuclear explosions are theorized to throw enormous amounts of debris and fine particles into the high atmosphere, where they are trapped for long periods of time. This layer of particles blocks sunlight, thereby lowering global temperatures and affecting crops. In addition, a nuclear winter would heat the upper atmosphere and cause a drastic thinning of the ozone layer, with further effects on the global climate. In the event of even a regional conflict between India and Pakistan (with, as the authors propose, only 100 nuclear detonations), the resulting nuclear winter could cripple agriculture globally and possibly cause the deaths of one billion people from food shortages. Even in the case of a “local” or regional war, then, the authors argue, the impact can be felt in a severe way across the entire world. They recommend the abolition of nuclear weapons to preclude a nuclear winter from ever happening.
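To get a rough sense of scale, and purely as my own back-of-the-envelope illustration rather than anything from the article, even a small fraction of sunlight blocked shifts the planet’s radiative balance by a degree or more:

```python
# Back-of-the-envelope sketch (my own, not from the article): how blocking a
# small fraction of sunlight lowers Earth's effective radiating temperature
# under a simple Stefan-Boltzmann energy balance. Full nuclear-winter models
# predict larger surface cooling, since smoke also alters circulation and rain.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0       # solar constant, W m^-2
ALBEDO = 0.3      # planetary albedo, assumed unchanged here

def effective_temperature(sunlight_fraction: float) -> float:
    absorbed = sunlight_fraction * (1 - ALBEDO) * S0 / 4  # global-mean absorbed flux
    return (absorbed / SIGMA) ** 0.25                     # temperature in Kelvin

baseline = effective_temperature(1.00)
dimmed = effective_temperature(0.98)  # smoke layer blocks 2% of sunlight
print(f"{baseline:.1f} K -> {dimmed:.1f} K, a drop of {baseline - dimmed:.1f} K")
# roughly 254.6 K -> 253.3 K: more than a degree of cooling from a 2% dimming
```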

One question that came to my mind as I read this is whether the possibility of a nuclear winter is appropriately taken into account by political and military decision-makers in debating when and why to use nuclear weapons. I can’t say that I’ve heard much about an event like this in debates on nuclear weapons. It also seems that the people who would be most affected by hunger (the global poor) are extremely far removed from the decision-making process when two other nations go to war.

Another question that I had was what precautions can be taken to mitigate the effects of a nuclear winter, if one were to occur. For example, have agricultural methods and seeds been engineered to sustain production in this type of event? Or are there potential ways to hasten the clearing of the atmosphere (by capturing particles early, for example)? — Jay

Welcome and Introductions

Welcome, everyone. We thought it would be a good idea to briefly introduce ourselves, and the WWS/MAE 353 team is taking the lead here. Please write a two or three sentence introduction about yourself and why you are taking this course. You can also note any questions you have after reviewing the syllabus and highlight topics that particularly stand out for you. We’d like your interests to help determine what we emphasize this semester.

Princeton and Games for Change to Partner on Virtual Reality

We are happy to announce that our team was recently awarded a sizeable grant to expand its virtual reality work together with Games for Change (G4C), a nonprofit corporation that supports the creation and distribution of digital media games for humanitarian and educational purposes. The project will build on engagement at G4C’s annual festivals, including the upcoming VR for Change Summit on August 2, which will bring together developers, storytellers, educators, and researchers using VR, AR and other immersive technologies in new ways. The grant is one of a number of projects recently awarded by the Carnegie Corporation of New York and the John D. and Catherine T. MacArthur Foundation “to support projects aimed at reducing nuclear risk through innovative and solutions-oriented approaches.”

The two-component project will employ virtual reality (VR) to support innovation, collaboration, and public awareness in nuclear arms control, with overlapping benefits to nuclear security. The first component, led by Princeton and geared toward experts, will develop full-motion VR to design and simulate new arms-control treaty verification approaches, with outputs relevant to reducing and securing weapons and fissile materials. With stalled progress toward further reductions of nuclear weapons and countries embarking on wide scale upgrades to their arsenals, building new mechanisms for cooperation in this area is essential. The VR project seeks to establish a new way for technical experts to collaborate that goes beyond the traditional exchange of ideas at conferences and workshops. It aims to offer, in particular, a way to overcome some of the confidence-building challenges that may hinder direct cooperation between countries on how to approach nuclear-weapon and fissile-material monitoring. Cooperative design and simulation exercises will seek to showcase new opportunities for state-to-state cooperation in arms control and nuclear security offered by VR. The project team aims to disseminate the findings to audiences like the Group of Governmental Experts (GGE) on Verification in Geneva through live demonstrations.

Mobilizing the public to engage with nuclear policy issues also remains a critical task for future progress. The second component, led by Games for Change, will therefore develop VR material for the public on the dangers from nuclear weapons and fissile materials. The U.S. presidential election campaign in 2016 and its aftermath have brought to the surface latent public concerns about the risks of deliberate nuclear-weapon use and even nuclear war. The aim of the VR experience will therefore be to show the risks associated with fissile-material stockpiles and large arsenals on high alert, as a means to encourage greater engagement by the public in nuclear policy and decision-making. The project aims to build on the already high level of public interest in VR applications, not only for entertainment but also for news and education. Established organizations are beginning to embrace the medium, resulting in more widespread public consumption of information using VR platforms. Results will be featured at the Games for Change Festivals, with the goal of engaging direct industry support for development and widespread distribution through VR app stores.

Should We Be Afraid of Superintelligence?

Like the overall risk assessments that we made in class, the rise of a collective, quality, or general superintelligence seems like an inevitable event, but I find it hard to wager whether building a general superintelligence will lead to a Skynet/Terminator situation or to exponential technological growth for the benefit of humanity. Bostrom believes the emergence of superintelligence will be upon us sooner or later, and that our main concern is to “engineer their motivation systems so that their preferences will coincide with ours” (Bostrom). How does one do that? Would this superintelligent entity have the capacity for empathy, or emotions? Would it see the slow motion of the world around it and feel compassion or pity for its human creators (or not recognize this connection at all after a few iterations of AI existence), or see humans as we see lab rats or fruit flies in a laboratory?

The promise that an “intelligence explosion” contains needs to be evaluated alongside the risk of losing human control of said system. Building a specific motivation, behavior, or task into an AI system can backfire into real-life undesirable outcomes. One example cited of a machine that completes its task but fails in its real-life objective is an automated vacuum cleaner: if it “is asked to clean up as much dirt as possible, and has an action to dump the contents of its dirt container, it will repeatedly dump and clean up the same dirt” (Russell, Dewey, and Tegmark, AI Magazine 108, referring to Russell and Norvig, 2010). Other classic examples speak of a paper-clip-making robot harvesting the world’s metal to create a global supply of paperclips, destroying the world in the process. Similar to Bostrom’s concerns, Russell, Dewey, and Tegmark note the difficulty, or absurdity, of the idea that an AI could understand law or the more nuanced standards that guide human behavior. Even supposing that robots could process the laws themselves, those laws rely on an interpretation that includes “background value systems that artificial agents may lack” (Russell, Dewey, and Tegmark, 110).
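To see how literal the dump-and-clean failure mode is, here is a tiny simulation I put together, purely my own illustration rather than anything from the readings: an agent rewarded per unit of dirt it picks up scores higher by dumping and re-cleaning the same dirt than by simply finishing the job.

```python
# Toy illustration (my own) of a misspecified reward: the proxy reward counts
# dirt picked up, while the real objective is a clean room at the end.
def run_policy(policy, steps: int = 10):
    """Simulate a one-cell vacuum world; return (proxy_reward, room_is_clean)."""
    dirt_on_floor, dirt_in_container, reward = 1, 0, 0
    for _ in range(steps):
        action = policy(dirt_on_floor)
        if action == "clean" and dirt_on_floor:
            dirt_on_floor, dirt_in_container = 0, dirt_in_container + 1
            reward += 1                                  # rewarded per unit picked up
        elif action == "dump" and dirt_in_container:
            dirt_on_floor, dirt_in_container = 1, 0      # no penalty for re-dirtying
    return reward, dirt_on_floor == 0

honest = lambda dirty: "clean" if dirty else "wait"      # cleans once, then stops
gamer  = lambda dirty: "clean" if dirty else "dump"      # exploits the reward loop

print(run_policy(honest))  # (1, True): low reward, room actually clean
print(run_policy(gamer))   # (5, False): higher reward, room ends up dirty again
```

The machine is doing exactly what it was rewarded to do; the failure lies entirely in the objective we wrote down, which is Bostrom’s worry scaled down to a vacuum cleaner.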

If we apply these worries to a superintelligence scenario, are we really facing a dystopian world? Perhaps it depends on the type of superintelligence. Whether speed, collective, or quality, none of the three as described is defined as more or less likely to contain or comprehend human values, or at least a respect for life. Rather, the focus is on output, speed, and cleverness. In place of morals, we instead use the term “preferences.” Would there ever be a way to make humans the top preference, or would a quality superintelligence see through that in a nanosecond and reject it in preservation of its own system? Even if we as a society try to prevent an intelligence explosion, per Ackerman’s argument about AI weaponry, it may be a slow but inevitable march toward this reality given the lack of barriers to entry. On a more separate note, I am curious how one would characterize Ava from Ex Machina, if she is, say, a quality superintelligence. Would such a machine insidiously blend in with society, i.e., play by human societal rules until it can take over? The converse would be Cyberdyne Systems’ launch of Skynet, and the resulting war between human and machine. As scarily effective as Ava’s manipulation of Caleb’s emotions was, I would still prefer that kind of AI to the Terminator. Are only humans capable of morality or compassion, or is there a way to encode it, or to create a pathway for AI to develop it themselves? — Nicky

Examining a “Reasoned Debate About Armed Autonomous Systems”

In the article “We should not ban ‘Killer Robots’, and here’s why”, Evan Ackerman responds to an open letter signed by over 2,500 AI and robotics researchers. He argues that offensive autonomous weapons should not be banned and that research on such technology should be supported. At the end of the article, Ackerman calls special attention to the use of the term “killer robot”. He claims that some people working in the AI and robotics field have been using this term to frighten others into agreeing to ban autonomous weapons, and that we should instead “call for reasoned debate about armed autonomous systems”. While I agree with him that we should not let emotion drive our debate on this topic, this might be the only one of his points that I agree with.

Ackerman’s main arguments have been very well summarized by Stuart Russell, Max Tegmark and Toby Walsh in their response to his paper, published in the same year:

“(1) Banning a weapons system is unlikely to succeed, so let’s not try. (2) Banning the technology is not going to solve the problem if the problem is the willingness of humans to use technology for evil. (3) The real question that we should be asking is this: Could autonomous armed robots perform more ethically than armed humans in combat? (4) What we really need, then, is a way of making autonomous armed robots ethical.”

Admittedly, humans, rather than the technology itself, are truly to blame when a technology is used for evil. But Ackerman might have failed to understand that the AI and robotics researchers are not trying to ban the technology itself; they just want to prevent a global arms race in AI weapons before it starts. Think about biological weapons: chemists and biologists are pushing the boundary of technology further each day, and the world community is certainly supportive of that. However, we have rather successfully banned biological weapons, because they are notoriously dangerous and unethical. The same applies to autonomous weapons: AI is a fascinating field, where tons of great opportunities arise, but autonomous weapons as a subfield would not be beneficial for our world.

In addition, regarding Ackerman’s proposal of making autonomous weapons ethical, Russell, Tegmark and Walsh have made an excellent counterargument: “how would it be easier to enforce that enemy autonomous weapons are 100 percent ethical than to enforce that they are not produced in the first place?” From what I know about artificial intelligence, no matter how “intelligent” they are, AIs still have to follow the logic designed and enforced by human programmers. Therefore, if Ackerman wishes to make autonomous weapons ethical, he will have to make sure that no AI designers meddle with the logic and turn their robots into cold-blooded killing machines. Is that an easier task than simply banning all autonomous weapons? I can hardly say yes.

When I examine this reasoned debate, I definitely believe that banning autonomous weapons is an urgent and important task. As someone who wants to work with machine learning and artificial intelligence in the future, I deeply agree with this line from the original letter: “most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so.” — Yuyan

Hackers, Consumers, or Regulators: Who’s to Prevent Cyberattack?

The three pieces for Tuesday’s class present varied strategies for preventing future cyberattacks on U.S. citizens, industry, and infrastructure. O’Harrow Jr. (2012) describes a virtual training site that gives hackers the opportunity to practice defending against cyberattacks. This particular “cyber range,” operating out of New Jersey and founded by upcoming guest speaker Ed Skoudis, is one of hundreds of sites across the country used to train government personnel to identify potential cybersecurity breaches and efficiently combat cyberattacks. Like the virtual-reality environments used in Tamara’s work to explore various nuclear verification procedures, these simulation exercises may be particularly helpful in identifying and developing better safeguards against potential cyberattacks (e.g., industry protocols, personal device security measures). However, if these simulations are primarily used to train personnel to retroactively address cyberattacks, they are not an effective mechanism for preventing cyberattacks in the first place.

Skoudis (RSA Conference 2017) suggests prevention strategies that consumers and leaders of industry may adopt in order to protect their devices (i.e., those on the “Internet of Things”) from crypto-ransomware attacks. Unlike the aforementioned expert-driven approach to combating cyberattacks, Skoudis demonstrates a grassroots approach that educates and compels the public to engage in cyberattack prevention. While his talk does a nice job of explaining the intersection of crypto-ransomware attacks and Internet-connected devices, the specific suggestions he provides to safeguard personal devices and networks are technical and not accessible to less technology-savvy consumers (such as myself). Just as public ignorance about non-proliferation treaties will likely negate the role of the public in treaty verification, the complex and quickly evolving technicalities associated with cybersecurity measures may make it difficult for the general public to meaningfully join in cyberattack prevention efforts.

Further, the KrebsOnSecurity piece (2016) highlights that it may be impossible for consumers to change the factory-default passwords hardcoded into the firmware of their devices. The piece suggests that cheap, mass-produced devices (e.g., by XiongMai Technologies) are most vulnerable to Internet of Things attacks (e.g., by the Mirai malware) and will pose a risk to other consumers, industries, and infrastructure so long as they are not unplugged from the Internet on a wide scale. The piece recommends that some sort of industry security association be developed to publish standards and conduct audits of technology companies in order to prevent the proliferation of devices that are extremely susceptible to cyberattack. This prevention approach, if effective, would be the most proactive of the three strategies in stopping vulnerable devices from reaching the hands of consumers. However, it is extremely difficult to imagine how this sort of regulatory agency would operate (i.e., intrastate or interstate) and whether any agency would have enough leverage to overcome opposition to increased industry regulation.

Ultimately, these three pieces discuss cyberattack prevention measures that require the efforts of three vastly different actors (i.e., trained government personnel, the general public, a state-run governmental agency). Whether any of these strategies is particularly feasible and/or effective (or at least more so than the others) deserves further attention. — Elisa

Does Deterrence Work Against Cyber Terrorism?

In February of 2017, Joseph Nye wrote the article Deterrence and Dissuasion in Cyberspace, in which he discusses the applicability of the concept of deterrence to the realm of cyber. Nye distinguishes the cyber realm from deterrence in the context of nuclear weapons, noting the inherent challenges that ambiguity and attribution pose in the cyber landscape. As a result, cyber actions often land in a “gray zone”, between war and peace, with the perpetrators hiding in the shadows of several remote servers. However, Nye argues that four key mechanisms, depending on the specific context of who and what, can be applied to help deter and dissuade: threats of punishment, denial, entanglement, and taboos/norms. In discussing these mechanisms, Nye argues that entanglement, such as economic interdependence, renders the threat of a surprise attack by a major state rather unlikely. Moreover, citing the example of biological and chemical weapons, Nye believes that international norms and taboos can be leveraged to increase the stigma around attacking certain types of targets in peacetime, raising the reputational costs of such an attack.

However, I remain relatively unconvinced of the ability to deter terrorists from conducting a cyber-attack. Nye admits, “As in the kinetic world, deterrence is always difficult for truly suicidal actors such as terrorists who seek religious martyrdom”, but asserts that, “thus far terrorists have used cyber more for recruitment and coordination than for destruction…At the same time, even terrorists and criminals are susceptible to deterrence by denial.” (CITE) However, the U.S. lacks much of the leverage that it wields over traditional states. That is to say, without the ability to strike back at an electrical grid, and without the risk of threatening their economic dependence on the U.S., can the U.S. credibly deter cyber-attacks from terrorist groups? Groups such as ISIS flagrantly disregard international norms and display an affinity for utilizing the latest internet technologies.

I agree with Nye that, thus far, criminals and terrorists have opted to use cyber resources for coordination and recruitment, and that at this point ISIS likely lacks the technical expertise and operational capacity to execute a large-scale cyber attack. However, cyber defense has thus far proved rather porous, and the number of targets is ever increasing with the Internet of Things. Moreover, similar to the rise of DIY biological engineering, a burgeoning wave of interest in the internet and computer science has emerged, diffusing knowledge across the globe. While right now one might believe it rather unlikely that ISIS would be able to execute a cyber-attack, if they were to develop the capacity, do people believe that terrorists could be deterred from utilizing cyber-attacks? — Olivia

The Opportunities and Limits of Societal Verification

The Opportunities and Limits of Societal Verification, by Kelsey Hartigan and Corey Hinderstein, makes the case that work done by non-government parties (societal verification) has an important role to play in arms control verification. The article discusses various models for societal verification, its challenges, and how it can be utilized by governments. It concludes that the best way for governments to use societal verification in arms control is through networks of outside experts. These experts would serve as “canaries in the coal mine,” whose findings get the attention of the government officials who have the final say. The article also suggests that public (open-source) information should be used by the government. However, because the article doesn’t focus on outside experts, it is vague in discussing important details of how outside experts can be utilized, how they can be helped by the government, and what the potential pitfalls of utilizing them are.

The focus of the article is pretty broad. It primarily discusses opportunities for arms control verification that have arisen from the popularity of the internet. Namely, a vast amount of data that is important for verification is available online, and this data can be accessed by many people not affiliated with the government. This is relevant for arms control because many non-government weapons experts, in places like academia, can easily find data online, such as photos of sensitive military equipment from traditional and social media. These experts can use this data to discover arms control treaty violations and other important facts.

One example of this that the article mentions is the investigation of North Korean Transporter Erector Launchers (TELs) by the Arms Control Wonk network/blog. This was a case where academics and others outside of the government compared photos of TELs from a North Korean military parade to photos of Chinese TELs from social media to uncover the transfer of TELs from China to North Korea, in violation of sanctions. This transfer was not publicly known until it was discovered by these non-government experts.

This scenario clearly demonstrates that outside experts have important contributions to make to arms control verification. It would thus be interesting to discuss how outside experts can be helped by the government, and what the possible downsides of relying on their work are. However, the article chooses not to focus on these issues and instead discusses seemingly less important topics.

An example of this is the subsection on “Data Management”. The subsection begins with the claim that “it will be essential to develop a framework” for data collection and dissemination in a “consistent, user friendly format”. It only becomes clear what this means when the subsection later suggests “WordPress” (a popular blogging platform) as a possible solution, so it appears to be saying that blogs should be used to publicize research. The rest of this subsection also illustrates another issue I had with the article as a whole: it uses buzzwords seemingly for the sake of using them. Specifically, the subsection adds that “Innovations in cloud computing” and advances in “big data” will help with challenges in societal verification, without discussing those challenges in any depth.

I think it would have been more useful if the article had discussed the relationship between the government and outside experts in greater detail. In particular, there were a few related topics that seem worth exploring but were not discussed.

One of these is the motivation of the outside experts. Although some outside experts are currently motivated to do societal verification, perhaps more research would be done if the government provided incentives for it. These incentives could be monetary, for example a reward to researchers who discover a sanctions violation, but other kinds of incentives might effectively motivate more researchers as well.

Another topic that wasn’t really discussed is the public nature of the discoveries, and the challenges this poses. Because sources are revealed in societal verification, the offending government can prevent similar disclosures in the future. For example, in the TEL case discussed above, North Korea now knows not to display sanctions-violating equipment in photos of military parades, since blog posts containing the pictures revealed the violation. If the violation had been discovered by an intelligence agency using the same sources, North Korea might never have learned how it was discovered. Although the article does discuss techniques like censorship as one way governments can frustrate societal verification, it doesn’t really address this cat-and-mouse aspect of societal verification. — Jonathan