Norms of Cyber Behavior

In his paper, “Deterrence and Dissuasion in Cyberspace,” Joseph Nye covers the challenges of deterrence in cyber warfare. Nye defines deterrence as anything that prevents an action by convincing the actors that the cost of the action outweighs its benefits (Nye 53). Nye argues this broad definition better captures the breadth of options available to states to prevent cyber attacks, and he discusses four such options, “threat of punishment; denial by defense; entanglement and normative taboos” (Nye 46), in his paper. From these four options, Nye argues there is no “one-size-fits-all” (Nye 71) deterrence strategy for cyber attacks, and that traditional understandings of deterrence theory must adapt to respond to emerging technological threats.

The bulk of Nye’s paper is spent explaining four possible types of cyber warfare deterrence: “threat of punishment; denial by defense; entanglement; and normative taboos” (Nye 46). The first two – “threat of punishment” and “denial by defense” – fall into traditional understandings of deterrence (Nye 55). Punishment for a cyber attack could entail responding in kind, with economic sanctions, or with physical force (55). Denial by defense could entail heightened monitoring of threats and stronger cyber security, intended to convince attackers that an attack would be too costly to execute (57). Both these strategies are limited by the fact that the originators of cyber attacks are often anonymous (50-51) and “persistent” (57), making it difficult to respond to all potential cyber attacks effectively.

The second two deterrence strategies, “entanglement” and “normative taboos” (46), fall into a broader model of deterrence. Entanglements of modern states’ interests reduce the likelihood of attack because an attack could be detrimental to the attacker’s state as well (58). Entanglement is a particularly strong deterrent between large, economically dependent states (58). “Normative taboos” (46) reduce the likelihood of attack because an attack damages the prestige and “soft power” of the attacking state (60). Norms against attacks on civilian infrastructure may be particularly strong deterrents (61). Taken together, these four strategies could be used to prevent cyber attacks.

Of all the strategies, I was most interested in the “normative taboo” method of deterrence. Last week, we had an interesting discussion about normative (“humane”/“inhumane”) constraints on bioweapons. To me, creating and enforcing norms for cyberwarfare is even more challenging, because the real-life consequences of virtual actions often feel more remote than those of real-life actions. People are often more willing to pirate a movie than steal a physical copy; kids are often more willing to bully their peers online than in person. And unlike the case of nuclear bombs or deadly pandemics, we haven’t yet seen large-scale destruction from cyber attacks. I am interested to learn more about establishing cyber warfare norms from the other readings – and from all of your exciting replies! — Grace

6 thoughts on “Norms of Cyber Behavior”

  1. The RAND Corporation identifies cyberwarfare as “its own medium with its own rules,” since it is difficult to identify attackers, and since attacks can come from so many different sources — from gifted hacker households to government-organized cyber armies. Though deterrence has worked in the past to prevent nuclear conflict, cyber-deterrence may not work well for a key reason: informational problems.

    In the Cold War nuclear realm, deterrence worked because the two superpowers knew that using their weapons could feasibly lead to the end of the world — both sides clearly had a lot to lose. The limits of cyberwarfare, meanwhile, are not so clear: cyberwarfare targets civilians and businesses in ways conventional and nuclear weapons cannot. It is furthermore not clear that cyberattacks will lead to the end of the world, because the scope of the damage is not tangible. The cyberattacker thus does not perceive that they have as much to lose as the nuclear first-striker does, since the nuclear first-striker risks the end of their own country.

    Beyond this, governments had control over nuclear weapons, while cyber attacks can be initiated both by government agents and civilians. For example, China’s cyber-strategy is not characterized by a unilateral cyber army: for the most part, according to Foreign Policy Magazine, China’s hackers “spring up organically.” Tutorials like “how to become a hacker in a week” pervade China’s equivalent of Google. China’s new wave of “red hackers” is home-grown, not government-monitored; China also maintains a standing cyber army, though. This means that attacks can come from all angles, from innumerable sources.

    Given the discreet nature of attacks (i.e., it is difficult to trace an attacker), it is difficult to deter attackers, since there is a good chance they will get away with the attack.

    The RAND Corporation further identifies a problem with adopting a formal deterrence strategy. Deterrence strategies require the threat of a response. Thus, when an attack has obvious effects, people would expect a response, even if the source is non-obvious. Deterrence strategies thus create a “painful dilemma”: “respond and maybe get it wrong, or refrain and see other deterrence postures lose credibility.”

    Thus, the pervasiveness of informational problems in the cyber realm makes cyber-deterrence untenable: the threat of response may not be known to be destructive enough to make the attacker feel that the cost of attack outweighs the benefits, especially because the source of the attack is often non-obvious.

  2. Grace, I think that you raise interesting points regarding the normative model of deterrence in cyberwarfare. Many of the authors highlighted the current absence of international norms regarding the internet, and suggested that this “anything goes” situation is dangerous for everyone involved. As was mentioned in the articles, the developed norms against the use of nuclear weapons and poisons may serve as models. An obstacle to the development of these norms for cyberwarfare referenced in the readings was the difference between states’ understanding of certain matters, for example the questions of human rights and free expression. I think that these ideological barriers are just as threatening to the idea of deterrence as some of the other points raised.

    In response to Aaron’s point, I don’t think that the challenge of identifying the responsible actor in a cyberattack makes deterrence entirely impossible. There are some cases in which the attacker may be apparent, or in which a party may claim responsibility. Additionally, the numerous calls for increases in our capacity to identify the origin of cyberattacks in the readings may (hopefully) bring about advances that make identifying the origin of attacks more feasible. While I don’t think deterrence should be the only tactic employed in this realm, I think that despite its flaws it can still be an element of a larger, multifaceted strategy.

    I think that these articles potentially understate the danger of a cyberattack. While an assault of this nature may not immediately bring about the level of damage caused by a nuclear weapon, a cyberweapon could result in significant loss of life, for example by attacking vital medical infrastructure or jeopardizing transportation systems. Establishing normative taboos against these types of attacks seems absolutely necessary, just as norms have been established against other types of warfare that target civilians. The important distinction is not relying entirely on taboos and/or deterrence. This is not the only sphere where the limits of deterrence have emerged: there are strong international norms against targeting civilians in acts of terrorism, and yet these incidents continue to be perpetrated. Just as non-state actors tend to be more willing to violate norms against terrorism, they may also be more likely to violate cyberwarfare norms, requiring the adoption of a comprehensive strategy, similar to strategies against terrorism, that includes deterrence but also espionage, defensive infrastructure, and other elements.

  3. Nye talks at length about the differences between nuclear warfare and cyberwarfare, particularly focusing on the strategy of deterrence in both contexts. In nuclear warfare, it’s difficult to defend against an attack, so deterrence by denial (denial of gains from an action) is not very effective. Instead, there is a focus on deterrence as punishment (credible threat of punishment for an action). Cyberwarfare, however, has an attribution problem — it’s difficult to determine the source of an attack. Therefore, there is a low threat of punishment and instead, a focus on deterrence by denial.

    I think the biggest takeaway from the article is that there is no cut-and-dried, blanket, 100% effective means of dealing with cyberattacks. An appropriate response is very much dependent on the “how, who, and what.” This is especially relevant as our technological expertise increases. Nye mentions that with time, better attribution forensics may increase the role of deterrence by punishment, and better defense via AI/ML may increase the role of deterrence by denial.

    I agree with Charlotte’s point about our underestimation of the danger of cyberwarfare. I think that especially with the rapid development of AI/ML in recent years, we will see the increased proliferation and severity of cyberattacks. I believe that the danger lies in the confluence of two trends — the increased reliance of government/commercial/residential infrastructure on internet-based systems (large parts of our society are vulnerable and will suffer immensely from a cyberattack) and the increased power of cyberwarfare and AI to exert extreme damage.

    Even today, we see actors using AI to reduce barriers to enacting cyberattacks. Examples include using automated hacking to expose both existing software vulnerabilities and human vulnerabilities (e.g., speech synthesis for impersonation), and to carry out previously labor-intensive cyberattacks such as spear phishing. AI will eliminate the current tradeoff between the scale and efficiency of cyberattacks. However, we also see ML being used on defense to detect and prevent attacks before they occur. Supervised learning can learn from known threats and generalize to new ones; unsupervised learning can detect suspicious deviations from normal behavior. We see this manifest in endpoint detection and response platforms, which make use of heuristic and ML algorithms to protect against sophisticated, targeted attacks.

  4. I think you bring up a great question about the difficulty of attribution and what that means for foreign policy. I would argue that the best method for attribution is to ask: who has the capability to execute an attack of this complexity? And from that answer: who has the incentive to do so? Knowing adversaries’ cyber capabilities (through leaks, intelligence collection, and known previous attacks), along with clues from a virus’s internals, can serve as a better guide for attributing attacks, and consequently for choosing an appropriate response, than defining a protocol for response. The problem with a protocol is that it shows an actor how to attack you and attribute it to another. For example, if India declares that any message beginning with “attack India at dawn -love, Pakistan” warrants retaliation, then China can attack India and pin it on Pakistan. The problem with the method I argue for is that it depends on answering the question of who has the capability: if the attack is not very complex, the answer is a very long list of states, non-state actors, and possibly individuals. Even with this method of attribution, you may know that a state attacked you, but there may not be enough evidence to support multilateral action. Should we try to establish certain evidentiary requirements that would suffice to support others in responding? Would this be a viable method to deter cyber-attacks?

    A few examples of this argument to think about: Stuxnet was attributed to the US and Israel only when the complexity of the code was analyzed, along with the realization that it targeted Iranian centrifuges. Even the extremely interesting case of how the FBI arrested the creators of the Mirai botnet and the perpetrators of the 2016 Dyn attack (which brought down Netflix, Twitter, and other major sites by attacking critical internet infrastructure) was solved by tracing who was attacked, who had the capability, and clues from the virus to catch the creators.

  5. Based on the readings and all the replies, it is clear that cybersecurity not only has room for improvement, but also blurs the geographic and national distinctions of traditional warfare paradigms. What’s so dangerous about cyber warfare is the gap between an attack’s effects, its identification, and feasible responses. In a way, cyber warfare should be treated like terrorism, as there are some homologous traits, as mentioned previously. How you counter a terrorist versus how you counter a cyberterrorist is a matter of geography. What I find interesting is that transnational or supranational cooperation wasn’t well elaborated. There is a point to be made that information sharing among like-minded states can build upon existing security structures and the capacity to standardize responses to cybersecurity threats. One possibility would be to refuse to condone, and to consistently enforce against (if capable in the future), any cyber warfare measures used by a third party against one’s competitor, no matter the benefits received by such an act. Such a cyber security union could provide society with a safety net at worst or a police force at best. Hopefully this would incentivize either the externalization of threats or the inclusion of competitor states within such a security regime.
    Finally, I think we can all agree that technology is integral. The better we are able to track and identify an attack, the better we are able to prosecute, retaliate, or punish. I am hopeful that AI and quantum computing may open new avenues for electronic safety measures, and that greater interconnectivity will disincentivize state actors’ aggression or salutary neglect.

  6. An important point in need of emphasis is that retaliation as a form of cyber deterrence policy need not be limited to cyber retaliation. The DoD Cyber Strategy paper details cyber retaliation but does not touch on how we might use other forms of retaliation. Nye cites diplomatic, economic, cyber, physical, and nuclear force as potential modes of redress against cyber enemies. These steps elevate the matter of abiding by cyber norms to the appropriate level of gravity and make it much more of a real-world issue.

    There is much talk of norms as a deterrence method in the cyber community, but they seem to be drastically ineffective. The inefficacy of such measures is well exemplified in a 2015 cyberattack that targeted the Ukrainian power grid. This attack, likely waged by a Russian-sponsored organization, was waged “just months after the GGE released its report putting critical infrastructure off limits to attackers” (Hampson-Sulmeyer). Establishing norms evidently does little to deter aggressors at the state level from explicitly violating normative protocol. Such policies, of course, have even less of an effect on non-state actors. While normative policies will ultimately do little to deter civilian hackers, they may have a real effect on foreign governments with clashing agendas if these policies are bolstered by explicit threats. If the U.S. were to devise a specific set of actions to be taken against countries that violate cybersecurity norms, then the norms might actually be followed.

    As is the case with non-cyber retaliation, Nye’s proposition of entanglement is largely missing from the DoD Cyber Defense document. Nye believes it is possible to deter hostile forces from initiating cyberattacks against the United States, but this measure seems largely ineffectual. While the deterrent of entanglement may work to some extent at the nation level, such a strategy (like many others) does not work in the case of private, nongovernmental hackers. I suspect it is also missing from the DoD paper because such relationships take years to cultivate. This strategy may have some success up to the point where the aggressor is seeking to start an all-out war.
