Hackers, Consumers, or Regulators: Who’s to Prevent Cyberattack?

The three pieces for Tuesday’s class present varied strategies for preventing future cyberattacks on U.S. citizens, industry, and infrastructure. O’Harrow Jr. (2012) describes a virtual training site that gives hackers the opportunity to practice defending against cyberattacks. This particular “cyber range,” operating out of New Jersey and founded by upcoming guest speaker Ed Skoudis, is one of hundreds of sites across the country used to train government personnel to identify potential cybersecurity breaches and efficiently combat cyberattacks. Like the virtual-reality environments used in Tamara’s work to explore various nuclear verification procedures, these simulation exercises may be particularly helpful in identifying and developing better safeguards against potential cyberattacks (e.g., industry protocols, personal device security measures). However, if these simulations are primarily used to train personnel to respond to cyberattacks after the fact, they are not an effective mechanism for preventing cyberattacks in the first place.

Skoudis (RSA Conference 2017) suggests prevention strategies that consumers and leaders of industry may adopt in order to protect their devices (i.e., those on the “Internet of Things”) from crypto-ransomware attacks. Unlike the aforementioned expert-driven approach to combating cyberattacks, Skoudis demonstrates a grassroots approach that educates and compels the public to engage in cyberattack prevention. While his talk does a nice job of explaining the intersection of crypto-ransomware attacks and Internet-connected devices, the specific suggestions he provides for safeguarding personal devices and networks are technical and not accessible to less technology-savvy consumers (such as myself). Just as public ignorance about non-proliferation treaties will likely negate the role of the public in treaty verification, the complex and quickly evolving technicalities associated with cybersecurity measures may make it difficult for the general public to meaningfully join in cyberattack prevention efforts.

Further, the KrebsOnSecurity piece (2016) highlights that it may be impossible for consumers to change the factory-default passwords hardcoded into the firmware of their personal devices. The piece suggests that cheap, mass-produced devices (e.g., those made by XiongMai Technologies) are the most vulnerable to Internet of Things attacks (e.g., by the Mirai malware) and will continue to pose a risk to other consumers, industries, and infrastructure unless they are unplugged from the Internet on a wide scale. It recommends that some sort of industry security association be developed to publish standards and conduct audits of technology companies in order to prevent the proliferation of devices that are extremely susceptible to cyberattack. This prevention approach, if effective, would be the most proactive (relative to the two previously mentioned strategies) in stopping vulnerable devices from reaching the hands of consumers. However, it is extremely difficult to imagine how this sort of regulatory body would operate (e.g., nationally or internationally) and whether any agency would have enough leverage to overcome opposition to increased industry regulation.

Ultimately, these three pieces discuss cyberattack prevention measures that require the efforts of three vastly different actors: trained government personnel, the general public, and an industry or government regulatory body. Whether any of these strategies is particularly feasible and/or effective (or at least more so than the others) deserves further attention. — Elisa

26 thoughts on “Hackers, Consumers, or Regulators: Who’s to Prevent Cyberattack?”

  1. While some of these strategies do seem more feasible than others, in order to successfully protect against cyberattacks, all actors do have to make an effort. As Elisa points out, simulations can only go so far as to train cybersecurity personnel to respond retroactively. In order to prevent the possibility of a cyberattack, we must maintain a secure infrastructure, just like we try to maintain healthy human bodies to prevent infection. This, of course, requires action on all parts. Industry must create secure devices; Elisa asks about the organization of a regulatory agency, and I think we can look to the government to regulate and set standards accordingly, with organizations like the FCC or NIST. As for consumers, it’s cute to think that people care enough about security to proactively safeguard their personal devices, but in reality, consumers don’t have the desire (or technical know-how) to do so. As a result, I do think the most feasible strategies are to count on industry and federal regulatory agencies to create a secure network in order to proactively prevent cyberattacks.

  2. Elisa’s comments on the cyberattack prevention measures discussed by O’Harrow, Skoudis, and the KrebsOnSecurity piece culminate in questioning the effectiveness of each. As someone ill-informed on many topics relating to technology and, more specifically, cybersecurity, I don’t think I would be able to offer any insight into the feasibility or effectiveness of the prevention measures introduced in the pieces, but I do think it is helpful to bring various policy implications into the conversation.

    The articles made it very clear that the vulnerabilities in our computer networks and infrastructure systems are a huge threat to our national security. While they detailed different attempts to combat these cyberattacks, they did not necessarily cover the legal framework surrounding topics of this sort. For instance, how do policymakers go about creating legislation regarding cyberspace when seemingly so little is known about it in the first place? O’Harrow noted that the reason our networks are so vulnerable to attacks is that no one understands cyberspace well enough to ensure security. If experts on the matter do not even comprehend these complexities, I think we might be wary of entrusting the legal aspects of cybersecurity to policymakers. At the same time, as ground efforts to combat cyberattacks continue, it seems only practical that legislation would proceed accordingly.

  3. In answer to Elisa’s plea for further attention, I think that Serena has the right idea. We should “count on industry and federal regulatory agencies to create a secure network in order to proactively prevent cyberattacks.” This seems to be the correct answer because it (1) addresses cybersecurity issues before they arise and (2) does not rely on an undependable grassroots movement to become popular.

    As we have seen from non-proliferation awareness, the general public should not be trusted to educate themselves on the dangers of cyberattacks. Too many people are ignorant of the dangers of phishing and other go-to hacker attacks. And that does not seem to be the issue we most need to address, because we are most worried about attacks on our infrastructure or economic system.

    Instead, regulations need to be increased on how corporations conduct cyber security. Encryption should be the norm for these large companies, and secure networks should be built to prevent intrusion and manipulation. However, one of the readings mentioned that increased cyber regulation is infeasible because it is too expensive to re-configure corporate computer systems, and therefore it is unattractive to those entities. To counteract this hesitation, I think the government should make it clear that the nation’s well-being is at stake when it comes to the cyber-security of infrastructure corporations.

    I think essential companies need to be obligated to maintain cyber security, because the general public relies on them for continued service. This private approach would certainly prevent catastrophes, like the oft-invoked nationwide blackout of power companies.

  4. There are two realms of cyber-security to be considered. As Elisa points out, the first one is governmental cyber-security, whose role in a nutshell is to ensure that no digital information that could compromise national security is exfiltrated and that the systems of communication and intelligence do not get compromised. As both Olivia and Elisa point out, the second one is the corporate industry realm and the privately owned devices it produces, which need to be safeguarded to protect the individual’s privacy and property (e.g., bank accounts) and to ensure a secure way of living in the digital age for all. The big difference between those two realms is the extent to which it is possible to defend against potent cyber attacks.

    For the former, frequently revised regulation and constantly improving intelligence systems and cyber technologies are not only known ways to improve governmental cyber-security; they are also something the government can definitely implement because it has full control over them. For the latter, on the contrary, this is simply not the case.

    Private industry is hard to regulate to begin with because companies resist regulations that force them to incur costs to raise security standards. Even more importantly, device vulnerabilities are hard to identify even when an organization is striving to produce the most secure device. Thus, there are always windows of opportunity for hackers to identify those vulnerabilities and gain a lot at the expense of device users. In other words, even if the government successfully regulates the industry, this is not enough, because it is extremely challenging to develop a fully secure product.

    Evidence of this is the October 21, 2016 IoT DDoS attacks that caused widespread disruption of legitimate internet activity in the US. These attacks were made possible by the large number of unsecured personal internet-connected digital devices, such as home routers and surveillance cameras (specific cause: use of default passwords on these devices). This situation could not have been directly averted by either the industry or the government, but only by the users of the devices themselves, who typically think ‘Why bother?’ when it comes to securing their devices.
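
    To make the ‘Why bother?’ problem concrete, here is a minimal sketch (my own illustration, not something from the readings) of the kind of check a device owner could run on their own home network to see which devices still expose telnet, the legacy service whose factory-default credentials Mirai abused. The 192.168.1.x address range is an assumption about a typical home router setup.

```python
# Sketch: list devices on a home LAN that still answer on telnet (port 23).
# Exposed telnet plus a factory-default password is the combination Mirai
# exploited; the 192.168.1.x range is an assumed typical home network.
import socket

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for last_octet in range(1, 255):  # serial scan; slow but simple
        host = f"192.168.1.{last_octet}"
        if port_open(host, 23):
            print(f"{host} still answers on telnet; check or disable it")
```

    Anything a check like this flags is exactly the class of device the Krebs piece worries about: reachable, running a legacy service, and very possibly still using the password it shipped with.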

    Exploitation of unsecured digital devices on the Internet by malicious code can seriously disrupt daily life and economic activity in America. Although governmental cyber-security can be successfully safeguarded, it is highly doubtful the same can be said for the security of the digital networks of the devices used by the general public. Therefore, unless the government finds a way to affect the behavior of the users of internet-connected devices, we should expect many more cyber attacks leading to social and economic disruptions such as the one on October 21, 2016.

  5. It’s no surprise that providing adequate cyber-security is hard. In her comments, Elisa offers criticisms of three mechanisms to defend against cyber-attacks. The simple response to these criticisms is that, yes, the arena of cyber-security is still complex and changing. O’Harrow (2012) even mentions, “Networks in the United States will remain vulnerable to attacks for the foreseeable future because no one understands cyberspace well enough to ensure security.” We cannot expect each of these methods to provide perfect cyber-defense alone, but they do offer interesting possibilities for comprehensive defense in the future.

    The development of each of these defense technologies acts as a catalyst for achieving effective cyber-security. It is important that new approaches to cyber-security emerge, because new technology means new research and more government funding. Though we can debate the effectiveness of Skoudis’s CyberCity, for example, it nevertheless is promoting research in this arena. The Department of Homeland Security and the National Science Foundation help advance these technologies by funding ambitious cyber-research.

    Focusing on the shortcomings of cyber-security at present does a disservice to the development of technology vital to national security.

  6. A few of the above comments mentioned that the difficulty with this form of defence and defence training is that it can only be done in response to an attack that has already occurred. This, as far as I can tell from the articles, seems to be a fair assessment, and it implies that defence can only improve in response to an attempted attack, as we are unable to foresee how a potential attacker would proceed given the complexity of cyberspace. However, an interesting thing to think about is that the U.S. doesn’t have to wait for foreign attackers in order to improve its defence systems and learn, when it can instead attack itself through the training mechanisms mentioned in the O’Harrow piece. In this sense, we can perhaps consider this approach as being somewhat proactive.

    We are told that “attackers hold a huge advantage. They can choose the time, place and method of strikes. Defenders almost always have to settle for reacting, making fixes after the damage has been done.” But we are also told that, in the case of CyberCity, “the military will practice attacking and defending the computers and networks that run the theoretical town”, not merely defending. Skoudis remarks that “the problem is the bad guys are getting better much faster than we are.” One potential area in which the U.S. can go a long way toward protecting itself, then, is training its “cyberwarriors” to devise new potential methods of attack, and thereby stay ahead of the “bad guys.” In this way the defence side of the system can learn how to defend against a new form of attack with no risk, effectively learning how to defend against a potential future attack before it happens. This is likely something the U.S. does already, though it isn’t stressed in the article, and it seems to me that this is an important defence strategy to take note of.

  7. As others have mentioned in the comments above, cybersecurity is an undefined field because of the lack of understanding around the technology. Just as policy makers and regulators cannot easily keep up with CRISPR innovation and its fast-paced growth in applications, it is difficult for them to fully understand the implications of emerging cyber technologies. Cyber attacks only require technical expertise and intent, and do not require the sizeable capital expenditure and physical delivery that biological weapons necessitate. Furthermore, the near digital-only footprint makes it much more difficult to apprehend perpetrators, especially if they reside in different jurisdictions. A seeming lack of enforcement response, a poorly understood technology, and easy independent action create a near-infinite range of possibilities of attack, as discussed by O’Harrow (2012). I agree with Elisa and Serena that a government regulatory body should be established to begin preventative preparation in addition to the reactionary preparation in the cyber ranges. Similar to the EPA and the Clean Power Plan, the regulatory body could set best-available-control-technology standards for new technologies (such as IoT devices) and begin forcing a transition for all users to reasonably available technology without significant burden.

  8. I echo Yannis’s statement that there are two disparate forms of cyberattack to be addressed: the first attacking national or governmental security, and the second attacking personal security. In his comment, Yannis argues that the first form of cyberattack is best prevented by the government. This can be done through regulation, intelligence, and cyber defenses.

    I agree that the entity primarily responsible for national cyber defense should be the national government. Furthermore, it should be the government’s responsibility to ensure that our most important resources – such as water and electricity – are fitted with the best possible cyber defenses. The private companies that may be responsible for the distribution of such resources must work hand in hand with the government and our best cybersecurity resources to build adequate defenses.

    With that said, I do not think that the government is the only entity that can work to prevent national cyberattacks. In her blog post, Elisa notes that hackers can only be trained to combat cyberattacks reactively. While this is a fair point, I believe that hackers may be the best-equipped group to detect cyberattacks early on. Furthermore, the best hackers and cybersecurity people often go private, rather than joining the government. Thus, I think it is worthwhile for the government to create some sort of grant system, where hackers and cybersecurity experts have a financial incentive to search for, and fight against, cyberattacks that the government may miss. In so doing, there is an additional layer of defense (one that resembles the societal verification of arms control) to ensure further protection and intelligence.

    I believe such a grant system could also be in place to protect consumers against the type of cyberattack that targets personal security. However, since this type of security is needed in the private industry, I believe that the producers should be funding these grants, rather than the government. (This idea could be controversial, however, as this results in an enormous disadvantage to smaller and younger companies that are the most likely to have holes in their security and also the least likely to be able to afford paying hackers these grants – but that is a conversation for another blog post).

    Finally, on the topic of private-security cyberattacks, Elisa discusses the infeasibility of civilians learning enough about information security to adequately defend themselves. While I agree that this is unlikely, I do think that the consumer has a responsibility to become more informed about the risks they assume with each product. As Yannis stated, many consumers take the ‘why bother?’ approach when it comes to private security. Often people have this approach because they do not fully understand the risks in the fine print. Have you ever downloaded an app and selected “I agree” to the terms without actually reading the contract you are agreeing to? Such practices are the source of many privacy violations – the consumer is ignorant of the fact that they have agreed to lose their privacy from the beginning. Thus, I think there should be regulatory pressure on the producer to make the “fine print” more clear and accessible to the consumer, and added pressure on the consumer to read the fine print before blindly agreeing to sacrifice some of their rights.

  9. As seen in O’Harrow’s article in the Washington Post, cyber-defense is reactive. Given the proliferation, flexibility, and unpredictability of cyber-attack, it is near impossible to prevent cyber-attack in its entirety. Much like trying to plug a leak in a dam, once one leak (or attack) is stopped, another appears. Cyber-defense, whether in a military or civilian application, must then be focused on one of the real dangers of cyber-attack: its ability to be applied on an overwhelming scale.

    Cyber-attack poses its greatest threat through its ability to incapacitate an entire infrastructure. In terms of national security, it’s relatively inconsequential if a thousand people’s bank accounts or emails become compromised by cyber-attack. However, if this is scaled to the millions, or if the theft of personal information leads to the compromise of even more systems, the cyber-attack becomes problematic. Isolated networks, such as the national grid or military intelligence networks, can be individually protected due to their scale and unique susceptibility to targeting through cyber-attack. However, more open networks such as the internet are particularly vulnerable to these large-scale cyber attacks. Therefore, although anticipating and preparing for future attacks – as suggested by the article and much of the discussion on this blog post – might help prevent certain attacks from occurring, it does not present a holistic solution to the problem. And, as brought up by previous posters, individuals cannot be expected to have the know-how, or even the concern, to protect themselves against a slew of different cyber-attacks. However, stemming the spread of a cyber-attack in its early stages might be a more effective solution to the cyber defense dilemma. Perhaps the inevitability of cyber-attack must be accepted, but technologies and safeguards can be put in place to track and stunt the sources of cyber-attacks before they can evolve into their final form. In this way, these kinds of spreading cyber-attacks could possibly be modeled like an infectious disease. If authorities are proactive and able to take preventative measures akin to “vaccination” or “quarantine”, the threats can be subdued.

  10. To shift the conversation a bit, I would like to bring up an arena of government regulation that was touched on, not explicitly, but “between the lines” in this week’s readings. The KrebsOnSecurity article mentioned that the source code for the Mirai malware was released, “effectively allowing anyone to build their own attack army using Mirai.” So, where do hackers post dangerous source code, and what does the US government do to combat this? This is relevant to our course, as this is a case in which the proliferation of information and technology can be dangerous, and it is relevant to the discussion in this thread about government regulation, as measures can be taken to mitigate the proliferation of dangerous source code and other information that would be useful for future hackers.

    In a course I took on radical innovations, we learned that there are markets on the “deep web” (the portion of the Internet not indexed by search engines, commonly estimated to make up over 90% of online content; the illegal marketplaces sit in the smaller, anonymized “dark web” within it) in which hackers can post source code and buy and sell “Day 1 Flaws” found in popular electronic devices. A “Day 1 Flaw” is a flaw in the coding of an electronic product that leaves it vulnerable to cyber attack. While there is no way to prevent a hacker from posting in the first place, the US government has practice policing this space, as it regularly takes down drug markets on the dark web. Perhaps government hackers could at least learn to find and take down postings such as the ones described above to slow the spread of dangerous information between hackers. So, one of the functions of the agency that people are calling for in the comments above could be to monitor and thwart black-market behaviors of hackers.

    This may even be where the real danger lies, with individual actors as opposed to nation states. The article in the New Yorker stated that Chinese officials have explained “a cyber war attack would do as much economic harm to us as you.” For this reason, I think increased monitoring of individual actors on hacker black markets would be a valuable addition to government strategy.

  11. Just to bring another element into this conversation, Xiongmai Technologies immediately recalled 10,000 of its faulty devices following the October 2016 DDoS attack. Apparently, it had patched the flaws in its products in September 2015 so that new devices ask the customer to change the default password when used for the first time, but products running older versions of the firmware are still vulnerable. Although this certainly does not absolve Xiongmai of blame, the average consumer could also be better informed of the potential security risks involved with using their devices. This is particularly pertinent since most technology users do not go to great lengths to protect their online security, with easy-to-guess passwords, etc. Public awareness campaigns by governments, private individuals, and corporations could play an important role in this regard.
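
    As a rough illustration of what a fix like the September 2015 patch presumably does, here is a conceptual sketch (my own, not Xiongmai’s actual firmware code; the file path, default credential, and function names are all hypothetical): the boot logic checks a first-boot flag and refuses to start network services until the shipped password has been replaced.

```python
# Conceptual sketch of first-boot logic that forces a default-password change.
# This illustrates the idea only; the paths, default credential, and helpers
# are hypothetical, not real vendor firmware.
import json
import os

STATE_FILE = "/etc/device_state.json"   # hypothetical persistent settings file
DEFAULT_PASSWORD = "xc3511"             # a widely reported IoT factory default

def load_state() -> dict:
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {"password": DEFAULT_PASSWORD, "password_changed": False}

def save_state(state: dict) -> None:
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

def boot() -> None:
    state = load_state()
    if not state["password_changed"]:
        # Refuse to expose telnet/web services until the owner sets a new password.
        new_password = input("First boot: choose a new admin password: ").strip()
        if not new_password or new_password == DEFAULT_PASSWORD:
            raise SystemExit("A non-default password is required before services start.")
        state["password"] = new_password
        state["password_changed"] = True
        save_state(state)
    print("Starting network services with a non-default password...")

if __name__ == "__main__":
    boot()
```

    The crux of the KrebsOnSecurity report is that on the older firmware the telnet credential sits outside any such check, so changing the password in the web interface never touches it.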

  12. To follow up on Rebeca’s comment, while such a strategy of going after hackers on the deep web may be helpful if implemented effectively, it would be tough to do so – after all, it’s basically akin to going after the hackers on their home turf. Instead, I believe that there is far more utility in using information from the deep web, such as vulnerabilities that have been identified in different programs or hardware, to prepare better cyber-defense mechanisms that can mitigate the effectiveness of attacks. Essentially, using the hackers’ information sharing techniques against them.

    In fact, public-private information sharing is an area in cyber-security that we’ve discussed at length in my cyber-security seminar – that is, the implementation of systems through which the government can work with private companies, especially those in industries that have critical national security implications, to pool cyber-security resources and collectively find the best ways to prevent against threats. Over the past few years, various sectors of so-called “critical infrastructure,” such as financial services, health, or aviation, have organized ISACs (Information Sharing and Analysis Centers) with the purpose of facilitating the sharing of information about hacks and working together to improve cyber-security across the industry. If relevant national security institutions can get more involved, though – such as using information that they collect through surveillance or from scouring the deep web – then it can allow private companies to get an even better sense about the threats that they face and implement systems to defend against them.

  13. I agree with Alex on the reactive nature of cybersecurity. I will add that I believe cybersecurity must be reactive; otherwise, government agencies are simply infringing upon citizens’ right to privacy. If cybersecurity becomes too proactive, as in the Hersh reading for Thursday, the government takes on too intrusive a role. For example, the Clinton administration (under pressure from the N.S.A.) said that it “would permit encryption-equipped computers to be exported only if their American manufacturers agreed to install a government-approved chip” (the Clipper Chip) in each (Hersh, 15). The privacy community then discovered that this would enable law-enforcement officials to access data on those computers. This is a clear violation of one’s reasonable expectation of privacy, and it suggests that cybersecurity more or less has to be reactive rather than proactive.

    This is why I side with Skoudis and O’Harrow in this realm. As Elisa mentioned, Skoudis discusses prevention strategies that consumers and leaders of industry can use against cyberattacks. As she also insightfully mentions, these may not be accessible to the average consumer because the suggestions are highly technical. But in this age, as our generation increasingly becomes aware of the “Internet of Things”, I think it is about time that we all begin to get a grasp on secure and insecure networks, especially as we know this is the way forward. In addition, CyberCity and similar simulations offer innovative methods of fighting cyberattacks, which we will have to settle for, for now.

    I think that the only way to really protect industries and personal computers against cyberattacks is mandated encryption, i.e., the government would compel both corporations and individuals to install the most up-to-date protection tools. This is clearly an unpopular opinion in the security sector of the government, because it makes monitoring “bad hombres”, such as international terrorist organizations, much harder. However, these groups clearly use Tor and the deep web already, and as Arquilla states in the Hersh article, “today drug lords still enjoy secure internet and web communications, as do many in terror networks, while most Americans don’t”. I believe that we should bring ourselves up to a level playing field and find more innovative ways to track international networks in response. Krebs, O’Harrow, and Skoudis’s methods truly cannot protect us in the same way that mandatory encryption can.

  14. While I agree with the general vulnerability that others have noted regarding the public use of devices, I also think that the problem is addressable and should not be left solely to the government to regulate private corporations. The government found ways to educate schools about the threat of nuclear attack; the same kind of informational campaign that once taught students to take cover under their desks could be used to teach the public about cybersecurity. If there is a real perceived threat to the public, it will raise interest, which will then become a political topic in the following election cycle, draw attention from the media, and result in more policy on increasing cybersecurity in the government, military, and public domain. Of course, the problem with this approach is that it takes time and is not a guarantee. I agree with Amanda that the only way to enforce security is a government mandate for encryption of all products sold in the U.S. While this makes the job of monitoring agencies more difficult, there are also massive implications in terms of costs.

    As Mr. Skoudis mentioned today, the Ukrainians were able to restore their power grid fairly quickly because they are used to doing so. Cyber security should continue to expand, and training should become more rigorous, for us to become the quick-acting response force that can immediately correct any intrusions in our defenses.

  15. Elisa succinctly summarizes the main points of these three articles and how all three attempt to address cyber security on different fronts, and she questions the feasibility of each cybersecurity strategy. One thing that I think is applicable to all the articles, and which Elisa touches upon, is the importance of communication and of allowing the general public to understand the extent of the cybersecurity risks at hand. As Amanda mentioned in her comment, the bad guys are already aware of the cyber domain and have taken measures to protect themselves. However, ordinary Americans are unaware, or do not view cybersecurity as enough of a risk to take the extra precautions to fully cover themselves in the cyber world.

    One thing that Skoudis mentioned today that I thought was interesting, and that could address the problem of communicating cybersecurity threats, is the idea of a risk vernacular. Different people, industries, and companies all have different risk vernaculars, or phrases they respond to as actual threats and concerns that warrant active responses. In finance, for example, the risk vernacular is compliance and regulation; in utility companies, it is often safety; for toy manufacturers, it is children’s safety. The example Skoudis gave of hacking the talking toys and bringing the concern to the toy manufacturer was telling: the manufacturer did not respond to the warning that the toys could be hacked to speak remotely whenever the hacker wanted, but it did react when told that a hacker could trigger the toy so many times in a short period that it would heat up and potentially harm the child. While hacking a toy is a major cybersecurity threat, there is only a response to the threat when the risk is adequately conveyed, in the vernacular that the manufacturer cares about – child’s safety. In the future, similar to nuclear deterrence, the cyber domain needs to become accessible to people in order for them to understand the risks at hand. Grassroots movements, such as the one mentioned by Skoudis, could start to build up cyber threat prevention and ingrain it into our daily lives.

  16. The third reading, “Hacked cameras, DVRs powered today’s massive internet outage,” discusses the “global cleanup” that needs to occur to remove the plethora of vulnerable devices present in cyberspace. I believe that this issue will be one of the trickiest to tackle moving forward, as it relies on the compliance of a huge, fragmented industry – as well as the know-how of everyday consumers, who likely lack the incentive or time to secure devices that could be protected simply by changing default usernames and passwords.

    The first reading, “CyberCity allows government hackers to train for attacks,” is particularly compelling. In it, the virtual world designed by Ed Skoudis allows for simulations that train government hackers, in the hopes that they can “hold their own until long-term solutions can be found.” The issue, of course, is that the threats are growing rapidly in scale. For me, the reading’s largest impact was highlighting the broad range of daily risks, such as the final anecdote that closes the article. In the simulated scenario, a senior hospital doctor signs into the hospital network over an open WiFi connection. Electronic medical records become accessible to the attackers – highlighting how insecure daily activities are.

  17. I agree with many of the above comments as to the reactive nature of cyber security, as well as its high potential for being overly intrusive into our everyday lives (especially as it relates to our right to privacy). One factor that I don’t believe has been discussed as much, however, is how cyber security can be preventative without infringing on our rights, in particular the training of cyber security experts and making sure we have a workforce that is well equipped to research, protect against, and react to cyber security threats. In his guest lecture today, Ed Skoudis briefly discussed the funneling of cyber security experts from the military and intelligence complexes to the private sector (where they are paid a much higher salary). I agree with his point that if the government is able to leverage that funnel, we may be able to create a much more literate workforce in terms of cyber security, which in turn could increase the amount of research and preventative work that is done in this area. While I am a neophyte in the realm of cybersecurity, this offers a manner through which the government can take the lead in promoting and strengthening our cyber security (again making it a prime candidate to undertake such an effort), because it has the funds and mandate to direct K-12 educational programs, research grants (through agencies like the NSF), defense research, and many other factors that could prove influential.

  18. The debate Elisa brings up over the role of industry, regulators, and consumers in cyber security is an interesting one because clearly all three should be involved in some capacity in securing this new rapidly evolving field. However, I believe the nature of cyber-space makes it difficult for any of these three actors to adequately secure technology.

    First, the sheer rate at which new technology is being developed makes it near impossible for regulators to set guidelines to help ensure security. Instead, regulators are resigned to a game of perpetual catch-up as they constantly race to keep pace with the rapid innovation and change that is inherent to the hi-tech industry. Additionally, most regulation is likely suggestive at best, as the distributed nature of the internet and of software development in general makes it difficult to create a centralized regulatory body with real authority.

    Second, I believe most industry is not in a position to adequately deal with cyber-threats. Sure, large, hi-tech corporations like Google, Microsoft, and Apple probably protect your data pretty well. But exploits against even these companies’ products are constantly being revealed. And if even the most advanced hi-tech companies are vulnerable, you can only imagine all of the holes and security flaws in software by small firms. Speaking from personal experience: in high school, I worked for a small local software firm and was one day tasked with testing a website they had built. With a few malformed text entries on the site (it was SQL injection, for those interested; the general shape of the bug is sketched below), I was able to steal all sorts of personal information from users, including social security numbers, without any credentials and with no access to the underlying code base. And that was just me, a dumb high schooler! The problem is that many times, all it takes is one mistake to compromise an entire system, and while large companies may have the time and resources to invest in securing these flaws, small firms don’t and won’t.
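
    For readers who have not seen the pattern, here is a minimal reconstruction (not the actual site’s code; the table, columns, and sample data are invented): pasting user input into the SQL string lets a “malformed text entry” rewrite the query, while a parameterized query treats the same input as plain data.

```python
# Minimal SQL-injection demo against a throwaway in-memory database.
# Table name, columns, and sample data are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, ssn TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '123-45-6789'), ('bob', '987-65-4321')")

def lookup_vulnerable(username: str):
    # BAD: user input is spliced directly into the SQL string.
    query = f"SELECT username, ssn FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def lookup_safe(username: str):
    # GOOD: a parameterized query treats the input purely as data.
    return conn.execute(
        "SELECT username, ssn FROM users WHERE username = ?", (username,)
    ).fetchall()

malformed = "nobody' OR '1'='1"
print(lookup_vulnerable(malformed))  # returns every row, SSNs included
print(lookup_safe(malformed))        # returns nothing: no user has that literal name
```

    The parameterized version is the standard fix and costs essentially nothing to write, which underscores the point above: small firms fail here through inattention and lack of review, not because defenses are expensive.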

    Ultimately, this hurts the consumer, the final piece of the puzzle. The problem here is that there is only so much the consumer can do to protect themselves. Consumers aren’t cyber security experts, but even if they were, they are still at the whims of the technology they interact with. Sure, we can all use some common sense, but we must ultimately rely on some level of trust that our personal information will be protected. Unfortunately, it just takes one site with one flaw to compromise all of that.

    I don’t know what the solution is. Personally, I have a set of passwords that I choose from based on my trust of a particular website, and I use discretion when asked for more personal information. But we will not always have the luxury of discretion as our lives become more and more dominated by technology. I hate to be a pessimist, but I think cyber vulnerabilities are only going to get worse.

  19. To go off of what Deirdre said, though, there’s a real tradeoff between increasing government access to cyber-security efforts/technology and the sort of personal liberties and freedoms that we have. During the lunch Skoudis had with some of the students doing cyber-security for their final project, he kept talking about how the government now has authorization, due to some court case, to hack into criminal suspects’ homes and home computers if they are under suspicion of some crime. So if the government takes the lead in promoting cyber security, how do they do it?

    What I’m thinking about kind of ties back into Mary DeGolian’s comment – the internet of things is so vast nowadays, with nuances in the different types of items that are now connected to the internet (as mentioned both in the readings and in the Skoudis lecture earlier today). There seems to be a tradeoff between Intrusion Detection Systems and Intrusion Prevention Systems in cyber-security, where it’s possible for a system to just watch what’s going on in the flow of information…but as soon as a threat is detected, it would have to slow down its existing detection capabilities or shut down the entire system just to deal with the lone threat (a rough sketch of this tradeoff appears at the end of this comment). I kind of mirror Skoudis’ pessimism regarding cyber-security in the United States and the government’s role in promoting it; given the trend of moving towards IoT, it’s really hard for the government to establish some basic standard of cyber-security across different sorts of institutions.

    The cyber-security needed to protect a financial institution is different from that needed to protect a toy. Moreover, the IPS systems for the two will be very different as well. How will the government allocate the resources to parse through these differences? To regulate IPS and IDS systems for these institutions? It may not sound like a big deal, but consider this: you and I would probably agree that the free functioning of big banks is more important than a toy. So we would want more cyber-security for a bank than for a toy. But given the size and importance of a bank, a false positive in its cyber-security systems has far graver implications than a false positive for a toy. So the greater the institution, the greater the need for cyber-security systems, but the greater the risk as well. Cyber-security seems to be kind of a lose-lose situation.
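
    To make the tradeoff concrete, here is a toy sketch (the rule, traffic, and function names are entirely hypothetical, not from any real IDS/IPS product) of the same detection rule run in monitor mode, which only logs, versus inline prevention mode, which drops traffic; a false positive in the second mode is what blocks a legitimate bank transaction.

```python
# Toy contrast between an IDS (detect and log) and an IPS (detect and block).
# The "signature" and the traffic records are invented purely for illustration.
from dataclasses import dataclass

@dataclass
class Packet:
    source: str
    payload: str

def looks_malicious(pkt: Packet) -> bool:
    # A crude signature; real systems use many such rules and still
    # produce false positives on unusual but legitimate traffic.
    return "DROP TABLE" in pkt.payload.upper()

def ids_monitor(traffic):
    """Detection only: everything is delivered, suspicious packets are logged."""
    for pkt in traffic:
        if looks_malicious(pkt):
            print(f"[IDS] alert: suspicious packet from {pkt.source}")
        yield pkt  # traffic always flows

def ips_inline(traffic):
    """Prevention: suspicious packets are dropped, including false positives."""
    for pkt in traffic:
        if looks_malicious(pkt):
            print(f"[IPS] blocked packet from {pkt.source}")
            continue  # a false positive here stops a legitimate transaction
        yield pkt

if __name__ == "__main__":
    traffic = [
        Packet("attacker", "'; DROP TABLE accounts; --"),
        Packet("bank-customer", "memo: drop table reservation for Friday"),  # legitimate
    ]
    print("IDS delivered", len(list(ids_monitor(traffic))), "of 2 packets")
    print("IPS delivered", len(list(ips_inline(traffic))), "of 2 packets")
```

    For a bank, the second number is the problem: the price of blocking in real time is that every false positive becomes an outage you inflicted on yourself, which is exactly why the stakes rise with the size of the institution.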

  20. I appreciate the discussion on how best to regulate the growing “Internet of Things.” I am often inclined to believe that government regulation, even to a small degree, can be beneficial for clarifying standards or establishing a level of protection. However, I don’t think that government regulation would be productive at this stage. Skoudis explained that companies struggle to realize how their IT (information technology) and OT (operational technology) systems are connected, and he outlined why an “air gap” has become an over-utilized and increasingly misunderstood concept when dealing with IT systems. There seems to be a dearth of understanding concerning the growing inter-connectivity and its implications for cyber-security, and this gap needs to be addressed before regulations – from the private or public sector – can be reasonably enforced. Furthermore, there are issues, such as the one Mary Helen mentioned concerning the “global cleanup” of vulnerable devices, that are so broad that they appear beyond the scope of regulatory capacity.

    I think Skoudis’s work is very interesting, and lends one idea for industry self-regulation. Companies often do not understand the security implications of the fact that their hardware is connected to the Internet, and so do not design and test their hardware for security flaws that could arise if a hacker or hackers attempted to wreak havoc. If various industries agreed on a standard for testing, they could develop a certification scheme for products that have been tested for security holes. If every piece of technology were subject to rigorous testing for vulnerabilities by third parties that were licensed to perform assessments, businesses would start to think proactively about hacking threats.

    I also like Michael’s suggestions, as I think government can play an important role by facilitating greater awareness of the security implications of their consumer goods. As our readings have demonstrated, hackers only need a small entrance, e.g. through HVAC systems, to work their way into larger systems and wreak havoc. Skoudis demonstrated in his Cyber City that a doctor using vulnerable wifi in the town’s coffee shop can open hackers’ access to the hospital’s records. There is a certain burden on producers to incorporate cyber-security in the design process, but there is also an important role for consumers in understanding and mitigating risks.

  21. Elisa, you give a very interesting summary about where responsibility could lie in terms of the cybersecurity. One point that could be clarified would be your definition of cybersecurity — who is being protected and from what?
    In the first example with trained army and company personnel, the protected party is either a city or a company, with the aggressor being an expert with directed malicious intent. In the case of such an attack by a cyberterrorist or trained corporate saboteur, actions taken by the populace or by industry would have no effect. Experts would be the only option.
    In the second example, the focus is on consumers and their everyday habits. Keeping off unsafe sites or devices may protect consumers from low-level spam, viruses, and having their credit card stolen. However, for more developed or intricate plots that go, say, from the A/C unit’s IP to the meat scale’s to the cash register’s, nothing a consumer does can protect their personal information.
    In the last case, industry is held responsible for cybersecurity, as cheap and easily hacked items find their way onto the market. While slight shifts in industry supply could make the general population less vulnerable to having data stolen, it would likely not prevent an attack from a skilled opposition.
    To summarize, in case of a large-scale directed attack (perhaps with kinetic effects as well) trained experts are the only solution. Long term, however, these attacks could be minimized by beefing up the built-in security of government-purchased tech — a part of industry.
    In the case of preventing widespread attacks online, ones that steal credit card information or give computers viruses for example, the responsibility shifts from experts to both consumers and industry. Industry has a responsibility to its buyers to sell reasonably safe devices, while consumers have a responsibility not to be obtuse and not to visit sketchy sites flagged as dangerous by their web browsers.
    The field of cyber threat is still evolving, expanding far past its original definition of electronically moving trains or hacking phone lines. So, it makes sense that cybersecurity is as well.

  22. As has been mentioned all semester, cybersecurity and cyberwarfare are still somewhat novel fields. With this in mind, I want to make a few comments on the three methods of cybersecurity mentioned in the original blog post (and presented as Tuesday’s class preparation).

    First is the role of military and field leaders. Because cyber is such a new and evolving threat, so are the responses and defenses needed to prevent attacks. I find it amazing that companies, the government, the military, etc. have developed the cyber defense programs and methods that they have. To me, it seems as though maintaining a country’s cybersecurity is one of the most tireless and difficult jobs out there. If we do not even know what the dangers are–what vulnerabilities exist that hackers and terrorists can capitalize on–then how are we to protect against them? The obvious answer is through airtight code and security systems. But again, the field is progressing so quickly that the impenetrable cyber defense of yesterday is old news, and no match for the hackers of today. The only way I can wrap my head around the depth of this field is to accept that the government and military are backed by incredibly intelligent individuals who have made much further strides in the cyber realm than anyone in the public realizes. The U.S. defense system must either be more advanced than advertised, or people are not worried enough about the threats that cyberattacks pose.

    Next is the public response to amping up personal cyber-security (which, in effect, may help to protect others by not allowing hackers to jump to and from shared networks). Some people have commented that the actions suggested by Skoudis in his talk are not realistic, but I disagree. In my opinion, the public has two choices: remain ignorant, risking their security, or put in a bit of effort and learn how to make effective moves to block breaches of their personal data, devices, etc. People love to complain, but hate to actually contribute time and effort. However, nobody has the right to whine about being targeted by hackers if they did not take the proper steps to prevent it. True, some of the “basic” measures that Skoudis talked about were a bit confusing–but he covered seven different options in under fifteen minutes. It is the public’s responsibility to invest whatever time they personally need to learn and understand how to protect themselves from attacks. If this means reading a little more on the measures that Skoudis presented on their own time, then so be it. While hackers have no right to invade people’s cyber identity and cause harm, we live in a world where bad things happen and this should be expected. It is time for the public to step up and accept the mantra that “life isn’t fair” with respect to their cybersecurity. Until that happens, they will be continually taken advantage of by hackers.

    Finally, I agree with the suggestion that industry standards should be developed to protect consumers from cyberattacks when the security capability is not in their hands. As with any industry, as it becomes more advanced and popular, so must the laws and regulations surrounding it. For example, before cars existed, there were no laws governing the way in which people should use them. However, after their rise to popularity–and necessity–rule by rule was added to the books to keep people safe. The same template should be followed in the cyber realm. As cyber comes to play a role in yet another aspect of life every single day, regulations need to keep up in order to maintain the cybersecurity of everyone out there.

  23. Elisa does a great job above of summarizing the main security challenges presented in this week’s reading and putting these concerns in conversation with each other. I would like to talk briefly about the reactive nature of cyber security, which was mentioned earlier in this thread, and the public response to amping up personal cyber-security that Colleene highlights.

    As my classmate Frank notes, the world of cyber development is a dynamic, highly technical and often rapidly changing field. While translating technological advancements and vulnerabilities to the public may be difficult, I believe it is vital for citizens to at least have a mildly advanced level of knowledge about these issues so that 1) they are better able to take precautionary measures to defend themselves and 2) they understand the steps the government is taking to provide cyber security.

    Yesterday in class, our guest lecturer Ed Skoudis shared a story about working on a project hacking kids’ toys. He told the CEO that they had hacked the doll and could make it repeat an action an infinite number of times, and the CEO didn’t care. However, he came back and presented this information in the “risk language” (I believe the example was the ability to scald a child), and his audience was much more receptive. There will be little mobilization or grassroots support behind these measures if they are presented in highly technical jargon that is out of touch with the perceptions and concerns of average citizens.

    My classmate Yasmeen suggests that the government introduce regulation on the private sector, making the “fine print” (or risks of purchasing) more accessible. I strongly agree with this proposed regulation. However, I would argue that it needs to go one step further to also provide clarity. It is not enough for the “fine print” to be more accessible; it also has to be more understandable. This will help push mainstream conversations about cyber security to become more nuanced and aware of the realities, even if only at a very high level.

  24. Many of Ed Skoudis’s comments in lecture were very enlightening to me, as I will be the first to admit that I know little to nothing about cybersecurity and how a company or government can be on both the attacking and defensive end in managing cyberspace. Since a lot of the defensive aspects have already been discussed here, I will add that my favorite part of Skoudis’s talk was hearing how the U.S. military can be better equipped to go on the attack against cyber hackers. With the strongest armed forces in the world, it seems obvious that we should hold our cybersecurity forces to an equal standard in order to be proactive, and not just reactive, when it comes to hacks.

    I appreciated Skoudis’s idea about creating a better incentive for hackers to follow a logical progression from serving our country for several years working on cybersecurity to then moving on and working for top private companies – which will inevitably outbid the U.S. military for employment. One of the best ways to strengthen our nation is to put our most capable and technical people at the forefront of the fight to protect our cybersecurity, and encouraging a path that starts with the military and ends with a financial raise at a big company (not unlike the traditional path for a pilot) will create a more educated populace and improve the military’s cyber forces in return.

  25. This post is a little late, but I want to briefly address two points that were raised in the foregoing conversation.

    One has to do with this notion of the air gap, which Ed Skoudis mentioned during his lecture and which Jeremy mentioned in his post. Israeli researchers in the past few years have found a way to extract limited amounts of data from the noise of an air-gapped computer’s fan. While this requires some device to be placed close enough to a computer to pick up the noise of the fan, it also demonstrates that there are ways to access air-gapped systems without the strict necessity of someone momentarily connecting the system to a network. One could imagine a state actor devoting humint resources to planting a few bugs in the server room of a major telecommunications company, for instance. (A preprint of the paper describing this can be found here: https://arxiv.org/ftp/arxiv/papers/1606/1606.05915.pdf.)

    The other has to do with this notion of government regulation. I want to echo in part and perhaps argue against in part what Colleen wrote in her post. Indeed, I agree that you can force companies to simplify their ToS’s and privacy policies, but with little effect on the end consumer’s interest in these matters. I would also add that, for companies whose revenue streams are primarily composed of the sale of consumer information, there is little incentive for these companies to radically change their privacy policies. Where I would differ with Colleen’s argument for personal responsibility is that, even if consumers were to learn more about the ways in which their information/data/hardware could be used for ill, events like state-sponsored surveillance or major coordinated cyberattacks may be considered so unlikely and/or otherwise nonthreatening that they do not affect end users’ behaviors.

    There are two paradigmatic cases to consider here, which I will only briefly gloss. First are cases of privacy. Recent revelations on NSA/CIA spying capabilities through, say, smart TVs do not appear to affect the behavior of people who use video/photo messaging applications for intimate purposes, who believe that they have “nothing to hide” or are “not doing anything illegal.” Even though people may be aware that these capabilities exist, the perceived likelihood of (other?) “bad actors” is low enough that people continue to use these technologies for personal ends. The other, more interesting examples are cases where consumer products are used distributively to cause harm (think IoT devices used in DDoS attacks). Here, there is even less incentive for consumers to care about “personal responsibility” for the same reason people don’t generally view the fighting of climate change as a “personal responsibility”: the net contribution of each individual to the harm caused is so low (only one device among millions in a DDoS attack) that it is not considered to be a responsibility to, for instance, change the default password or switch the underlying communication protocol from telnet to something else.

  26. I would like to address a few of the points above. Andrew, Lauren, Elise, and a few others discuss the need to incentivize clearer terms of service and privacy policies, and they also address the risk of privacy violations and harmful consumer products. In terms of increasing safeguards, I agree with the above thread that ignorance is a large cause for concern. With the role that computers play in everyday life, I think that education on the basic operations of telnet and computer technology would be helpful in informing people how best to protect themselves. Skoudis has a job because of his distinctive expertise, and even some of the smartest business people in the world are dependent on people like him to truly understand how their technology systems work.

    As a result, I would suggest that, just as we have driver’s education classes for young people, we should also have classes offered (and possibly required) in public schools on computer technologies. Like driving classes, these should have a safety component in addition to the esoteric or job-training focus that existing computer tech classes have at the elementary level. While terms of service and privacy statements explain the risks, and some sites and systems encourage people to change their passwords regularly or avoid downloading, people would care and be more proactive if they realized the cause and effect of their actions.

    In addition to encouraging safe behavior, a basic understanding of the risks associated with technology could put pressure on companies. Andrew talks about how the tiny contribution of any single device to a DDoS attack means that consumers are unconcerned, and therefore companies are not profit-motivated to increase their security or change their privacy policies. One way consumers could become more connected to the issue is if there were a large enough attack to affect a higher percentage of devices, resulting in a dangerous loss of information and public outcry. However, I believe this is unnecessary if consumers were simply made aware of the possibility and risk. Studies show that on issues of collective security (e.g., climate change or healthcare access), people are more willing to change or demand action with increased education. (https://blog.usaid.gov/2013/04/education-the-most-powerful-weapon/)
