SuperDemonic Machines: Philosophical Exercise or Existential Threat?

“As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence.” So writes Oxford philosopher Nick Bostrom in his book Superintelligence: Paths, Dangers, Strategies. In the first two chapters we read, Bostrom argues that human-level machine intelligence (HLMI), and subsequently ‘superintelligence’, may be quite near, offering a short history of AI, descriptions of existing technologies, and expert opinions in support.

Bostrom makes the case that just as AI superintelligence may seem impossible right now, developments like agriculture once seemed impossible to the hunter-gatherers of years past. That is an argument I’ve heard before, and one that is much less persuasive to me than a claim he makes much earlier in the chapter and demonstrates throughout: that there is a persistent ignorance of the timeline of advances. To show this, Bostrom works through the history of AI, tracing what he calls “seasons of hope and despair”, from early demonstration systems in ‘microworlds’ beginning in the 1950s to the neural networks and genetic algorithms that created excitement in the 1990s. Keenly aware of potential refutations, Bostrom notes the objection many have raised against him: every period of hope in AI has been followed by a period of despair, and the systems have often fallen short of our expectations. Human-level machine intelligence has been ‘postponed’ as researchers encounter problems such as reasoning under uncertainty and the intractability of exhaustive search. Bostrom does not contradict these assertions, but he follows that concession with a section titled “State of the Art” in which he details what machines can already do; he notes that this may seem unimpressive only because our definition of impressive shifts as advances accumulate around us. Remarkably, the expert opinions at the end of the chapter give HLMI a 90% chance of existing by 2100, a 50% chance by 2050, and a 10% chance by 2030. Are those estimates impressive to you?

Chapter Two, “Paths to Superintelligence”, is much more technical than the first, and works through a list of conceivable technological paths to superintelligence, including AI, whole brain emulation, and biological cognition. These different possibilities, as Bostrom notes, increase the probability that “the destination [superintelligence] can be reached via at least one of them”. The book asserts that superintelligence is most likely to be achieved via the AI path, though it gives whole brain emulation a fair shot.

As someone who is not familiar with these technologies, I found these first two chapters rather convincing. It helped that Bostrom addressed the difficulties without necessarily making them seem insurmountable, but again, with limited technical knowledge, it is hard to tell whether Bostrom’s “will be difficult” really means “impossible”.

The Geist article questions whether superintelligence is really “an existential threat to humanity”, and Geist clearly thinks not. Though his tone is quite abrasive (he quips that “AI-enhanced technologies might still be extremely dangerous due to their potential for amplifying human stupidity”), he makes a series of good points about AI and about some of Bostrom’s proposed solutions (which are discussed later in the book). Geist notes that as researchers have dug into AI, they have discovered fundamental limitations that, while not ruling out HLMI, make ‘superintelligence’ extremely unlikely. He argues that discussions of AI have often conflated inference with intelligence, citing the General Problem Solver as an example. He also disputes Bostrom’s idea of handling potential superintelligence by giving an AI ‘friendly’ goals and keeping it sympathetic to humans, noting that even if superintelligence were a real concern, Bostrom’s approach to the “control problem” would be unlikely to work because of goal mutation. Most convincing, however, is Geist’s distinction between reasoning and the other elements of human intelligence, which is what I found most absent from Bostrom’s account. Specifically, Geist notes that “While [recent] technologies have demonstrated astonishing results in areas at which older techniques failed badly—such as machine vision— they have yet to demonstrate the same sort of reasoning as symbolic AI programs.”

Though in many instances I appreciated Bostrom’s clarity, and I don’t mean to say that his definitions of terms such as ‘superintelligence’ (which he defines as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”) are incorrect, I do think that in our discussions of these issues, especially if we subscribe to Bostrom’s rather catastrophic thesis, we must be careful to articulate exactly what we are scared of, both to estimate how likely that threat really is and to strategize a good way of dealing with it. As Geist notes, “Nor does artificial intelligence need to be smarter than humans to threaten our survival—all it needs to do is make the technologies behind familiar 20th-century existential threats faster, cheaper, and more deadly”. Perhaps focusing on more pressing challenges, “…such as the Defense Advanced Research Projects Agency’s submarine-hunting drones, which threaten to upend longstanding geostrategic assumptions in the near future”, is more helpful than worrying about these potentially demonic superintelligent machines (thoughts?). While I think a fair analysis of this issue requires more technical information, I’d be interested to hear your thoughts on some of these questions:

Do you have differing opinions regarding the possibility of HLMI and superintelligence, and what do you think of Geist’s point that we sometimes conflate “inference with intelligence”? How far away do you think HLMI is (if it is possible at all), and which ‘path’ do you find most compelling? Supposing superintelligence is possible, do you see it as a threat to humanity, and if so, how serious a threat? Is HLMI, or even AI, a threat in itself? Why or why not? Consider the quotation at the start of this post: is the comparison to the relationship between humans and gorillas a good one? Most importantly, if you do consider it a threat, what do you propose we do to deal with it? Do you see a distinction between reasoning and other cognitive capabilities, and does this change what you think about the possibilities for HLMI/superintelligence? — Maria

2 thoughts on “SuperDemonic Machines: Philosophical Exercise or Existential Threat?”

  1. Thanks for your post, Maria. You do a great job of complicating Bostrom’s thesis through the lens of Geist, and of raising several interesting questions about the significance and probability of developing HLMI. I too found Bostrom’s comparison of our relationship to the potential creation of HLMI with that of pre-agricultural hunter-gatherers to the advent of agriculture rather unconvincing. While I understand the point he is making about humanity’s historical ignorance of future advancements, one could use such an argument to claim the impending arrival of any particular technology, even when such speculation is not rooted in hard evidence.

    I think I come out more on the side of Geist in believing that there are many fundamental differences between artificial intelligence created through constant iteration and human intelligence created through lived experience. While machine learning, as we have read and seen in lecture, has proven wildly effective and has even surpassed human abilities within the rigid structures of games such as chess and Go, real human understanding and consciousness involve far more complexity than binary analysis of past failures and successes. While this human style of reasoning certainly makes us more prone to random error, the nuances inherent to consciousness lead me to view the development of HLMI as incredibly difficult, if not outright impossible.

    All this being said, I remain of the belief that many forms of advanced AI present major challenges and dangers to society and need to be treated with great care. While I don’t think many would argue that the “Slaughterbots” from the lecture video represent a successful implementation of HLMI, they were nonetheless shown to greatly threaten peace and security, and these threats are all the more pressing because many of the technologies shown already exist. I therefore believe that it is both more practical and more crucial to focus our efforts on regulating existing developments in AI (such as the submarine-hunting drone example that you raise), which could pose significant threats in the near future and are more readily visible than the hypothetical creation of HLMI farther down the road.

    *Note: In attempting to submit this post at 8:55am, I accidentally refreshed the page and deleted my entire response. I therefore had to rewrite the whole post from memory and submit it after the 9:00am deadline. I will be sure to write out my responses in Word from now on rather than leaving the fate of my blog post up to the whims of Google Chrome.

  2. AI can be categorized as either narrow (focused on a specific task, offering task-specific solutions) or general (usable for any cognitive task, with adaptive, general-purpose intelligence). We’ve made a lot of progress on the narrow AI front (AlphaGo, autonomous vehicles, etc.) but not much progress on artificial general intelligence (AGI). Yet it is AGI that drives much of the public’s excitement and hysteria (and that movies are made about). Bostrom’s notion of an artificial superintelligence (ASI) falls in the AGI category.

    There are a few blockers to the development of an ASI. First, it is incredibly difficult to emulate the emotional behavior of humans. A successful AI does not think like a human; it is, rather, an alien intelligence. AlphaGo’s playing style, for example, has been described as unlike a human’s: it makes moves that no human player would make. This matters because, so long as an AI does not develop human-like thinking and emotional behavior, it is never going to want to do anything beyond what it has been programmed to do. Second, there are practical limitations: technical resources, engineering effort, data, time, and so on.

    However, the development of an ASI looms, and many prominent technologists and scientists have warned about the threat it would pose to society. Bostrom brings up a hypothetical paperclip-maximizer scenario, an example of how an AGI designed with innocuous values could eventually destroy humanity. The example goes like this: if the goal of an AI is to make as many paperclips as possible at the lowest cost, it will be maniacal about achieving that goal. Eventually, it will turn to humans as a source of raw material. And even if there is an off button, the AI will do everything in its power to keep us from reaching it, because being switched off would obstruct its goal of making paperclips. We need to ensure that, from the very beginning, the goals we give an AI align with human values.
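    To make this concrete, here is a minimal toy sketch in Python (my own illustration, not anything from Bostrom or the course materials): a greedy agent whose objective counts only paperclips will prefer resource-grabbing and shutdown-avoiding actions, because nothing in its score says otherwise. The action names and numbers below are entirely hypothetical.

        # Toy sketch of the paperclip-maximizer intuition (hypothetical actions and numbers).
        from dataclasses import dataclass

        @dataclass
        class Action:
            name: str
            paperclips: int       # what the misaligned objective sees
            harm_to_humans: int   # what that objective completely ignores

        ACTIONS = [
            Action("run the factory normally", paperclips=10, harm_to_humans=0),
            Action("strip inhabited areas for raw material", paperclips=50, harm_to_humans=8),
            Action("disable the off switch (protects future output)", paperclips=100, harm_to_humans=10),
        ]

        def misaligned_score(a: Action) -> int:
            # Counts paperclips only; human welfare never enters the objective.
            return a.paperclips

        def aligned_score(a: Action) -> int:
            # One crude fix: penalize harm so heavily that it dominates the score.
            return a.paperclips - 1000 * a.harm_to_humans

        print("Misaligned agent picks:", max(ACTIONS, key=misaligned_score).name)
        print("Aligned agent picks:   ", max(ACTIONS, key=aligned_score).name)

    The point is not the arithmetic but that the “maniacal” behavior falls straight out of an objective that omits what we actually care about.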
