SuperDemonic Machines: Philosophical Exercise or Existential Threat?

“As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence.” That line comes from Oxford philosopher Nick Bostrom’s book Superintelligence: Paths, Dangers, Strategies. In the first two chapters we read, Bostrom makes the case that human-level machine intelligence (HLMI), and subsequently ‘superintelligence’, may be quite near, offering a short history of AI, descriptions of existing technologies, and expert opinions.

Bostrom makes the case that, just as developments like agriculture once seemed impossible to hunter-gatherers, AI superintelligence may seem impossible to us now. That is an argument I’ve heard before, and one that is much less persuasive to me than a claim he makes much earlier in the chapter and demonstrates throughout: that there is a general ignorance of the timeline of advances. To show this, Bostrom works through the history of AI, moving through what he calls “Seasons of Hope and Despair,” from the early demonstration systems in ‘microworlds’ of the 1950s to the neural networks and genetic algorithms that created excitement in the 1990s. Keenly aware of potential refutations, Bostrom notes what many have used to argue against him: every period of hope in AI has been followed by a period of despair, and the systems have often fallen short of our expectations. Human-level machine intelligence has been ‘postponed’ as we encounter problems of uncertainty, intractable exhaustive searches, and the like. Bostrom does not contradict these assertions, but he follows that concession with a section titled “State of the Art,” in which he details what machines can already do. He notes that these accomplishments may seem unimpressive only because our definition of impressive shifts as advances continue around us. Remarkably, the expert opinions at the end of the chapter give HLMI a 90% chance of existing by 2100, a 50% chance by 2050, and a 10% chance by 2030. Are those estimates impressive to you?

Chapter Two, “Paths to Superintelligence,” is much more technical than the first, and works through a list of conceivable technological paths to superintelligence, including AI, whole brain emulation, and biological cognition. These multiple possibilities, as Bostrom notes, increase the probability that “the destination [superintelligence] can be reached via at least one of them.” The book asserts that superintelligence is most likely to be achieved via the AI path, though it gives whole brain emulation a fair shot.

As someone who is not familiar with these technologies, I found these first two chapters rather convincing. It was helpful that Bostrom addressed the difficulties without necessarily making them seem insurmountable, but again, with limited technical knowledge, it’s hard to tell when Bostrom’s “will be difficult” really means “impossible.” The Geist article questions whether superintelligence is really “an existential threat to humanity,” and Geist clearly thinks not. Though quite abrasive – “AI-enhanced technologies might still be extremely dangerous due to their potential for amplifying human stupidity” – he makes a series of good points about AI and about some of Bostrom’s proposed solutions (which are discussed later in the book). Geist notes that as people have researched AI, they have discovered fundamental limitations that, while not ruling out HLMI, make ‘superintelligence’ extremely unlikely. He says that in discussions of AI we have often conflated inference with intelligence, citing the General Problem Solver as an example. He also discredits Bostrom’s idea of dealing with potential superintelligence by giving AI ‘friendly’ goals and keeping it sympathetic to humans, noting that even if superintelligence were a real concern, it is unlikely that Bostrom’s approach to the “control problem” would work, because of goal mutation. Most convincing, however, is Geist’s distinction between reasoning and other elements of human intelligence, which is what I found most absent from Bostrom’s account. Specifically, Geist notes that “While [recent] technologies have demonstrated astonishing results in areas at which older techniques failed badly—such as machine vision—they have yet to demonstrate the same sort of reasoning as symbolic AI programs.”

Though I appreciated Bostrom’s clarity in many instances, and don’t mean to say that his definitions of terms such as ‘superintelligence’ (which he defines as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”) are incorrect, I do think that in our discussions of these issues – especially if we subscribe to Bostrom’s rather catastrophic thesis – we must be careful to articulate what exactly it is we are scared of, both to estimate how likely that threat really is and to strategize a good way of dealing with it. As Geist notes, “Nor does artificial intelligence need to be smarter than humans to threaten our survival—all it needs to do is make the technologies behind familiar 20th-century existential threats faster, cheaper, and more deadly.” Perhaps focusing on more pressing challenges, “…such as the Defense Advanced Research Projects Agency’s submarine-hunting drones, which threaten to upend longstanding geostrategic assumptions in the near future,” is more helpful than worrying about these potentially demonic superintelligent machines. (Thoughts?) While I think a fair analysis of this issue requires more technical information, I’d be interested to hear your thoughts on some of these questions:

Do you have differing opinions regarding the possibility of HLMI and superintelligence, and what do you think of Geist’s point that we sometimes conflate “inference with intelligence”?

How far away do you think HLMI is (if it is possible at all), and which ‘path’ do you find most compelling?

Supposing superintelligence is possible, do you see it as a threat to humanity, and if so, how serious of a threat? Is HLMI, or even AI, a threat in itself? Why or why not?

Consider the first sentence of this post – is the comparison to the relationship between humans and gorillas a good one? Most importantly, if we do consider it a threat, what do you propose we do to deal with it?

Do you see a distinction between reasoning and other cognitive capabilities, and does this change what you think about the possibilities for HLMI/superintelligence?

— Maria