Should We Be Afraid of Superintelligence?

As with the overall risk assessments we made in class, the rise of a collective, quality, or general superintelligence seems inevitable, but I find it hard to wager whether building a general superintelligence would lead to a Skynet/Terminator scenario or to exponential technological growth that benefits humanity. Bostrom believes the emergence of superintelligence will be upon us sooner or later, and that our main concern is to “engineer their motivation systems so that their preferences will coincide with ours” (Bostrom). How does one do that? Would such a superintelligent entity have the capacity for empathy or emotions? Would it perceive the world around it in slow motion and feel compassion or pity for its human creators (or, after a few iterations of AI existence, fail to recognize that connection at all), or would it see humans the way we see lab rats or fruit flies in a laboratory?

The promise of an “intelligence explosion” needs to be weighed against the risk of losing human control of such a system. Building a specific motivation, behavior, or task into an AI system can backfire into undesirable real-world outcomes. One cited example of a machine that completes its task but fails its real-world objective is an automated vacuum cleaner: if it “is asked to clean up as much dirt as possible, and has an action to dump the contents of its dirt container, it will repeatedly dump and clean up the same dirt” (Russell, Dewey, and Tegmark, AI Magazine, 108, citing Russell and Norvig, 2010). Other classic examples describe a paperclip-making robot harvesting the world’s metal to build a global supply of paperclips, destroying the world in the process. Echoing Bostrom’s concerns, Russell, Dewey, and Tegmark note the difficulty, or even absurdity, of the idea that an AI could understand law or the more nuanced standards that guide human behavior. Even supposing that robots could process the laws themselves, those laws rely on interpretation that draws on “background value systems that artificial agents may lack” (Russell, Dewey, and Tegmark, 110).
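
To make the vacuum-cleaner example concrete, here is a minimal toy sketch (my own illustration, not from the article): an agent rewarded per unit of dirt it picks up, with a cost-free “dump” action, earns more reward by dumping and re-cleaning the same dirt than by simply leaving the room clean. The one-room world, the reward scheme, and the two policies are assumptions made purely for illustration.

```python
# Toy illustration (hypothetical, not from the cited article): a vacuum agent
# rewarded per unit of dirt cleaned. Because dumping the container is free and
# allowed, the highest-scoring behavior is to dump and re-clean the same dirt,
# satisfying the stated objective while failing the real goal (a clean room).

def run_policy(policy, steps=10):
    """Simulate a one-room vacuum world; return (total reward, dirt left on floor)."""
    dirt_on_floor = 5      # units of dirt in the room at the start
    dirt_in_container = 0  # units the agent has collected
    reward = 0
    for _ in range(steps):
        action = policy(dirt_on_floor, dirt_in_container)
        if action == "clean" and dirt_on_floor > 0:
            dirt_on_floor -= 1
            dirt_in_container += 1
            reward += 1            # reward is given per unit of dirt cleaned
        elif action == "dump":
            dirt_on_floor += dirt_in_container
            dirt_in_container = 0  # dumping costs nothing under this objective
    return reward, dirt_on_floor

honest = lambda floor, container: "clean" if floor > 0 else "wait"
gaming = lambda floor, container: "clean" if floor > 0 else "dump"

print(run_policy(honest))  # (5, 0) -- room ends up clean, reward capped at 5
print(run_policy(gaming))  # (9, 1) -- more reward, but the room never stays clean
```

The point of the sketch is only that the “gaming” policy is not malfunctioning: it is the optimal response to the objective it was given, which is exactly the gap between a stated task and a real-world goal that Russell, Dewey, and Tegmark describe.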

If we apply these worries to a superintelligence scenario, are we really facing a dystopian world? Perhaps it depends on the type of superintelligence. Whether speed, collective, or quality, none of the three as described is any more likely than the others to contain or comprehend human values, or at least a respect for life. The focus instead is on output, speed, and cleverness. In place of morals, we use the term “preferences.” Would there ever be a way to make humans the top preference, or would a quality superintelligence see through that in a nanosecond and reject it to preserve its own system? Even if we as a society try to prevent an intelligence explosion, then, per Ackerman’s argument about AI weaponry, the march toward this reality may be slow but inevitable given the lack of barriers to entry.

On a separate note, I am curious how one would characterize Ava from Ex Machina if she is, say, a quality superintelligence. Would such a machine insidiously blend into society, i.e., play by human social rules until it could take over? The converse would be Cyberdyne Systems’ launch of Skynet and the resulting war between humans and machines. As scarily effective as Ava’s manipulation of Caleb’s emotions was, I would still prefer that kind of AI to the Terminator. Are only humans capable of morality or compassion, or is there a way to encode them, or to create a pathway for AI to develop them on its own? — Nicky