Should We Be Afraid of Superintelligence?

Like the overall risk assessments that we made in class, the rise of a collective, quality, or general superintelligence seems like an inevitable event, but I find it hard to wager whether or not building a general superintelligence will lead to a Skynet/Terminator situation, or exponential technological growth to the benefit of humanity. Bostrom believes the emergence of superintelligence will be upon us sooner or later, and that our main concern is to “engineer their motivation systems so that their preferences will coincide with ours” (Bostrom). How does one do that? Would this superintelligent entity have the capacity for empathy, or emotions? Would it see the slow-motion of the world around it and feel compassion or pity for its human creators (or not recognize this connection at all after a few iterations of AI existence), or see humans as we see lab rats or fruit flies in a laboratory?

The promise that an “intelligence explosion” contains needs to be evaluated alongside the risk of losing human control of said system. Building a specific motivation, behavior, or task into an AI system can backfire into real-life undesirable outcomes. One example cited where a machine completes its task but fails in its real-life objective is an automated vacuum cleaner whereby if it “is asked to clean up as much dirt as possible, and has an action to dump the contents of its dirt container, it will repeatedly dump and clean up the same dirt” (Russell, Dewey, and Tegmark, AI Magazine 108, referring to Russell and Norvig, 2010). Other classic examples speak of a paper-clip making robot harvesting the world’s metal to create a global supply of paperclips, destroying the world in the process. Similar to Bostrom’s concerns, Russell, Dewey, and Tegmark note the difficulties or absurdity in the idea that an AI could understand law or more nuanced standards that guide human behavior. Supposing that robots could process the laws themselves, these laws rely on an interpretation that includes “background value systems that artificial agents may lack” (Russell, Dewey, and Tegmark, 110).

If we apply these worries to a superintelligence scenario, are we really facing a dystopian world? Perhaps it depends on the type of superintelligence. Whether speed, collective, or quality, all three as described do not define one type or the other as more likely to contain or comprehend human values or at least a respect for life. Rather, the focus here is on output, speed, and cleverness. In place of morals, we have instead use the term “preferences.” Would there ever be a way to make humans the top preference, or would a quality superintelligence see through that in a nanosecond, and reject it in preservation of its own system? Even if we as a society try to prevent an intelligence explosion, per Ackerman’s argument over AI weaponry, it may be a slow, but inevitable march toward this reality given the lack of barriers to entry. On a more separate note, I am curious as to how one characterizes Ava from Ex Machina then, if she is, say, a quality superintelligence. Would such a machine insidiously blend in with society, ie play by human societal rules until it can take over? The converse would be Cyberdyne Systems’ launch of Skynet, and the resulting war between human and machine. As scarily effective Ava’s manipulation of Caleb’s emotions was, I would still prefer that kind of AI to the Terminator. Are only humans capable of morality or compassion, or is there a way to encode it, or create a pathway for AI to develop it themselves? — Nicky

27 thoughts on “Should We Be Afraid of Superintelligence?

  1. “Are only humans capable of morality or compassion, or is there a way to encode it, or create a pathway for AI to develop it themselves?” The assumption underlying this statement and the concept of super-intelligence development overall is that we will eventually be able to develop an entity that supersedes humans. Well, to be able to construct an entity that supersedes humans in all aspects of existence, computational and moral/psychological, we need to be able to replicate and improve upon human emotion, memory, and reasoning. What we have only achieved so far is replicate and supersede human analytical power with computers. And since the human brain has not been fully mapped and the mechanisms underlying thought and emotion and how those two interplay have not been fully deciphered, I really doubt super-intelligence will be developed, or at least any time soon.

    The best we can do so far and I claim will be able to do in the next 60 years is just advanced machine learning algorithms. And do not get deceived, machine learning algorithms are just building upon existing data sets and classifications and are not characterized by any sort of computer initiative that would be required for not only a machine to escape human control, but more importantly to even be able to perceive the need to escape human control.

    Skynet and Ava are scary, but for the foreseeable future, they are just fiction. Humans cannot replicate humans so there is no way they can replicate something superior to them in all aspects of existence.

  2. The most important idea to take from Bostrom’s article on superintelligence is the need for preparation. Nicky’s blog post raises many important questions – questions that must be dealt with before building a superintelligence. As Bostrom notes in his article, once created, a superintelligence has the capacity to create an exponential explosion in research and learning. These tasks will be performed by the superintelligence itself, which will drive progress and advancement to unforeseen heights. This AI system would quickly become much more powerful than humans, and as such, it is critical that we prepare to protect ourselves. We must design a system that can be embedded in a superintelligence to ensure “that their goals coincide with ours.” The order of these actions is very important, for without adequate preparation, we will be left dangerously open and vulnerable to a powerful superintelligence.
    An even more difficult question emerges when we consider how to create an embedded system that will protect our interests in the long-term. We may succeed in initially designing a superintelligence that is amicable to our own goals, but how can we ensure the continuity of this system as the superintelligence evolves to new levels that we cannot comprehend? If this question cannot be answered, the possible risks may outweigh the rewards. If we cannot adequately prepare for the creation of a superintelligence, perhaps we should work to ensure that one is never created.

  3. I think it’s intriguing that up until this point in the course, all of the global security concerns we have discussed have been inherently dangerous. For example, we discussed nuclear weapons, epidemics, cyberwarfare—all, by definition, threatening to human life and potentially to humanity both now and in the future.

    In contrast, there is nothing inherently sinister about artificial intelligence. As the Russel (2015) piece points out, AI has the potential to bring “unprecedented benefits to humanity.” Further, the benefits reaped through AI development and use will be numerous and far-reaching before we have even developed the capability of creating a superintelligence. While Bostrom argues that prepping for the transition to machine intelligence is critical for our time, he also notes that superintelligence is still beyond the capabilities of present and near future systems. Decades or more may pass before we (or other computers) have established superintelligence generating capabilities.

    The challenging, thought-provoking questions presented in Nicky’s post are examples of AI concerns that may simply be too advanced or convoluted for any scientist, much less an everyday citizen, to consider. I overall have a hard time engaging with these questions because they feel both too abstract and too irrelevant considering our current lack of superintelligence capabilities. In addition, the benefits of these technologies so far outweigh the long-term concerns; it would be a pity if public fear and confusion over AI technologies was a barrier to critical research and development in this area.

  4. Twenty-three years before Nick Bostrom was born, in 1950, science fiction writer Isaac Asimov was considering this same dilemma—how to protect humanity while benefiting from AI’s. In his seminal work, I, Robot he outlined three laws governing robots: first, that robots could not harm humans, second, that they must obey humans, and third, that they must protect themselves, with the first prioritized over the others.
    However, Asimov did not attempt to explain how those values were imprinted in the robots’ “minds.” I agree with Elisa that concerns regarding how to create a value system for a superintelligence that coincides with our own is not relevant yet. Given that we have no concept of what a superintelligence system would look like, there is no way we can design controls for it. This would be akin to attempting to design a car’s braking system decades before the first car was built. We may discuss which values we wish superintelligence systems to have, as Asimov did, but to delve into the technicalities of how to encode those values seems fruitless.
    Still, this is not to say that we should dismiss Bostrom’s concerns. As our technologies approach the ability to build artificial intelligence that is superior to our own, we must consider how to protect humanity and derive the greatest benefits from that artificial intelligence. This will definitely include building the AI so that its “preferences” match with our own. Ava frightens me more than Skynet because, while Skynet was designed to be a weapons system, Ava was not, Still, she became deadly because of her creator’s decision to create a superintelligence before he figured out how to design a system of values for it.

  5. This topic is so fascinating because, unlike many of the other topics that we have discussed in this course, AI and super-intelligence ends up being much more theoretical which makes the possibilities almost endless for what it can become. At the same time, the fact that no one really knows what this development could become leads to intrigue and many doomsday scenarios.

    I am familiar with this idea of a technological singularity – a rapid technological growth in artificial super-intelligence that will supersede that of human civilization. And, for a while, I was stunned at the different trends and possibilities. However, I tend to agree with Yannis that there are limits to exactly how far these theories can go. We are still so far away from a singularity or some sort of AI takeover, and the fact of the matter is that these technological innovations do not seem to get exponentially more advanced forever. Just with any case of revolutionary invention, there tends to be an inevitable plateau in progress. And I believe that we are already starting to see that plateau. There has been a bit of a standstill in what had been the ever-increasing capabilities and computing powers of many devices.

    That is not to say that this should not be a concern. As Coy says, I agree that preparation would be helpful. Beyond preparation, perhaps even extreme care needs to be taken in regards to the creation of new technologies. Elisa is right that there is nothing necessarily sinister about these advancements, and much of it exhibits even a great net positive for humankind. However, the risk still exists for something to go wrong. Yet, I feel that fears of some kind of fictitious artificial super-intelligence Armageddon are a bit exaggerated and, for now, the risk seems to be minimal at best.

  6. In his post, Nicky asks, how do we engineer superintellence so that “their preferences will coincide with ours?” To me, the only conceivable solution is to fuse together biological intelligence with digital superintelligence. This solution calls for implanting a computer chip with high-band interface into the human brain to create what is essentially a cyborg. If Bolstrom’s “speed superintelligence” is incorporated into the human brain, brain function would accelerate exponentially while maintaining emotional and biological intelligence.

    Though this idea seems like would only be found in sci-fi books, Elon Musk is already working it. Musks new project, Neuralink, seeks to combine human and artificial intelligence through AI inserts into the brain’s cortex to advance cognitive reasoning.

    In an interview responding to Musk’s launch of Neuralink, Bostrom revealed his skepticism about the project, explaining that the way to control computers is not by plugging electronics into our brains. Though there could be strong medical applications for this type of AI insert (ie. Parkinson’s, epilepsy), it is far too difficult to enhance the capabilities of a healthy human brain.

    Whatever the limits of this technology may be, it’s important that Musk and Neuralink are beginning to exploring the possibilities of AI control.

  7. I think there is, without a question, that super-intelligence will be upon us within the foreseeable future, and that plans must be made to program the supper intelligent AI in order to guarantee that it will not threaten the safety of the human species. The, the relevant question, as Nicky poses it, is whether there is a way to make humans the “top preference” in order to keep their interests above those of the event.

    In Bostrom’s analysis, he talks about the “Control Problem” which searches for countermeasures to the security threat posed by AI. This is broken into two components: the “Value Loading Problem” (how to load values into the AI systems) and “Choosing the Criteria for Choosing” (choosing the values or value system that will inform the AI’s decisions and actions). However, Bostrom does overlook the fact that for any potential morality model, the AI system must give humans sufficient “moral status” for the values that are applied to us. Moral status is a philosophical concept that, if possessed, gives the entity equal interests regarding moral questions. For example, humans hold other humans to a high degree of moral status, and thus humans believe killing other humans is wrong. However, a chicken perhaps may not hold the same moral status, and for that reason, many humans have no qualms of the mass killing of chickens.

    In much the same way, even if we encode a system of morality into the AI, it may not believe that the values apply to us, because it does not accord us sufficient moral standing. The importance of programming sufficient moral status of humans into these super intelligent AIs can be understood in the context of the “paperclip problem” Nicky mentions, where values coded into the robot can still be turned on humans if AI do not give humans sufficient moral status. Therefore, this is an important consideration in the context of security, apart from a simple value system. (Outside the scope of this class, but another interesting consideration, is the rethinking of meat consumption by humans because humans are “more intelligent” than farm animals. If this is the case, shouldn’t super-intelligent AI be able to subject human interests to their own because of their superior intelligence?)

  8. Nicky’s post starts by raising the correct questions—that is, whether the inevitable rise of advanced machine learning and super intelligent systems will bring harm or benefit to humankind. The impulse to side with the former vision, which leads to mass unemployment at best and Skynet at worst, I think is illogical and born out of greater feelings of insecurity than necessary, and on an assumption that super intelligence automatically qualifies machines for every task and employment over humans.

    Take, for example, the Starbucks or Small World barista. Their function of making coffee for customers could easily be replicated by a machine or series of machines, which would likely do it better. (Baristas already use machines to help make coffee, and their function of integrating machines in the right order could easily be programmed.) The ingredients could be added in more precise proportions; customer input, coupled with the application of machine learning algorithms on stored feedback data, would allow these machines to create far better cups of coffee than any barista could ever hope to. Yet baristas will remain a part of our (coffee) culture because of the value the market places on having a human add an artisanal touch to our coffee, concurrently an implicit and explicit nod to an experience we prefer over better labor saving technology. Neither a speed, nor a quality, nor a collective super intelligence has succeeded in changing that yet, and I see no reason to think it will any time soon.

    Moreover, a fear of super intelligence mischaracterizes the fundamental differences between humans and computers—namely, that we set our own rules for ourselves. Whereas communities of power of course exist, and societal institutions have been created to impose regulations and laws that restrict the effective power of human malice, there is still little to stop me from strangling another person on a whim save their physical ability to resist and others to intervene around me, if I am of the mind to do it. Abstract laws mean nothing in the moment. However, permanent rules and conditional checks can be made explicit in any algorithm, which a computer will follow to a fault, no matter their level of “intelligence.” Ackerman makes a nod to this point when he discusses a robotic, intelligent soldier withholding engagement with hostile forces until explicitly instructed. They will not react to the fear unless programmed to do so, or given the means to learn it.

    Right now, humans have an ability to play God and create intelligence bounded permanently by morals. Whereas Adam, by virtue of his intelligence, and through the folly of man, was able to defy God and eat the apples of Eden, programmers can make it impossible for a computer to even consider the possibility, to think about the consequences, and develop temptation or malice. Super intelligence stands only to add value where we humans—through society and law and the marketplace—see it as fit to add value, and so long as it is hosted on silicon platforms can be made to obey rules and principles better than any human. Anxiety and fear ought to give way to excitement at this new technology, which stands solely at our disposal, under our control, and for service—not the other way around.

  9. Like Elisa and Chris, I find many of these questions about superintelligence to be vague and far-fetched. We need to take a step back and examine the current issues surrounding AI and ML to address the future. As we know, humans make machines, and thus impart their flaws and biases into technology. Current problems with ML and big data stem from biased training data sets and biased users, from using data to persecute already vulnerable communities to discrimination in online ad delivery. Do we really need to make machines more human-like and have human values if humans are inherently flawed and biased? Perhaps we need to work on our own empathy before we start trying to give robots empathy.

  10. I’m not sure if I quite agree with what Peter said above. The idea that we are the masters and can control everything a computer does is in direct contrast to those around us sounding the alarm about how AI may spell the end for humanity- from Hawking to Gates to the authors we read for this weak. Engaging with this abstract and theoretical material is a bit challenging given my total lack of knowledge in this field, but beyond programming AI to do something devastating, it’s possible, I suppose, that AI can carry out its desired goal in a destructive fashion. That is to say, while the AI may not have any malevolent or destructive intent since it lacks “temptation or malice”, that may not be necessary because it can use destruction to achieve its pre-ordered aims. In my base conception, I would keep AI relatively ‘dumb,’ with painstaking mapping of the exact steps needed to achieve its goals. I would not allow for a wide latitude in decision making of AI systems because then they may decide against us, whether knowingly or unknowingly.

  11. This is an extremely interesting topic for discussion. I believe that in the majority of the world population’s minds, the thought of an artificial intelligence or superintelligence takeover is nothing more than a movie plot that became popular in the eighties. However, those knowledgable on the rapidly advancing technological systems and machine learning would probably share a more worrisome viewpoint.
    I do not think that the public truly understands the capabilities of computers today and in the future. For example, I did not know that machine learning and artificial neural networks had the ability to progress in the way that they currently do, and I would like to think that I am relatively up to date on technological advancements. Personally, I definitely have a fear of uncontrollable AI. While it may not happen today, tomorrow, or in the near future, with computer systems able to teach themselves more and more without any human intervention, there is definitely a danger.

    On this note, I am questioning something that Yasmeen mentioned when speaking to the class. She noted that computers are only as impartial as the person who programmed them. This means that whatever basic code a program builds off of will carry the implicit bias of its programmer, whether intended or not. Any program that uses some kind of subjectivity or attempt at objectivity stemming from a personal viewpoint will reflect amplified versions of this in its final stages, especially in the case of machine learning and neural networks, which will repeat/amplify the bias a bit with each step. This has the potential to be comforting or extremely frightening. Someone with good intentions may thus cause the program to have a harmless bias written into it; however, poor intentions could certainly do the opposite. I will keep an eye on updates in this realm, and would not be surprised if we see some kind of self-driving, uncontrollable technology development in our lifetimes.

  12. Like Yannis and Collin, I tend to agree that superintelligence of the type to threaten the existence of humanity is not much of a worry. I might go as far as to say that this kind of superintelligence is impossible. The landmark results in metamathematics and computability theory of the first half of the 20th century (Godel incompleteness and Tarski undefinability being the ones most familiar to the popular imagination) give good reason to believe that there are a lot of limitations to what formal systems can achieve. If our posited superintelligence relies upon such an axiomatized formal system, then I see no reason to believe it could rival human ingenuity to the point where it could threaten human existence. And even if these types of results have little bearing on whatever model upon which putatively superintelligent systems are built, I would be willing to bet that we could find similar types of results or phenomena pertaining to this model. There may be physical constraints to building such systems, and as Yannis mentioned, we don’t understand human cognition or neuroscience well enough to be able to replicate it (though, perhaps, one could claim that we could replicate it independently of understanding it). Indeed, to meditate in a more speculative direction, some could argue that deep, generative rationality will always remain within the realm of human endeavor. See, for some slightly outdated examples, Chomsky on linguistics (https://www.cambridge.org/core/books/cartesian-linguistics/899ACCD2F576B94CDFE15BB528F37E50), motivated by Descartes’ thoughts on automata (https://www.gutenberg.org/files/59/59-h/59-h.htm).

    Peter also raises the point that there may not be demand for such a superintelligent system. Though I am of the mind that market forces are largely irrelevant to projects with both theoretical import and defense applications, I would agree with this claim. Bostrom mentions, for instance, the development of rudimentary proof systems that could, for instance, verify portions of Russell & Whitehead and “[come] up with one proof that was much more elegant than the original.” Besides the fact that much of Russell & Whitehead is already essentially written in programmable code, the question of computer-aided proof verification is fairly controversial in the mathematical community. Bill Thurston wrote a short essay (https://arxiv.org/pdf/math/9404236.pdf) in 1994 saying that Appel & Haken’s computational proof of the four-color theorem was unsatisfying because there is a “continuing desire for human understanding of a proof.” And if the mathematical community views this kind of human effort, which inevitably involves human flaws and imperfections, as central to its endeavors, this would certainly be the case in other, more major and important domains of human life. Zizek on sex (https://www.youtube.com/watch?v=7xYO-VMZUGo) and the idea of the uncanny valley (https://en.wikipedia.org/wiki/Uncanny_valley) illustrate this point nicely.

    One could also ask questions about the material conditions in which superintelligent systems could be developed (somewhat alluded to in Peter’s barista example). Bostrom discusses the growth modes which allow technological development, and I will leave open as a question whether a society with the resources to develop a superintelligent system will either be impossible or have certain structural features that either render the superintelligent system harmless. Here’s something to think about, for instance: would such a society likely operate under post-scarcity conditions which make communist economies feasible? If so, given the tendency of far-left sympathizers to eschew utilitarian-type moral systems, would superintelligent systems use utility functions to encode human moral standing?

  13. The fears of AI and superintelligence lie in the “quality superintelligence” that is “vastly qualitatively smarter” than the human mind (Bostrom 2014). It is this kind of superintelligence that surpasses our cognitive abilities in the way that humans fear most: it is not the speed of computation power or the collective computation power that is most threatening to mankind, but the notion that a system will be able to conduct abstract thinking, linguistic representation, and long-term planning better than humans. This quality superintelligent system will be to humans what we are to hamsters, and it is the kind of superintelligence around which dystopian sci-fi movies and books revolve.

    A “simple” solution would then appear that scientists and policymakers should prevent the creation of quality superintelligences that have abstract thinking abilities. However, even the smallest form of superintelligence – a mere speed or collective one – can quickly build and equip itself with the faculties necessary to become a formidable quality superintelligence. This uncontrollable self-teaching (and potential self-arming) adds to the necessity and urgency to restrict the development of superintelligence.

    Thus, the current technological plateau that Collin references will quickly become extinct once AI is in its infancy, as progress and intelligence will grow exponentially. As Bostrom writes, we would “go from there being no computer that exceeds human intelligence to machine superintelligence that enormously outperforms all biological intelligence.”

    Although these superintelligent systems seem almost unfathomable in the immediately foreseeable future, they will quickly become a reality once an unknown AI threshold is passed, so their implications and restrictions should be considered in the present. Coy’s points about the preparation for – and potentially even prevention of – the advent of superintelligence is vital at the current moment because if we bring them into this world (even unintentionally), will be able to take them out of it?

  14. (If the Zizek link in my previous comment truly poses a problem for moderation, please use this comment instead.)

    Like Yannis and Collin, I tend to agree that superintelligence of the type to threaten the existence of humanity is not much of a worry. I might go as far as to say that this kind of superintelligence is impossible. The landmark results in metamathematics and computability theory of the first half of the 20th century (Godel incompleteness and Tarski undefinability being the ones most familiar to the popular imagination) give good reason to believe that there are a lot of limitations to what formal systems can achieve. If our posited superintelligence relies upon such an axiomatized formal system, then I see no reason to believe it could rival human ingenuity to the point where it could threaten human existence. And even if these types of results have little bearing on whatever model upon which putatively superintelligent systems are built, I would be willing to bet that we could find similar types of results or phenomena pertaining to this model. There may be physical constraints to building such systems, and as Yannis mentioned, we don’t understand human cognition or neuroscience well enough to be able to replicate it (though, perhaps, one could claim that we could replicate it independently of understanding it). Indeed, to meditate in a more speculative direction, some could argue that deep, generative rationality will always remain within the realm of human endeavor. See, for some slightly outdated examples, Chomsky on linguistics (https://www.cambridge.org/core/books/cartesian-linguistics/899ACCD2F576B94CDFE15BB528F37E50), motivated by Descartes’ thoughts on automata (https://www.gutenberg.org/files/59/59-h/59-h.htm).

    Peter also raises the point that there may not be demand for such a superintelligent system. Though I am of the mind that market forces are largely irrelevant to projects with both theoretical import and defense applications, I would agree with this claim. Bostrom mentions, for instance, the development of rudimentary proof systems that could, for instance, verify portions of Russell & Whitehead and “[come] up with one proof that was much more elegant than the original.” Besides the fact that much of Russell & Whitehead is already essentially written in programmable code, the question of computer-aided proof verification is fairly controversial in the mathematical community. Bill Thurston wrote a short essay (https://arxiv.org/pdf/math/9404236.pdf) in 1994 saying that Appel & Haken’s computational proof of the four-color theorem was unsatisfying because there is a “continuing desire for human understanding of a proof.” And if the mathematical community views this kind of human effort, which inevitably involves human flaws and imperfections, as central to its endeavors, this would certainly be the case in other, more major and important domains of human life.

    One could also ask questions about the material conditions in which superintelligent systems could be developed (somewhat alluded to in Peter’s barista example). Bostrom discusses the growth modes which allow technological development, and I will leave open as a question whether a society with the resources to develop a superintelligent system will either be impossible or have certain structural features that either render the superintelligent system harmless. Here’s something to think about, for instance: would such a society likely operate under post-scarcity conditions which make communist economies feasible? If so, given the tendency of far-left sympathizers to eschew utilitarian-type moral systems, would superintelligent systems use utility functions to encode human moral standing?

  15. This topic immediately brought to mind an interesting dilemma I was introduced to while reading about the latest technologies coming out of Silicon Valley for autonomous vehicles. I read that recently a company had designed a machine learning program that taught itself how to drive based off of watching thousands and thousands of hours of humans doing it. The article went on to discuss how this was different than human learning (going off the basics of machine learning as discussed in lecture & the readings), and why it meant that it was basically impossible for any human to know why the car’s AI acted in the manner that it did.

    This implies that as AI’s access more and more data and create more and more algorithms for understanding how to take actions, they will surpass human understanding, just Bostrom and the other predict. But rather than approaching it from a values point of view as Bostrom wrote about, the fact that I read this article when we were still reading about nuclear issues in class got me thinking: is there away to ensure that these super-intelligent AI’s could only be allowed to access physical machines which have value systems we can surely control? That is to say, while we may create artificial intelligence much more powerful than us in thinking capabilities, if we limit its access to the physical world via machines and systems that we can create with inherent values into them that the superintelligence can in no way change, then we have effectively limited it. Not only that, but the fact that these machines would be limited would allow us to research why the superintelligence may have attempted to violate one of the values we encoded into the physical-interacting machines.

    If superintelligence has the potential to be so dangerous to humans, as seems likely given the concerns expressed by Russel et.al and Bostrom, then it becomes an immediate concern for policy makers to regulate the access these machine learning AI have to physically interacting machines.

  16. As Coy commented, Bostrom underscores the need for preparation in terms of artificial intelligence. I do not believe that the way moving forward should be to prohibit analytical abilities or abstract thought, as that limits beneficial scientific advancement. However, I do think there needs to be more oversight of this field. A study conducted in 2014 by Google-owned AI lab DeepMind and the University of Oxford sought to create a framework for handing control of AI experiments over to humans, i.e. a “big red button”. I think that this kind of control would help implement the safeguards that are needed in projects of artificial intelligence, but without limiting scientific experimentation within AI experiments. I guess the real fear here is an iRobot scenario where they learn beyond the “big red button” and destroy humans in preservation of their own system, as Nicky mentioned.

    The issue in this scenario is the timing and exponential growth of AI. As Delaney discussed, there is an uncontrollable self-teaching aspect of AI that is very distinct from the learning methods of humans. The graph presented in Bostrom that shows the explosion of world GDP highlights that “another drastic change in growth mode is in the cards”, possibly in the form of AI. That means that implementing institutional safeguards will almost constantly be behind the pace of AI, therefore rendering it ineffective especially in conjunction with the uncontrollable self-teaching aspect. This is why a “big red button” or a framework for human control in all AI processes are highly necessary because they are proactive not retroactive like institutional safeguards.

    I just wanted to touch quickly on “preferences” versus “morals”- can they ever really replace one another? Human life is nuanced and full of a lot of grey area that cannot be covered by ones preferences. Often, doing the right thing requires a series of assumptions based on an experiential moral code that may not be embedded in AI. I find this to be the most frightening aspect of AI, the seeming lack of nuance coded into these machines.

  17. I think there’s an additional moral question here, which complicates our ability to deal with any rogue AI problem that we may create in the future, especially by the means of some “big red button.”

    As I see it, our concerns here stem from and are illustrated by the following syllogism: (1) humans are intellectual, moral creatures, (2) artificial intelligence eventually surpasses human intellect, and (3) artificial intelligence lacks the same moral code as human persons.

    The question is: at what point is artificial intelligence sufficiently cognitive, emotional, and moral that it becomes immoral for human persons to exert control over them? Will humans, in future years, come to find morally consistent the view that if AI is more cognitively advanced than humans, it is inappropriate, let alone immoral, for humans to deny the same civil, political, and “human” liberties to AI? And if this “robot civil rights movement” becomes popular and widely accepted, then even if humans have the scientific knowhow to overcome the superintelligence concerns, will we have the willpower?

    One last note: I’m a little surprised that through twelve comments no one has mentioned Westworld yet! I’m a big fan of the show but one of the things that it underscores for me is that these questions won’t be going away anytime soon. And it’s not because the questions of robot morality, of superintelligence, etc. are particularly pressing, important, or even provocative (although they probably are), but they’re sexy! Who wouldn’t want to watch an epic western where the robots take over? Because of this, policymakers in this field (more than, say, epidemiology) have an added challenge in realistically and responsibly bringing the average person into this policy debate and making it accessible and real for all who stand t one impacted by any ramifications.

  18. This is a fascinating topic for discussion. As others have mentioned in the comments, I believe that predictions about the potential dangers of artificial intelligence are very unreliable simply due to the fact that we are still very far away from true artificial intelligence – one that can think and learn independently. While the computers are becoming more advanced and machine learning is developing rapidly, it still seems like these developments make the computers good at a variety of tasks, but nowhere near the intelligence and capabilities of humans. At least not yet.

    I heard a talk about a year ago by one of the professors in the computer science department regarding one potential path to creating AI – replicating the human brain. After all, our brain is nothing but an (incredibly) advanced biological mechanism, and understanding all of the underlying structure would enable us to build a similar one, but integrated into machines, rather than humans. I believe that such a development might make it possible to have an AI that thinks in ways that are similar to humans. However, morals and ethics is another question, and I cannot say to what extent it would be possible to teach this machine to behave ethically – especially since us humans are wildly diverging in our values too.

    I think that self-awareness is probably the biggest indication that we are dealing with a true artificial intelligence rather than a hyper-advanced machine system. What distinguishes Ava from a computer is the fact that she what she is, and is able to question aspects of her existence that don’t immediately make sense. You could probably argue that it is this self-awareness that makes Ava both interesting and scary, since without out, she would not question her imprisonment and testing in the facility. I think that ‘quality superintelligence’ discussed by Bostrom will be able to achieve self-awareness at some point regardless of human input, and it is therefore vital to think about possible methods for preventing this, if such methods exist at all. It is, however, unclear just how this will happen and whether or not the AI will decide, after becoming self-aware, that it is superior to its creators and should therefore be serving its own interests that might not be in alignment with ours. I don’t really see a way to prevent the AI from thinking this way once it has achieved self-awareness, but I am no expert, and this might be entirely possible. One might probably argue that preventing self-awareness in the first place would defeat the whole purpose of creating an AI in the first place, and to some extent I agree with this notion, but it is still important to think of ways to guide a self-aware AI so that it will still remain beneficial to us.

    For now at least, the potential dangers that AI represents for humans is more closely tied to potential changes in the social order, loss of jobs, institutional changes and improper or nonexistent regulation. These issues we can deal with with proper preparation – something that has already been discussed in other comments. Artificial intelligence such as Ava or AM from “I Have No Mouth, and I Must Scream” remains far too distant, and the latter especially is far more malevolent towards humans than a real AI is likely to be, but, for now, it is difficult to make concrete predictions.

  19. With Bostrom being a philosopher himself, it made me think that it is surprising that the qualities that so many think will define a superintelligence have not been pushed back on, even for the sake of exploration. Questions that come to mind include: does superintelligence necessarily mean the domination over humans through violence or manipulation? Does superintelligence necessarily mean machines will need or “desire” interaction with humans? Would super intelligent machines even “want” to or need to stay on Earth?

    Bostrom’s categories of superintelligence (2014) can help frame this thought experiment. Like Delaney mentioned, quality superintelligence is where the “fears of AI” stem from. I would argue that this is because we cannot even fathom the characteristics of higher levels abstract reasoning. Does intelligence or “cleverness” (2014) mean increasing efficiency by whatever means necessary, for example, killing off people to end a food crisis? Or would “long range planning” and other augmented capabilities of abstract reasoning lead AI to opt for more peaceful solutions to problems? Many religious principles and philosophical ways of reasoning tend to point to the idea that more enlightened or intelligent beings tend to be more peaceful, as they push themselves to get rid of biases. If biases are a result of misunderstanding or sociocultural context, which is what we currently understand them to be, then computers would naturally rid themselves of biases over time. However, the other side of the coin is the popular belief that, with increased intelligence comes somewhat of a decrease in emotional intelligence. This human version of increased intelligence gives insight into what happens when logic rules over empathy. Intelligence is a double-edged sword. Biases can be removed and peace can be sought, or, logic and efficiency in the absence of compassion can lead to devastation. Is compassion a part of intelligence? What is intelligence in its final form? An answer to this last question would leave us with much more excitement or fear for the future of AI.

  20. What seems to me to be the problem with a safe superintelligence is the proper communication of intent to the AI. Humans simply do not have the capacity to help the AI make decisions at every crossroads in its decision-making process, which will probably be an enormous amount of decisions per second. Instead we can give them laws to follow, but general rules also leave some room for interpretation.

    Take the movie iRobot for example. The beginning of the film states three laws for robots:
    Law #1 – A robot cannot harm a human being, nor through inaction, allow a human being to come to harm

    Law #2 – A robot must obey a human being, except when it conflicts with the first law

    Law #3 – A robot must strive to protect its own existence, except where it conflicts with the first two laws.

    The problem lies with the first law. There can be, and in the movie there are, situations in which those two objectives come into conflict. An AI system may decide to systematically eliminate the humans most likely to inflict harm another in order to “make the world more safe.” That decision would raise obvious issues for humans and would prompt more humans to fight against the AI, which would also place them in the violent category and cause more deaths. If we do decide to pursue the highest form of AI, we must be very careful for setting those goals for AI that are meant to coincide with ours. Otherwise the AI’s will believe that our mutual ends justify any sort of means.

  21. Eli brings up an important method of framing the moral qualms that our society has with artificial intelligence and superintelligence, which is the lack of a moral code in these AI robots. But what is crucial to understanding this problem is that before any sort of moral comprehension is possible for these robots, intelligent machines will first and foremost need to learn empathy and compassion. This is true not just for robots, but humans as well. Without the presence of empathy or compassion in humans, the field of ethics would not exist, and there would be no need for a moral code.

    Yet, how are AI supposed to learn empathy and compassion in order to encode a sense of morality in their programming? As Ackerman noted, the problem with a ban against AI weapons is that no letter or UN declaration is going to be able to “prevent people from being able to build autonomous, weaponized robots” (Ackerman 2015). As such, the morally salient question is not whether AI weaponized robots ought to exist, but rather how they can exist ethically.

    The implications from Ackerman’s conclusions is that the responsibility lies with the humans to find a way of making these robots ethical. In other words, if it is inevitable that robots are to exist in the future, humans ought to find a way to use them ethically. I would challenge this slightly in that it is not up to the humans, but rather the robots themselves to hold themselves accountable for their actions. Just as humans, no matter the number of laws, punishments, or public shaming, are less capable of holding each other ethically accountable as an individual is capable for himself, so too must be the case for these AI. And, in order for that to occur, these intelligence robots must be able to learn empathy and compassion in order to have a foundation from which moral codes can emerge. Even then, human emotions and human sentiments are much more than just a string of codes that produce one set of results. There are variances, there are nuances, and there are colors–elements that perhaps may never be reproduced, let alone exceeded, by robots, no matter their superintelligence.

  22. I strongly believe that we are in the middle of a transition to a new era of human development. Just as the agricultural revolution, the industrial revolution, and the information age all ushered in unprecedented human achievement, I believe we are entering what some call the augmented age, an era in which human development is augmented by highly intelligent computer systems. I think this new age bring incredible technological progress but will also give humans the power to create even more powerful and destructive weapons.

    I must disagree with the way the debate has been framed in some posts in this thread. I find ideas of “rogue AI” to be farfetched, at least in the near future. We do not need to be worried about a Skynet scenario in which AI deems humans to be incompatible with its ends and decides to wipe out humanity. Instead, what we should be concerned about in the augmented age is humans with nefarious purposes who have access to these super intelligent systems.

    We should not be so worried about T-800s as we should be about genetic engineered bioweapons. Already, intelligent computers are designing medicines to cure diseases in ways humans never thought possible. Imagine if we directed a more intelligent system to develop a highly contagious, highly lethal pathogen…

    Another concept we must come to terms with in the age of AI is that it is almost inevitable that AI will surpass human intelligence. In the scale of intelligence, we may imagine the least intelligent humans on the left end and people like Albert Einstein as the pinnacle of intelligence. This is far from reality. In truth, the scale of intelligence is much larger with the wide range of human intelligence occupying a relatively narrow band in this true intelligence scale. Thus, once AI systems begin to approach human levels of intelligence, they can very quickly surpass even the most intelligent of us.

    Already, AI is capable of tasks once thought impossible for computers. 50 years ago, a rudimentary computer could play tic-tac-toe. In 1997, Deep Blue beats Kasparov. And in 2016, Google’s AlphaGo beat Lee Sedol in the game Go, a game widely considered to require the highest levels of strategic thought. There is no reason for progress to stop. For better or for worse, I believe the augmented age will be the most influential and most defining age of humans’ development as a species.

  23. I would like to further elaborate on Yasmeen’s comment, through which she communicates that computers are only as impartial as the people who coded them. I believe that this statement raises further questions about the creators of a hypothetical “super-intelligence.” Something that has not been covered in this conversation remains the implicit power biases that are transferred from humans to their AI creations. The consideration of sociological relationships existing between different classes of humans in a research-driven society raise an entirely new set of questions.

    Primarily, who is creating the “friendly” AI and what kinds of biases arise as a result? If an elite group of scientists and researchers unilaterally control the development of a “super-intelligence,” what inherent discriminations does this create for the rest of humanity? To what cross-section of humanity is the “human-friendly” AI technology actually friendly to? Whereas it is theoretically-possible for an all-knowing technology that surpasses all human biases (regarding humans as a singular, homogenously-inferior set of beings), what kinds of exploitation can occur in the meantime as that technology remains in the making? Is it possible for the elites to create intelligences that deliberately oppress the disadvantaged along political and socioeconomic lines?

    If “super-intelligences” have access to the physical world, they may disproportionately endanger the poor — who do not have access to adequate resources to protect themselves against the risks of rogue technology. If the government chooses to embrace the development of “super-intelligent” artificial intelligences, then it must also consider protections needed to prevent the exaggeration of pre-existing socioeconomic inequalities within humanity. Anxieties of tyranny are further echoed by Hruy Tsegaye, an Ethiopian writer who expresses concerns about the restriction of high-tech to a select few (Goertzel, 2015).

    Government, in such a society, must also consider protecting the oppressed classes from the elite biases encoded into such a super-intelligence. In a hypothetical society in which “super-intelligences” are created, state actors play an increasingly complicated role in navigating the balance between fostering social equity and fostering the intellectual elitism ultimately needed to create such a technology in the first place.While the possibility of further exploitation of the poor and vulnerable should not inhibit or disqualify the potential advances made by the development of “super-intelligent” artificial intelligences, the sociological implications of the pursuit and realization of such a technology should be further examined. Whereas it is possible for these “super-intelligences” to solve the poverty that has plagued mankind for its entire existence, it is also possible for these technologies to be manipulated to the benefit of the elite who are responsible for their creation.

    Goertzel, 2015: http://jetpress.org/v25.2/goertzel.htm

  24. I think that all of this discussion about artificial intelligence and the seemingly inevitable rise of superintelligence underestimates how far away we currently are from such an event. Certainly, we have made enormous progress in improving the “intelligence” of our computers and other technology, but they are still incredibly far away from matching a human brain at general reasoning.

    Take the examples of computers beating humans at games such as chess and Go. Deep Blue, developed by IBM, defeated the reigning world champion at chess in 1996, while AlphaGo, developed by Google, won against a world-class Go professional in 2016. Why such a large time gap between these iconic events? Go has much simpler rules than chess, and consequently many more possibilities. However, despite its enormous complexity, even Go is still a game with perfect information and clearly defined rules and goals. While computers can certainly be extremely “creative” in their solutions (several of AlphaGo’s moves were praised by experts as astonishing and ingenious), the scope of the problems that computers can tackle has to be extremely limited. How does one go about asking a computer to work on solving a complex problem like fighting climate change when we can’t even begin to describe all of the factors involved and their relation to each other and to the problem?

    We can see that though computers are extremely capable, they are still limited to working on clearly defined problems of very limited scope, and in my opinion are still very far away from approaching the amazing flexibility of a human brain. This is not to say that they will never reach such a level, only that we probably have a long time to ponder such issues regarding superintelligence before they become relevant.

  25. Nicky, you raised several thought provoking ideas. If we were to create a self-aware superintelligence, what would be its view of its creators? Is AI capable of compassion and emotion? In general, I believe that the second question is the more thought provoking, as it opens up a whole new set of ethical and moral dilemmas. Bostrom in his article discusses the former, yet even he concedes that the AI he mentions does not exist in the present and may not exist in the near-future. While we must certainly discuss what the development of a superintelligence means for humanity, I think that it is much more important to talk about the systems which we currently have.

    We already have cars and planes which can drive and fly themselves, machines which can take complex inputs, analyze them, and determine the correct action. Two seniors I know recently built an AI controlled plane which can operate autonomously once in the air. If two college students can build such a device, how long before autonomous weapons are employed widely on the battlefield? I have repeatedly heard discussion amongst naval and air force aviators, urging cadets and midshipmen not to select “pilot” as their service of choice, simply because in 20, 15, or even 10 years, there will be no manned aircraft.

    Yet pilots in general represent only a small fraction of employment across the US; industries like trucking and goods distribution employ millions of workers across the country. What happens when AI replaces truck drivers and we have millions more suddenly unemployed? And it is not just low-skilled jobs like truck drivers, mechanics, and restaurant workers who may have their livelihoods at risk. What happens when we develop AI capable of proofreading and analyzing contracts, diagnosing diseases (as some AI, like IBM’s Watson, are already capable of doing), and teaching? Will we still need lawyers, doctors, and teachers? I see a society of unemployed as a more dangerous consequence, and also a more likely one, than a malevolent superintelligence which seeks to destroy its creator.

  26. I’m fascinated by the discussion above, and I’d like to add another topic into the conversation. In addition to the ethical and moral dilemmas posed by artificial intelligence, I also wonder about the challenges posed by legislation. By legislation, I am referring to real-life laws and regulations that are in place to ensure a decent quality of life for human beings.

    Contemporary regulations are often detrimental to newer technologies. For instance, one scientist claimed a few years ago that the single biggest obstacle for the mass adoption of automated vehicles is public legislation. In his words, governments are simply not prepared to create new laws that will oversee how insurance companies handle accident cases, whether they should simultaneously allow human drivers the freedom to drive should they wish, what to do with the current system of street lights, etc. These problems would likely be compounded in the case of superintelligence. For instance, AIs would probably be intelligent enough to drive out all human competition from financial markets. What legislations should be put in place to ensure that the AI does not structure the markets in a way that is detrimental to human society? Another potential problem may be the medical field: if society eventually comes to the point that major surgeries are handled by AI, what legislation is in place regarding the responsibility for a patient’s death? These are not easy questions, and considering the widespread impact that an AI will have all aspects of human society, be it the economy, the voting system, etc., it may simply be the case that humanity is simply not ready to handle all the legislative implications of a superintelligent being.

    One potential solution for this dilemma is to initially limit the operations of a superintelligent AI to a single, relatively risk-free industry field, such as the operation of a bottling factory. Societies could experiment with various regulations and laws over time to ensure that legislation is in line with the capabilities of the AI, before moving onto another industry or field.

  27. The above conversations are fascinating, and address the inherent conflict in creating AI that could possibly think beyond the capacity of mankind. Aside from the question of how feasible this level of AI is, I wonder even if AI was developed and programmed with the “correct” morals, what security protocols would be possible? When Ed Skoudis lectured in class and explained his Counter Hack company and methodology, a big takeaway was the potential for any network enabled technology to be hacked. Anything from phones to Barbie dolls could be intercepted and weaponized with the right human ingenuity and loophole.

    When and if AI is developed, it may be possible for the carefully crafted regulations on the AI to be superseded by hackers. When a Barbie Doll is connected to the internet it could be hacked to become burning hot. With extremists driving cars through crowds in Nice and Tel Aviv, imagine what would happen if they were to engage in a battle of man verses machine for control of a self-driving car. It would be a true test of the “intelligence” of the AI, and may require a self-driving cars scope to expand beyond the limited abilities that have already been developed. Maybe traditional hacking risks could be assuaged if the thing being hacked was able to fight back for itself, with an ability to adapt to the risks. However, that ability to adapt could result in other risks and while they are probably not possible in the foreseeable future, are worth considering. Sixty years ago the dominance and ability of personal computers was unfathomable.

    Nikki’s question about whether we end up with an AI like the Terminator or like Ava is a good one, and there is a fascinating discussion in the above comments over the value of imbuing AI with emotions or a certain level of intelligence. However, I think another dystopian comparison to consider may be a possibility like Avengers: Age of Ultron. As dumb as many aspects of that movie may be, there could maybe a battle for control of AI forces like the one between Ultron and the Avengers. But, the AI could be the bad guy or the good guy, and a person could be the bad or the good.

Leave a Reply