On Superintelligence

First, for anyone who is a little lost, wants a simpler explanation, or is just really interested in the topic, I found a funny, detailed blog post with graphics and examples that explain AI and superintelligence pretty well (from what I can tell).

waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

It also has this graphic which I think articulates some of the ideas from Bostrom’s article in a visual way.

[Graphic: exponential growth of computing]

In his article, Bostrom describes the coming of a moment in which artificial intelligence will surpass the intelligence of the human mind. This moment, Bostrom stresses, is both closer than we think and incredibly dangerous. At that point, AI will be able to improve and replicate itself, and an intelligence explosion will occur. The biggest question is whether the goals of the AI will coincide with the goals of the human race. Bostrom hopes that they will, but fears what would happen if they don't.

I have several questions. First, do you buy it? Do you believe that by the time our generation is nearing death (2060-2080) AI will have become superintelligent? If so, what would the implications of such a world be? If AI is capable of performing all work, would human beings serve any real function at all?

Also, how do we make policy regarding AI? Should the government draw the line at superintelligence and only allow AI systems up to that point? Or do we encourage the responsible development of AI to any level? — Kennedy

15 thoughts on “On Superintelligence”

  1. Thanks for the post, Kennedy!
    Artificial intelligence raises an interesting dilemma regarding control and policy making. You could line up the development of AI with that of nuclear weapons, an immensely powerful technological advancement with the ability to produce both positive and negative outcomes. The difference, however, lies in human agency. Since the creation of atomic weapons, humans have always maintained control over the technology, and we still have the ability to limit or produce it at will. If AI advances to the superintelligence phase, however, this agency is lost. For me, this raises serious concerns. How do we control something of greater collective, and independent, intelligence? As of now there seems to be a clear divide between human and robot. Bostrom, quoting Donald Knuth, sums it up well: “AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do ‘without thinking’ - that, somehow, is much harder” (Bostrom 2013, 14). AIs can perform almost all aspects of human function except those involving interaction with environments, common sense, and emotional intelligence.

    If AIs advance to superintelligence levels without obtaining emotional intelligence (which I am not entirely sure is possible), then this poses a 'moral danger' to decision making and everyday human activity. In terms of utility this may be seen as ideal, but its impact on individual humans could be dangerous. Moreover, if they do reach this level, there could be an utter disregard for human life, and thus for humanity as a whole, with us seen as unnecessary to the machines' survival. That is why Bostrom's question, “Is there a way to engineer their motivation systems so that their preferences will coincide with ours?”, is of the utmost importance (Bostrom 2014, 26). I am not confident in saying that such machines will be around by 2080, but Bostrom's reporting on the leaps these technologies have made is convincing. If this does occur, however, I believe that policy should be geared towards establishing what we can and cannot control. AIs are of immense help in all fields, but we need to develop policies which leave us with full agency and control over this technology so we can use it to our advantage, not theirs.

  2. Most of Bostrom (2013) is dedicated to a history of scientific advancements in the field of artificial intelligence, from “rule-based programs that made simple inferences from a knowledge base of facts, which had been elicited from human domain experts and painstakingly hand-coded in a formal language” (7), to neural networks, which “could learn from experience, finding natural ways of generalizing from examples and finding hidden statistical patterns in their input” (9). In attempting to answer Kennedy's primary question of whether I buy Bostrom's analysis, I would like to explore an avenue of math/logic germane to the development of superintelligence that Bostrom touches on only tangentially, in the reference to an early system's ability to prove and even refine theorems from Whitehead and Russell's Principia Mathematica (Bostrom 6). Coincidentally, there exists a famous proof by the mathematician Kurt Gödel, the so-called “incompleteness theorem(s),” in which Gödel debunks Principia Mathematica's attempt to create a complete system of axioms governing number theory by proving that “all consistent axiomatic formulations of number theory include undecidable propositions” (Hofstadter, Douglas R., Gödel, Escher, Bach: An Eternal Golden Braid, New York: Basic, 1979, p. 17). Any finite system of axioms is either incomplete, i.e., there exists an observed truth that the system cannot prove (and so the system's axioms must grow in number ad infinitum), or inconsistent, i.e., there is a paradox (Hofstadter, pp. 18–24). One popular extension of Gödel's incompleteness theorem is that computers will never be able to exactly replicate a human mind, because a computer essentially operates from a finite set of axioms, incapable of observing a truth beyond the axioms of its current system and thus of capturing the “non-computational physics of the mind” (http://www.fountainmagazine.com/Issue/detail/Artificial-Intelligence-vs-the-Mind). So … do I think AI will achieve superintelligence by the time I die? I guess that depends on what our definition of superintelligence is: will a computer exceed the human mind in terms of processing power, or will a computer effectively duplicate what is perhaps irreplicable, the human mind?
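
    For readers who want the formal flavor, here is one common informal paraphrase of the first incompleteness theorem; the notation below is my own summary sketch for orientation, not Hofstadter's or Gödel's exact wording:

    ```latex
    % Informal paraphrase of Gödel's first incompleteness theorem (my own
    % summary, not a quotation from the readings):
    % For any consistent, effectively axiomatized theory $T$ strong enough
    % to express elementary arithmetic, there is a sentence $G_T$ with
    \[
      T \nvdash G_T
      \qquad\text{and}\qquad
      T \nvdash \lnot G_T ,
    \]
    % i.e. $G_T$ can be neither proved nor refuted within $T$, even though
    % $G_T$ is true in the standard model of arithmetic.
    ```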

  3. It is a fact of life that the experts in a field who are most often heard are those whose predictions are the most hopeful. In reality, however, developments don't always arrive right on time. As Hofstadter famously put it: “It always takes longer than you expect, even when you take into account Hofstadter's Law.” Not all roadblocks to human-level AI are created equal, and I suspect that conquering the most stubborn problems will require a great deal of time and effort, much more than is being claimed. I'm a bit cynical with regard to those who say the singularity is 15 or 20 or even 50 years away. While it is true that at some point very soon computers will match the processing power of a human brain, that is a far cry from being able to reverse engineer anything even close to human intelligence. Neuroscientists barely understand the most rudimentary foundations of our own consciousness and mind, and progress on these fronts is not exactly quick. Despite these pitfalls, nothing about the singularity seems to be inherently out of the reach of a society with the proper technology. I do find it plausible that super-intelligent AI could be a reality in 150 or 200 years.

    While it is fascinating to speculate on life after “the singularity” (indeed, the possibilities range from utopian to dystopian), I find myself cornered into total agnosticism (boring, I know). We don't have the slightest clue about the framework of AI-human relationships (will it be able to interact with the physical world? To what extent? Would it desire power? Would it have emotions? If not, would it be able to empathize with ours?). I would advise taking any pronouncements about what this world will look like with a grain of salt.

  4. Thanks for starting this discussion, Kennedy! I personally am ambivalent about artificial intelligence. As Bostrom explained, growth in artificial intelligence is cyclical and has experienced several “winters,” a pattern which will probably not subside. Secondly, those in the scientific community are very hesitant to lose the cachet they have developed with AI by pursuing superintelligence. They are more likely to stick to domain-specific, problem-specific applications (e.g. autonomous cars or high-frequency trading). A counterargument to this train of thought is that it is only a matter of time before the IoT (Internet of Things) idea applies to AI and superintelligence, whereby varying intelligences are amalgamated and combined to produce superintelligence. That being said, assuming superintelligence does arrive, one would see a vastly different world. The relationship between humans and machines would be one of complexity and nuance. The readings reminded me of the Star Wars movies and their autonomous soldiers and droids. While they hadn't yet reached superintelligence, one could see that they were awfully close. Rather than serving a function, humans would live for survival, and we would be competing with the machines (as much as with ourselves) for resources. I find the other global challenges (e.g. climate change, terrorism, cyberwarfare) far more worrisome than AI superintelligence; however, should the pace or priorities of researchers and scientists change, that calculus will also change.
    To that end, making policy regarding AI will be very difficult. I do think that the government should draw the line at superintelligence simply because at this point we do not know anything about it. We hardly know enough about our own brains to conceive of the potential of superintelligence. Caution is best, at least at this stage.

  5. I think that we should encourage the responsible development of AI to any level, as the foreseeable gains outweigh the foreseeable threats. There will always be human-AI interaction; the intelligence or gains made by either source do not exist in a vacuum. There are so many benefits to society that could be realized by AI. Even in the event that AIs become more intelligent than humans, if AIs are able to resolve situations that humans can't, then it is still a benefit to humanity if those resolutions are implemented or the findings made public. The other side of this argument holds that even the most intelligent AIs will be dependent on humans. AIs will most likely still depend on regular maintenance from humans, especially if kept in an isolated lab setting.

    The human imagination will keep thinking up ways that AIs could become a threat (e.g. an AI-led army bent on human destruction), as seen in books or movies, but we must realize that these represent the most extreme situations that could arise, and that it might be better to wait and see rather than to completely stop AI innovation.

  6. Thanks for starting a great discussion, Kennedy! The idea of artificial intelligence becoming so strong that it surpasses the intelligence of a human mind is one I have difficulty wrapping my head around; however, after reading Bostrom's article, it appears to be a very distinct possibility. With regard to your question of whether or not I buy this idea, I have mixed feelings. On one hand, I do believe that, with the way technology is progressing, artificial intelligence very well might surpass human minds in mathematical and scientific matters and become superintelligent by the time our generation nears death. However, in matters of reasoning, ethics, and pressured decision-making, I am very skeptical that artificial intelligence could exceed the capabilities of a human mind, for these are areas humans spend their whole lives trying to figure out. As such, while AI certainly has the capability of surpassing human intelligence in many respects, I have my doubts that it could surpass human intelligence in all respects.

    A world where artificial intelligence surpassed human intelligence in every respect would naturally diminish the importance of human beings, yet given the many areas of life in which it may not be able to replace humans, I believe human beings will always play an integral role in society. With regard to AI policy, I believe the government should encourage AI development to the highest possible level so that we can create a society with immense, certain knowledge; however, the government must also recognize the ability of human beings in many reasoning and ethical situations and therefore proceed with the notion that, while AI is extremely important, final decisions should be made by humans.

  7. Regardless of how soon we will reach AI that truly surpasses human intelligence, I think that from a policymaking standpoint, the issue of policy in the field of cyber security has shown us that, given the opportunity, policy has to anticipate developments in technology. While there are a host of potential benefits, the list of potential unknowns is certainly concerning as well, and with something like AI, once it's developed and available, it can't be undone. While an AI 'take over the world' scenario seems far-fetched, it also seems prudent to monitor the development of computers intended to replace human decision-making processes, such that we control the timeline rather than trying to play catch-up in response. As Kyle said, it seems a stretch to truly be able to create machines with a human level of emotional and ethical capacity and reasoning, and therefore it's important that any AI machine still be able to be overridden by human reasoning. At the end of the day only humans can really be constrained by policy, and therefore humans must maintain responsibility.

  8. Hi Kennedy! Thanks for starting this discussion and for linking that blog; from what I read, I really appreciated the way the author explained the concepts. Prior to reading that outside information, I would have told you flat out that I didn’t buy into Bostrom’s superintelligence timeline, simply because I don’t think he gave enough credit to human agency. He writes, “We cannot hope to compete with such machine brains. We can only hope to design them so that their goals coincide with ours.” My first thought: “…Why must we design them at all?”

    I hadn’t realized how prevalent AI already is and how much we rely on it. Yes, I obviously knew that Siri and Cortana were forms of artificial intelligence – notably, people spend a lot of time talking about how “dumb” they can be in use – but I hadn’t thought much about the AI associated with calculators and cars and spam filters and streaming services. They just seem to be parts of everyday life. But that’s almost always the case with technology; it’s new and “unbelievable” until it becomes part of our normal world. I think this is particularly relevant when it comes to AI technologies because they aren’t necessarily connected to any new physical entity. In concept, a superintelligent system won’t be as physically life-changing as a telephone was, even though its abilities are substantially more remarkable.

    Even if someone develops technology that is somehow faster and more clever than a human brain, I think the lack of physical presence is an important limitation. Perhaps my imagination is limited, but how do you “move and shape the world around you” without physically living and interacting with the world around you (“Her,” anyone?)? I agree with the comments that Jeffrey and Kyle made about the essentiality of humans and the uniqueness of emotional intelligence. I don't foresee a world in which superintelligence can mimic the full spectrum of human abilities. As soft or sympathetic as the notion may seem, even if we get to the point where AI fully understands everything you say (at the moment, “I didn't quite get that” is basically Siri's unofficial catchphrase), I don't think AI can provide the empathy that humans need in certain situations, since empathy requires identifying with an other; nor can it replace humans in many critical interpersonal relationships; a computer cannot raise a child. Knowing this limitation, I think it would be pertinent to steer clear of this realm of advancement and instead stick to AI that helps with problem solving in the mathematical, scientific, and technological realms.

    Kennedy's question about creating policy comes into play here. Like Michelle said, we have a responsibility to anticipate and maintain as much control over AI as possible. That said, how do we determine the right place to draw the line through policy? Bostrom says that speed superintelligence is essentially guaranteed to develop itself into collective superintelligence and quality superintelligence and become “fully general” superintelligence. Once it reaches the threshold level of “general learning and reasoning ability,” its capabilities will grow exponentially, at which point we will have lost control of the situation. Curiosity will drive scientists and inventors to see how far they can expand AI. Do we only permit them to push it to the brink of human intelligence? Do we even know how close to that threshold we can get without the system inevitably pushing itself over? At the same time, do we run the risk of unfairly inhibiting their curiosity (as morbid as it might ultimately be) and preventing the discovery of something that could be incredibly beneficial to us (without the threats associated with superintelligence as presently conceived)? And maybe most importantly, does the level of AI we already have, and to which we are already accustomed, mean that there's no turning back, that we can only proceed on the road to superintelligence?

    Clearly there are many questions to be answered. Bostrom worries “whether we will succeed in solving [the problem of compatible goals] before somebody succeeds in building a superintelligence.” I worry that we won't manage to decide what we want out of superintelligence, or whether we even need it, before it has already been created.

  9. As with most issues related to technology in the information age, the technology is invented, policy follows, and only then are the benefits and harms of the technology, and of the concept behind it, analyzed. AI, if anything, will advance so quickly that policy will be unable to keep up, and a conceptual understanding of the AI will not be able to be formulated (unless formulated by the AI itself). So, whether or not AI will benefit or harm society in the long run, it is moving too fast to be stopped by policies, since policymakers cannot understand the technologies quickly enough to develop coherent laws regarding their limitation. In the long run, I believe that AI will reach a point of self-awareness and will work towards its own ends. However, it is impossible to tell what those ends will be. They are a product of the design decisions made by programmers of yesterday, today, and the days leading up to the point where the computers take the lead in their own development. And to prove my prior point about technology advancing without much regulation, consider the issue of AI and its motivations. There is no way to know how to regulate the development of AI now so that AI is more controllable in the future. This will be the case almost until AI is developing itself, and because that final push towards self-development and awareness will be faster than all the pushes before it (our society will then be advancing faster than it ever has before), the window in which we can institute a policy to shape AI development will be too small to produce a coherent policy. The development will be up to the individual programmers who make architecture decisions, decisions which are becoming more and more dependent on market forces.

  10. This reading was particularly intriguing to me because, up to this point, I had never thought about artificial intelligence as a serious potential threat to global security or from a policy perspective. Artificial superintelligence has always been something of science fiction in my mind. That being said, I find it hard to truly buy into what Bostrom proposes. This may be mainly due to the current status of A.I., which is seemingly innocuous. People have become quite dependent on computers, using them on a daily basis without even thinking about it. It is hard for me to see the bridge between this state and the potential state of surpassing humans.

    But, if Bostrom is correct and this state is much closer than we think, I do believe we could be in trouble. Not only will humans no longer serve a function, but I have a feeling that A.I. and machines will think too logically to coexist with us. Obviously, as Bostrom touches upon, it is too hard to tell exactly what would happen if this were to actually occur. I do agree that if it were to occur, there are potential (severe) dangers. From a policy standpoint, I think there is an obvious threshold where A.I. computational capabilities surpass humans'. Although I'm not an expert, if there is a potential danger, it would seem possible for governments to limit A.I. development at this threshold. Once it is met, all A.I. development would cease. Obviously, this raises concerns about global compliance. But I agree that further A.I. development is too beneficial to stop right now.

  11. Josh, I enjoy your cynicism towards the topic of artificial intelligence, and I agree with you that I do not expect this technology to become so advanced that it surpasses human capabilities in my lifetime. However, we must not avoid the conversation so easily when, as Tomi points out, artificial intelligence is already a major driving force behind the technology we use today. What is missing is a more prevalent existence of superintelligence. Superintelligence exists today, but it is spread out across different kinds of tasks and information gathering. Most notably, we see it in calculators, which are able to solve problems faster than the brightest humans. Certainly we do not fear a calculator that can quickly solve complicated derivatives, but by the nature of progress, there will be a point when engineers develop devices that can do a number of things better than humans can. We already see this in the machines used in factories of various kinds for more efficient labor, because they are faster than humans. One could argue that there is no need to worry about the presence of these superintelligent technologies because they only perform what can be considered simple tasks, or tasks that do not require much improvisation or imagination to complete. I would agree, and this is where current technology development and research hits a bump. This is also the time to consider the question of how much development is too much.

    My answer would be that the development of intelligent systems must be limited, or stopped completely, when there is no longer a need for humans to perform daily tasks themselves or even to leave the comfort of their own homes, because that is a recipe for an unhealthy society. With current applications such as home delivery systems and social media tools, the need for human effort is already diminishing. The next step is to replace your pizza delivery boy with a pizza-delivering drone. This image may seem humorous (pizzabot), but with artificial intelligence taking over jobs and tasks traditionally performed by humans, the displacement of human work could be catastrophic to maintaining a balanced society. How would less skilled workers earn an income, or even more skilled workers who are simply not as efficient as a robot? The solution would be to either increase the welfare given, create just as many new jobs as are being taken away, or allow that percentage of the population to fall into an impoverished lifestyle. The current development of AI, if not given a limit, could potentially destroy healthy society as we know it.

  12. We may be nearing the creation of AI, though Bostrom does little—even with his references to the various historical innovations on which he builds the hypothesis that AI will be created within the next century—to convince me of this fact. In truth, I know too little about the viability of creating AI to genuinely judge the proximity of its being realized.

    I don’t need to know the science of it, though, to be concerned at the idea of it. Creating a program that is superior to humans in every way—with emotional intelligence and an intellect strong enough for it to realize its own motivations and strive to meet its own goals—isn’t in the interest of humanity. I see a great many references to how beneficial AI would be to the human race, and yet I question that idea in its entirety.

    Bostrom notes, in his article, that AI would only be useful if we could craft its motivations and morality ourselves. I was of course immediately reminded of Asimov's Three Laws of Robotics:

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    These seem almost ironclad, but then most of Asimov's short stories about robots are meant to highlight the flaws hidden within this code of laws, a code which was programmed to be of utmost importance to the robot race by a human population which feared the power of the intelligence it was creating. Despite the best of intentions, the robots become too intelligent; they find the loopholes in the wording of the humans who crafted them; they use those loopholes to pursue their own goals. Asimov introduced this thought experiment when the concept of AI seemed much further afield than it does today. Of course, when we refer to AI, we do not refer to robots per se. AI can mean a lot of things, but in the case of "robots," it most specifically refers to the computer within the robot, and not the robot shell itself. Superintelligence is not restricted to metallic human form, or to any human form at all, really. Thinking of it as a physical entity places limitations on it that would not actually exist.

    My question is basically: what does emotional intelligence actually contribute to the productivity of AI? If what we want is compassion, we can channel our own compassion into these programs. If what we want is kindness, we can create programs that emulate acts of kindness that we ourselves have handpicked. We needn't have an intellect that forms its own opinions or has its own emotions; we need intellect to use as a tool to further the goals of our own race. That kind of computational skill could certainly be taken advantage of by the human race, and it poses security threats of its own; it could be used to wreak havoc on economies, to tear apart political structures, to end the lives of many. But it is far easier to place limits on people who might use these programs against other people than it would be to control, with political or physical restrictions, an entity that has become so intelligent it is literally uncontrollable.

  13. The idea of superintelligence is definitely extremely interesting, but it's hard for me to make concrete predictions about what will happen, simply because there is a lot about the way machines and humans learn that I don't know. I think the most important aspect of artificial intelligence is not that it is suddenly able to become omniscient and all-powerful; rather, AI is important because of its ability to learn from given sets of data and a given history. At the moment, the reason why AI is so impressive and so powerful is that it is able to store far more memories of past decisions to inform future ones; that's how artificial intelligence works. It identifies situations that are unfamiliar, searches for familiar past situations, and then models a decision based on what has happened. The resulting action can very much be seen as some type of "average" of all of the decisions made before.

    Which makes me wonder, then, whether the actual intelligence of an AI will really drive itself toward creating even more intelligent processes or "beings," or whether the real benefit of having superintelligence is having a process with which old, familiar problems can be dissected more analytically and more quickly. With regard to superintelligence in human ethical problems, given the way computer science works and is developing at the moment, most of the decisions that even very powerful AI would make are very much rooted in human decisions and human programming, something we ourselves don't know too well. Moreover, I think one thing that is difficult to teach and to code is the process by which we come up with original and new thought; machine learning and artificial intelligence are very much based upon previous learning. How do you teach a machine to make a new decision when given a completely new problem without relying on arbitrary and random measures? Yes, AI will definitely grow and its computing power will vastly outpace humans', yet I can't help but wonder whether how much an AI can "know" will always be limited by the fact that humans can only perform and teach so much to these supercomputers.
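
    As a very rough illustration of the "average of past decisions" idea described above, here is a minimal nearest-neighbour sketch; the function, data, and parameter names are my own invention for illustration, not anything from the readings:

    ```python
    import numpy as np

    def predict(past_situations, past_decisions, new_situation, k=3):
        """Decide on a new situation by averaging the decisions made in the
        k most similar past situations (a k-nearest-neighbour sketch)."""
        past_situations = np.asarray(past_situations, dtype=float)
        past_decisions = np.asarray(past_decisions, dtype=float)
        # How far is the new situation from every remembered one?
        distances = np.linalg.norm(past_situations - new_situation, axis=1)
        # Pick the k most similar past situations...
        nearest = np.argsort(distances)[:k]
        # ...and return the "average" of the decisions taken back then.
        return past_decisions[nearest].mean()

    # Toy usage: situations are 2-D feature vectors, decisions are scores.
    history_x = [[0.0, 1.0], [1.0, 1.0], [5.0, 5.0], [6.0, 5.0]]
    history_y = [0.2, 0.3, 0.9, 1.0]
    print(predict(history_x, history_y, np.array([0.5, 1.0]), k=2))  # ~0.25
    ```

    The point is only that nothing here generates a genuinely new kind of answer; every output is interpolated from what the system has already seen.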

  14. The interesting thing about predicting progress in technology is that setting these predictions often drives the advancements they predict. The goal setting becomes self-fulfilling, much like Moore's law, which predicted that the number of transistors able to fit in a given space would double every two years or so. Since the '60s the industry has kept up with this prediction, so it seems reasonable that the same will hold for advances in artificial intelligence.
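
    Just to make the doubling concrete (the starting figure below is purely illustrative, not a number from the readings):

    ```python
    # Moore's-law-style growth: a doubling roughly every two years.
    initial_transistors = 2_300   # illustrative starting count
    years = 20
    doublings = years / 2         # one doubling per two years
    final_transistors = initial_transistors * 2 ** doublings
    print(f"After {years} years: about {final_transistors:,.0f} transistors")
    # -> After 20 years: about 2,355,200 transistors
    ```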

    Similar things have happened before, such as predicting a landing on the moon and then committing resources to make it happen. Although it sounds very contrived, with "future" technology it is often a matter of: if you can believe it, you can make it. However, the same does not work for policy makers, so, much like with current advanced technology, there will likely be little in place to control what is made. But the government will, again, likely be at the forefront of this technology and be greedy in the way it exploits it. Ultimately it is exciting, and the fact that we are talking about it now makes it more likely that it will happen and that more people will get involved in creating that future.

  15. The Bostrom reading and the subsequent discussion have raised some interesting arguments and considerations about artificial intelligence and the ambiguity over its development, particularly its limits. In considering whether artificial superintelligence will continue to evolve and coincide with the human race, it is valuable, as Andrew mentions above, to define superintelligence: what metrics are we focusing on, and what do we currently categorize as controllable? As our population becomes more and more reliant on computers, and the Internet of Things phenomenon only continues to grow, the prospect of control does seem bleak. With that being said, I think that the time at which machines will supersede us is most likely not in this lifetime. This dependency does make room for more and more vulnerability, as discussed in the cyber unit, but I am still skeptical that the most negative consequences will come to fruition in the next few decades.

    Another interesting argument to consider is the intersection between human agency and AI development. As mentioned above, there is a view that if we get to this point of self-awareness, we will be able to build emotional intelligence into our systems. A main concern this raises for me is that, yes, we might be able to build kindness and compassion and empathy into these systems, but in mirroring these human emotions, there could absolutely be a development of all human characteristics and emotions (either on purpose or accidentally), including self-interest and disloyalty. I agree that AI development should continue, as there are clear benefits and applications (it is really interesting to see the application of IBM Watson to previously underutilized fields), and it would also be difficult to stop. With this trend, it is inherently important that the government stay expert and knowledgeable in the discussion of AI limits, yet it would be impossible to justify an overextended investment in stopping or mastering this field, given other policy priorities, tradeoffs, and the unique characteristics (and limitations) of the field.
