On Superintelligence

First, for anyone who is a little lost, wants a simpler explanation, or is just really interested in the topic, I found a funny, detailed blog post with graphics and examples that explain AI and superintelligence pretty well, from what I can tell.

waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

It also has this graphic which I think articulates some of the ideas from Bostrom’s article in a visual way.

[Graphic from the post: the exponential growth of computing power over time]

In his article, Bostrom describes a coming moment at which artificial intelligence will surpass the intelligence of the human mind. This moment, Bostrom stresses, is both closer than we think and incredibly dangerous. At that point, an AI will be able to improve and replicate itself, and an intelligence explosion will occur. The biggest question is whether the goals of such an AI will coincide with the goals of the human race. Bostrom hopes they will, but fears what would happen if they don't.
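To make the feedback loop concrete, here is a toy numeric sketch (my own illustration, not anything from Bostrom's article): if each "generation" of an AI improves its successor in proportion to its own capability, capability compounds, and after a modest number of generations it dwarfs the starting point. The gain rate `r` is an arbitrary, hypothetical number chosen just to show the shape of the growth.

```python
# Toy model of recursive self-improvement (purely illustrative).
# Assumption: each generation improves the next in proportion to
# its own capability, with a hypothetical gain factor r per generation.
capability = 1.0   # human-level baseline (arbitrary units)
r = 0.5            # assumed improvement rate per generation

for generation in range(1, 21):
    capability *= (1 + r)  # compound growth: c_n = (1 + r)^n
    print(f"generation {generation:2d}: {capability:8.1f}x human baseline")

# After 20 generations at r = 0.5, capability is roughly 3,325x the
# baseline; the "explosion" in Bostrom's argument is this compounding,
# not any single step.
```

Nothing about the real world guarantees that growth rate, of course; the sketch only shows why self-improvement, if it happens at all, could be abrupt rather than gradual.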

I have several questions. First, do you buy it? Do you believe that by the time our generation is nearing death (2060-2080), AI will have become superintelligent? If so, what would the implications of such a world be? If AI is capable of performing all work, would human beings serve any real function at all?

Also, how do we make policy regarding AI? Should the government draw the line at superintelligence and only allow AI systems up to that point? Or should we encourage the responsible development of AI to any level? — Kennedy