Why Stephen Hawking and Bill Gates Are Terrified of Artificial Intelligence by James Barrat is an article looking to explain why some prominent scientists and leaders in the high-tech industry have voiced concerns over super-intelligence lately. It does a pretty good job of conveying the main potential issues with exponentially growing AI, and of pointing out what we need in order to address them: ethics. From Barrat's article:
"With few exceptions, they're developing products, not exploring safety and ethics."
To me that means philosophers with hacking skills and a transparent society. Not only a transparent government (which seems to be the main focus right now), but a transparent industry as well. I got the above article from David Brin's list of potential game changers for the near future (20150427), and as Brin points out:
"'Skynet' won't come out of the military. It will come out of the portions of our economy that win every political battle and every tax break."
I agree. Brin half-jokingly (?) rounds off with:
"Indeed, what better clue that our AI overlords have already… come awake?"
I concur: something 'alive' looking to gain energy would amass resources to grow more powerful, doing so by optimizing the environment in which it operates. That is what we do. However, I would go further in some sense (though Brin might be after this as well) and return to one of my old arguments: an AI is nothing more (or less) than a sufficiently complex system. If a computer can become intelligent (I prefer to think of it as alive and self-preserving to some degree), then so can any system capable of computation. How about an organization?
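To make the "any system capable of computation" point concrete, here is a toy sketch of my own (not from Barrat or Brin, and the clerk/memo framing is purely hypothetical): an organization whose clerks each follow one fixed memo-approval rule. If each clerk behaves like a NAND gate, then wiring clerks together suffices for any Boolean computation, since NAND is universal.

```python
def clerk_nand(memo_a, memo_b):
    """A clerk who approves (True) unless both incoming memos approve."""
    return not (memo_a and memo_b)

def org_xor(a, b):
    """XOR computed entirely by NAND-clerks forwarding memos to one another."""
    c = clerk_nand(a, b)
    return clerk_nand(clerk_nand(a, c), clerk_nand(b, c))

# The paper-shuffling bureaucracy reproduces the XOR truth table exactly.
for a in (False, True):
    for b in (False, True):
        assert org_xor(a, b) == (a ^ b)
```

The point of the sketch is only that rule-following memo traffic is computation; whether an organization is complex enough, or fast enough, to matter is a separate question.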
Now, of course, there may well be limits of time and memory preventing just any computing system from reaching AI capability. Or perhaps the limit is simply our own time frame, which keeps us from recognizing these processes as 'intelligent'.
Yet I still think we look too much at pure technology, because it is the best candidate. Or so we think. But how about the combination of computers and humans? Could that not be greater than the sum of its parts? Not to mention that a decent chunk of humanity is already so dependent on technology and complex logistic rule systems for survival that we might already call them posthuman.
I am not arguing that we should not heed these warnings. On the contrary, they are a genuine indication that we deserve to see ourselves as intelligent, instead of as mere cogs in a machine (which of course might itself be intelligent by this theory, but that is another story).
No, no. I am just entertaining the notion that the point of no return might not lie with a self-programming computer in the future, but with events in the past: the loom, the railroad, the union, the TV dinner.