Eliezer, you’re assuming a very specific type of AI here. There are at least three different types, each with its own challenges:
1. An AI created by clever programmers who grasp the fundamentals of intelligence.
2. An AI evolved in iterative simulations.
3. An AI based on modeling human intelligence, simulating our neural interactions using future neuroscience.
Type 1 is dangerous because it will interpret whatever instructions it is given literally and, as you say, has "no ghost." Type 2 is possibly the most dangerous because we will have no idea how it actually works. There are already experiments that evolve circuits that perform specific tasks but whose actual workings are not understood. In Type 3, we actually can anthropomorphize the AI, but it’s dangerous because the AI is basically a person and has all the problems of a person.
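To make the evolved-circuits point concrete, here is a minimal sketch of that kind of evolutionary loop; the genome encoding, fitness task, and parameters are illustrative placeholders, not any specific experiment:

```python
# A minimal sketch of the "Type 2" approach: evolve candidate solutions by
# mutation and selection. Stand-in task: maximize the number of 1-bits.
# A real experiment (e.g. evolved circuits) would score behavior on hardware.
import random

GENOME_LEN = 32
POP_SIZE = 50
GENERATIONS = 200

def fitness(genome):
    return sum(genome)

def mutate(genome, rate=0.02):
    # Flip each bit with a small probability.
    return [bit ^ (random.random() < rate) for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]          # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

best = max(population, key=fitness)
print(fitness(best), best)
# Note: nothing in this loop explains *why* the winning genome works;
# that opacity is exactly the danger described above.
```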
Given current trends, it seems to me that slow progress is being made towards Type 2 and Type 3; Type 1 has stymied us for many years.
Ben,
You say "raw processing power isn't the crucial element." I said that speed "is irrelevant to displaying intelligent thought." We're actually saying pretty much the same thing! All I was really trying to argue was that phrases like "the speed of transistors" need to be replaced with phrases like "the accuracy, retention, and flexibility of transistors." I was -not- trying to argue against the principle that being able to turn the product of a process back onto improving that process will result in an exponential growth curve of both intelligence and productivity.
We get plenty of calculation power out of the meat in our brains, but it is unfocused, inaccurate, biased, and forgetful. Performing lots of "flops" is not our weakness. The reason that recursive self-improvement is possible in transistor-based entities has nothing to do with speed--that's the only point that I'm trying to make.
We should be wary not because the machine can think thoughts faster than we can, but because it can think thoughts -better- than we can.
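To illustrate why the feedback loop, not the clock, drives the curve, here is a toy model; the proportional-improvement assumption and the constants are purely illustrative:

```python
# A toy model of recursive self-improvement, assuming (as a deliberate
# simplification) that the rate of improvement is proportional to current
# capability: dC/dt = k * C, which integrates to exponential growth.
# "clock_speed" only rescales time; it doesn't change the curve's shape.

def capability_after(steps, clock_speed=1.0, k=0.01, c0=1.0):
    c = c0
    for _ in range(int(steps * clock_speed)):
        c += k * c   # the system's output is fed back into improving itself
    return c

print(capability_after(1000))                  # baseline hardware
print(capability_after(1000, clock_speed=10))  # 10x faster transistors reach
# the same exponential sooner; the *mechanism* of the growth is the feedback
# loop, not the raw speed.
```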
" . . .if you want to know why I might be reluctant to extend the graph of biological and economic growth over time, into the future and over the horizon of an AI that thinks at transistor speeds and invents self-replicating molecular nanofactories and improves its own source code . . ."
Machine intelligence has long been measured in terms of raw calculation speed. There is plenty of processing power available. If I handed AI researchers a computer an order of magnitude faster than what they were working on, their failures would certainly fail faster, which is an advantage, but there's no reason to think that they would necessarily be able to create an AGI immediately. If we knew how to code an AGI, we could do it today and run it on slower machines. Sure, it might take 10-100 times as long to think as machines that we will have in a few years, but that is irrelevant to displaying intelligent thought.
The main advantage of transistor technology is consistency, not speed: transistors don't forget. Absolute knowledge retention is the advantage that Deep Blue has over human chess players. The speed element simply makes it possible to play at a pace that doesn't bore its human opponents.
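As a concrete illustration of the retention point, here is a sketch of the kind of lookup table a chess engine uses; the position strings and the evaluation function are hypothetical placeholders:

```python
# A sketch of the retention point: a chess engine's transposition table is
# just a dictionary that never forgets a position it has evaluated.

transposition_table = {}   # position key -> evaluation, kept forever

def expensive_search(position_key):
    # Placeholder for the engine's actual search; irrelevant to the point.
    return hash(position_key) % 100

def evaluate(position_key):
    if position_key in transposition_table:
        return transposition_table[position_key]   # perfect recall
    score = expensive_search(position_key)         # compute once
    transposition_table[position_key] = score
    return score

# A human master re-derives or half-remembers; the machine retrieves the
# exact prior result every time, at any speed.
print(evaluate("r1bqkbnr-example-position"))
print(evaluate("r1bqkbnr-example-position"))   # second call: instant, identical
```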
Of course, it may be that human-like intelligence and creativity require a sort of messiness. I worry that absolute precision and (human-style) creativity are somewhat incompatible, at least in a single entity. Undoubtedly, however, an AGI could at least be constructed that is much better at both than we are.