Is intelligence explosion necessary for doomsday?
post by Swimmy · 2012-03-12T21:12:07.369Z · LW · GW · Legacy · 6 comments
I searched for articles on the topic and couldn't find any.
It seems to me that an intelligence explosion makes human annihilation much more likely, since superintelligences would certainly be able to outwit humans, but that a human-level intelligence that could merely process information much faster than humans would still be a serious threat on its own, without any upgrading. It could discover programmable nanomachines long before humans do, gather enough information to predict how humans will act, and so on. We already know that a human-level intelligence can "escape from the box." Not 100% of the time, but a real AI would have the opportunity for many more trials, and its processing speed should make it far more quick-witted than we are.
I think a non-friendly AI would only need to be 20 years or so more advanced than the rest of humanity to pose a major threat, especially if self-replicating nanomachines are possible. Skeptics of intelligence explosion should still be worried about the creation of computers with unfriendly goal systems. What am I missing?
6 comments
Comments sorted by top scores.
comment by Vladimir_Nesov · 2012-03-19T18:21:55.891Z · LW(p) · GW(p)
Skeptics of intelligence explosion should still be worried about the creation of computers with unfriendly goal systems.
I agree; intelligence explosion seems to be irrelevant in motivating FAI. It increases the urgency of the problem, but not dramatically, since WBE sets that time limit.
comment by Viliam_Bur · 2012-03-13T11:51:09.252Z · LW(p) · GW(p)
I think a non-friendly AI would only need to be 20 years or so more advanced than the rest of humanity to pose a major threat, especially if self-replicating nanomachines are possible.
Did you mean 20 human-years more advanced? Because an intelligence that could process information much faster than humans could possibly reach that level in a week or in a minute; it depends on how much faster it would be. We might underestimate its speed if it were somewhat clumsy at the beginning and then learned better. Also, if it escapes, it can gather resources to build more copies of itself, thus accelerating even more.
comment by [deleted] · 2012-03-13T01:34:26.268Z · LW(p) · GW(p)
I'm really not sure a human-level AI would have all that much of an advantage when it comes to developing technology at an accelerated rate, even on dramatically accelerated subjective timescales. Even in relatively narrow fields like nanotechnology, there are thousands of people investing a lot of time into working on it, not to mention all the people working in disparate disciplines which feed intellectual capital into the field. That's likely tens or hundreds of thousands of man-hours a day invested, plus access to the materials needed to run experiments. Keep in mind that your AI is limited by the speed at which experiments can be run in the real world, and must devote a significant portion of its time to unrelated intellectual labor in order to fund both its own operation and its real-world experiments. In order to outpace human research under these constraints, the AI would need to be operating on timescales so fast that they may be physically unrealistic.
In short, I would say it's likely that your AI would perform extremely well in intelligence tests against any single human, provided it were willing to do the grunt work of really thinking about every decision. I just don't think it could outpace humanity.
comment by Giles · 2012-03-13T03:36:47.590Z · LW(p) · GW(p)
Yes. My (admittedly poor) judgment is that while I'd certainly be crushed by an unfriendly intelligence explosion, I probably wouldn't survive long in something like Robin Hanson's "em world" either.
But answering the intelligence explosion question becomes important when it comes to strategies for surviving the development of above-human-level AI. If unfriendly intelligence explosions are likely, that severely limits which strategies will work. If friendly intelligence explosions are possible, that suggests a strategy which might work.
comment by faul_sname · 2012-03-12T22:24:42.012Z · LW(p) · GW(p)
I'm pretty sure it wouldn't need to be nearly that advanced. A few modestly intelligent humans without ethical restrictions could already do an enormous amount of harm, and it is entirely possible that they could cause human extinction. Gwern has written some truly excellent material on the subject, if you're interested.