[SEQ RERUN] AI Go Foom

post by MinibearRex · 2012-11-09T05:37:00.426Z · LW · GW · Legacy · 8 comments

Today's post, AI Go Foom, was originally published on November 19, 2008. A summary:


Robin Hanson's attempt to summarize Eliezer Yudkowsky's position.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Whence Your Abstractions?, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

8 comments

Comments sorted by top scores.

comment by RolfAndreassen · 2012-11-09T17:27:36.669Z · LW(p) · GW(p)

Hum. Suppose that increasing the intelligence of an AI requires a series of insights, or searches through design space, or however you want to phrase it. The FOOM then seems to assume that each insight is of roughly equal difficulty, or at least that the difficulty does not increase as rapidly as the intelligence does. But it is not obvious that the jump from Arbitrary Intelligence Level 2 to 3 requires an insight of the same difficulty as the jump from 3 to 4. In fact, intuitively it seems that jumps from N to N+1 are easier than jumps from N+1 to N+2. (It is not immediately obvious to me what the human intelligence distribution implies about this. We don't even know, strictly speaking, that it's a bell curve, although it does seem to have a fat middle.)

If, to take a hypothetical example, each jump doubles in difficulty but gives only a linear increase in intelligence, then the process won't FOOM at all - it will level off toward a horizontal asymptote, albeit perhaps at a level far above a genius human's. Even if the difficulty increases only linearly while granting a linear increase in intelligence, that merely keeps the time required for each jump constant. That doesn't rule out arbitrarily intelligent AIs, but it does mean the growth is steady rather than explosive - no vertical asymptote. (Depending on the time constant, it could even be uninteresting. If it takes the AI ten years to generate the insights to increase its IQ by one point, and it starts at 100, then we'll be waiting a while.)

Now, neither of those possibilities is especially likely. But if we take the increase in difficulty per level as x, and the increase in intelligence per level as y, and the time to the next insight as proportional to (x/y), then what reason do we have to believe that x < y? (Or, if they're roughly equal, that the constant of proportionality is small.)
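
To make this concrete, here is a toy sketch of the dynamics (the numbers are arbitrary and the model is only an illustration of the argument above): assume the n-th insight has some difficulty d(n), each insight adds a fixed amount of intelligence, and the time to find an insight scales like d(n) divided by the AI's current intelligence.

```python
# Toy model of the argument above (arbitrary numbers; only the growth regimes
# matter): time for the n-th insight ~ difficulty(n) / current intelligence,
# and each insight adds a fixed amount of intelligence.

def run(difficulty, jumps=40, start=100.0, gain=10.0):
    intelligence, elapsed, history = start, 0.0, []
    for n in range(1, jumps + 1):
        elapsed += difficulty(n) / intelligence  # smarter AI -> faster insights
        intelligence += gain                     # fixed payoff per insight
        history.append((round(elapsed, 2), intelligence))
    return history

doubling = run(lambda n: 100.0 * 2 ** n)  # difficulty doubles per level: gaps
                                          # between jumps explode, growth in
                                          # time flattens out - no FOOM
linear = run(lambda n: 100.0 * n)         # difficulty keeps pace with
                                          # intelligence: roughly constant time
                                          # per jump, steady linear growth
flat = run(lambda n: 100.0)               # difficulty fixed while intelligence
                                          # climbs: jumps arrive faster and
                                          # faster - the FOOM-like regime
print(doubling[-1], linear[-1], flat[-1])
```

Which regime you get is exactly the x-versus-y question: whether difficulty grows slower than, about as fast as, or faster than the intelligence available to attack it.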

Replies from: bsterrett, Giles, MinibearRex
comment by bsterrett · 2012-11-09T19:30:09.082Z · LW(p) · GW(p)

Eliezer's stated reason, as I understand it, is that evolution's work to increase the performance of the human brain did not suffer diminishing returns on the path from roughly chimpanzee-level brains to current human brains. If anything, there was probably a slightly greater-than-linear increase in human intelligence per unit of evolutionary time.

If we also assume that evolution did not apply increasing optimization pressure that could account for the nonlinear trend (an assumption worth exploring; I believe Tim Tyler would deny it), then this suggests that the slope of 'intelligence gained per unit of optimization pressure applied' is steep around the level of human intelligence, from the perspective of a process improving an intelligent entity. I am not sure this translates perfectly into your formulation with x's and y's, but I think it is a sufficiently illustrative answer to your question. It is not a very concrete reason to believe Eliezer's conclusion, but it is suggestive.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2012-11-10T19:58:03.544Z · LW(p) · GW(p)

Two objections to this. First, you have to extrapolate from the chimp-to-human range into the superintelligence range, and the gradient may not be the same in the two. Second, it seems to me that the more intelligent humans become, the more "the other humans in my tribe" becomes the dominant part of each individual's environment; this raises the returns to intelligence, and consequently you do get increasing optimisation pressure.

Replies from: bsterrett
comment by bsterrett · 2012-11-10T21:12:07.576Z · LW(p) · GW(p)

To your first objection: I agree that "the gradient may not be the same in the two" when comparing chimp-to-human growth with human-to-superintelligence growth. But Eliezer's stated reason mostly applies to the region near human intelligence, as I said. There is no consensus on how far the "steep" area extends, so I think your doubt is justified.

Your second objection also sounds reasonable to me, but I don't know enough about evolution to confidently endorse or dispute it. It sounds similar to a point Tim Tyler tries to make repeatedly in this sequence, though I haven't investigated his views thoroughly. I believe his stance is roughly this: since humans select mates using their brains, intelligence is so necessary for human survival, and sexually reproducing organisms want to pick fit mates, there has been a nontrivial feedback loop from humans using their intelligence to select intelligent mates. Do you endorse this? (I am not sure, myself.)

comment by Giles · 2012-11-10T04:05:01.940Z · LW(p) · GW(p)

One way to imagine what might be going on inside an AI is that it's essentially running a bunch of algorithms. One important class of insights is coming up with new algorithms that do the same job with lower complexity. Small improvements in complexity can lead to big improvements in performance if the problem instances are big enough. (On the other hand, the AI might be limited by the speed of its slowest algorithm.) The history of computer science may give data on how much of an improvement in complexity you get for a certain amount of effort.
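
For illustration, a toy example of this kind of win (the task and input sizes are arbitrary): finding the closest pair of values in a list by brute force in O(n^2) versus by sorting first in O(n log n).

```python
# Toy illustration: the same job done at O(n^2) and at O(n log n).
import random, time

def closest_gap_quadratic(xs):
    # Compare every pair of values: O(n^2)
    return min(abs(a - b) for i, a in enumerate(xs) for b in xs[i + 1:])

def closest_gap_sorted(xs):
    # Sort once; in sorted order the closest pair must be adjacent: O(n log n)
    s = sorted(xs)
    return min(b - a for a, b in zip(s, s[1:]))

xs = [random.random() for _ in range(4000)]
for f in (closest_gap_quadratic, closest_gap_sorted):
    t0 = time.perf_counter()
    f(xs)
    print(f.__name__, round(time.perf_counter() - t0, 4), "seconds")

# Already at n = 4000 the gap is large; at n in the millions the quadratic
# version is effectively unusable while the sorted one still finishes quickly.
```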

I'm not sure if this kind of self-improvement is sufficient for FOOM, though - the AI might also try entirely new algorithms and approaches to problems. I don't have much of a feeling for how important that would be or how often it would happen (and it would be pretty difficult to analyze the history of human science/tech/economics for those kinds of events).

comment by MinibearRex · 2012-11-09T18:51:57.481Z · LW(p) · GW(p)

As the problems get more difficult, or require more optimization, the AI has more optimization power available. That might or might not be enough to compensate for the increase in difficulty.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2012-11-09T19:43:36.165Z · LW(p) · GW(p)

Yes, that's what I said! The point is that, if we are to see accelerating intelligence, the increase in optimising power must more than compensate for the increase in difficulty at every level, or at least on average.

comment by Luke_A_Somers · 2012-11-09T15:43:51.114Z · LW(p) · GW(p)

Wow, Phil. You manage to make saving the universe from a paperclipper sound so bad! If there's a slow takeoff and malign AIs keep cropping up and they're so hardened that people have to die to prevent them from escaping... then yes, people will die, one way or another. But your way of putting it makes it seem like that's Eliezer's fault. It's not CEV's fault that someone's about to make a paperclipper.

It's like saying, "Having a disaster preparedness plan will kill people, because it prioritizes saving the community over individuals who are in poor positions to escape."

Well, yes. When a tsunami comes and you have a disaster preparedness plan, some people will die. But they won't all die, because you went and did something about it.

...

And the transported-back-to-1200 thread is ridiculous. In that scenario, you have just been stripped of all of the infrastructure required to bring your situation about. It's completely 100% incomparable to anything on-topic.