The Weak Inside View

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-11-18T18:37:33.000Z · LW · GW · Legacy · 22 comments

Followup to:   The Outside View's Domain

When I met Robin in Oxford for a recent conference, we had a preliminary discussion on the Singularity - this is where Robin suggested using production functions.  And at one point Robin said something like, "Well, let's see whether your theory's predictions fit previously observed growth rate curves," which surprised me, because I'd never thought of that at all.

It had never occurred to me that my view of optimization ought to produce quantitative predictions.  It seemed like something only an economist would try to do, as 'twere.  (In case it's not clear, sentence 1 is self-deprecating and sentence 2 is a compliment to Robin.  --EY)

Looking back, it's not that I made a choice to deal only in qualitative predictions, but that it didn't really occur to me to do it any other way.

Perhaps I'm prejudiced against the Kurzweilian crowd, and their Laws of Accelerating Change and the like.  Way back in the distant beginning that feels like a different person, I went around talking about Moore's Law and the extrapolated arrival time of "human-equivalent hardware" a la Moravec.  But at some point I figured out that if you weren't exactly reproducing the brain's algorithms, porting cognition to fast serial hardware and to human design instead of evolved adaptation would toss the numbers out the window - and that how much hardware you needed depended on how smart you were - and that sort of thing.

Betrayed, I decided that the whole Moore's Law thing was silly and a corruption of futurism, and I restricted myself to qualitative predictions (and retrodictions) thenceforth.

Though this is to some extent an argument produced after the conclusion, I would explain my reluctance to venture into quantitative futurism, via the following trichotomy:

So to me it seems "obvious" that my view of optimization is only strong enough to produce loose qualitative conclusions, and that it can only be matched to its retrodiction of history, or wielded to produce future predictions, on the level of qualitative physics.

"Things should speed up here", I could maybe say.  But not "The doubling time of this exponential should be cut in half."

I aspire to a deeper understanding of intelligence than this, mind you.  But I'm not sure that even perfect Bayesian enlightenment would let me predict quantitatively how long it will take an AI to solve various problems, in advance of its solving them.  That might just rest on features of an unexplored solution space which I can't guess in advance, even though I understand the process that searches.

Robin keeps asking me what I'm getting at by talking about some reasoning as "deep" while other reasoning is supposed to be "surface".  One thing that makes me worry that something is "surface" is when it involves generalizing a level-N feature across a shift in level N-1 causes.

For example, suppose you say "Moore's Law has held for the last sixty years, so it will hold for the next sixty years, even after the advent of superintelligence" (as Kurzweil seems to believe, since he draws his graphs well past the point where you're buying "a billion times human brainpower for $1000").

Now, if the Law of Accelerating Change were an exogenous, ontologically fundamental, precise physical law, then you wouldn't expect it to change with the advent of superintelligence.

But to the extent that you believe Moore's Law depends on human engineers, and that the timescale of Moore's Law has something to do with the timescale on which human engineers think, then extrapolating Moore's Law across the advent of superintelligence is extrapolating it across a shift in the previous causal generator of Moore's Law.

So I'm worried when I see generalizations extrapolated across a change in causal generators not themselves described - i.e., the generalization itself is at the level of the outputs of those generators and doesn't describe the generators directly.

If, on the other hand, you extrapolate Moore's Law out to 2015 because it's been reasonably steady up until 2008 - well, Reality is still allowed to say "So what?", to a greater extent than we can expect to wake up one morning and find Mercury in Mars's orbit.  But I wouldn't bet against you, if you just went ahead and drew the graph.
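To make "drew the graph" concrete, here is a minimal sketch of what that extrapolation amounts to; the 2008 baseline of 1.0 and the doubling times are illustrative assumptions, not fitted transistor-count data.

```python
# A minimal sketch of "going ahead and drawing the graph": extrapolating a
# quantity that doubles on a fixed schedule from 2008 out to 2015.
# Baseline and doubling times are illustrative, not fitted data.

def growth_factor(years: float, doubling_time: float) -> float:
    """Multiplicative growth over `years` for a quantity doubling every `doubling_time` years."""
    return 2 ** (years / doubling_time)

span = 2015 - 2008  # seven years past the last observed point
for doubling_time in (1.5, 2.0):  # the usual 18- and 24-month Moore's Law figures
    print(f"doubling every {doubling_time} yr -> x{growth_factor(span, doubling_time):.1f} by 2015")
# Prints roughly x25.4 and x11.3.
```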

So what's "surface" or "deep" depends on what kind of context shifts you try to extrapolate past.

Robin Hanson said:

Taking a long historical view, we see steady total growth rates punctuated by rare transitions when new faster growth modes appeared with little warning.  We know of perhaps four such "singularities": animal brains (~600MYA), humans (~2MYA), farming (~10KYA), and industry (~0.2KYA).  The statistics of previous transitions suggest we are perhaps overdue for another one, and would be substantially overdue in a century.  The next transition would change the growth rate rather than capabilities directly, would take a few years at most, and the new doubling time would be a week to a month.

Why do these transitions occur?  Why have they been similar to each other?  Are the same causes still operating?  Can we expect the next transition to be similar for the same reasons?

One may of course say, "I don't know, I just look at the data, extrapolate the line, and venture this guess - the data is more sure than any hypotheses about causes."  And that will be an interesting projection to make, at least.
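For concreteness, here is a minimal sketch of that sort of projection.  The doubling times and transition speedup factors below are rough illustrative stand-ins, not Robin's fitted values; the point is only the shape of the calculation.

```python
# A rough sketch of "extrapolate the pattern of transitions".
# All numbers are illustrative stand-ins, not Hanson's fitted values.

farming_doubling_years = 900.0   # rough world-economy doubling time, farming era
industry_doubling_years = 15.0   # rough world-economy doubling time, industrial era

# The last transition (farming -> industry) multiplied the growth rate by roughly:
last_speedup = farming_doubling_years / industry_doubling_years   # ~60x

# Naive projection: the next transition gives a comparable, or somewhat larger, speedup.
for speedup in (last_speedup, 250.0):
    next_doubling_days = industry_doubling_years / speedup * 365
    print(f"assumed speedup {speedup:.0f}x -> next doubling time ~{next_doubling_days:.0f} days")
# ~91 and ~22 days with these inputs - the same order of magnitude as the
# "week to a month" figure in the quote above.
```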

But you shouldn't be surprised at all if Reality says "So what?"  I mean - real estate prices went up for a long time, and then they went down.  And that didn't even require a tremendous shift in the underlying nature and causal mechanisms of real estate.

To stick my neck out further:  I am liable to trust the Weak Inside View over a "surface" extrapolation, if the Weak Inside View drills down to a deeper causal level and the balance of support is sufficiently lopsided.

I will go ahead and say, "I don't care if you say that Moore's Law has held for the last hundred years.  Human thought was a primary causal force in producing Moore's Law, and your statistics are all over a domain of human neurons running at the same speed.  If you substitute better-designed minds running at a million times human clock speed, the rate of progress ought to speed up - qualitatively speaking."

That is, the prediction is made without giving precise numbers, or supposing that the curve is still an exponential; computation might spike to the limits of physics and then stop forever, etc.  But I'll go ahead and say that the rate of technological progress ought to speed up, given the stated counterfactual intervention on underlying causes to increase the thought speed of engineers by a factor of a million.  I'll be downright indignant if Reality says "So what?" and has the superintelligence make slower progress than human engineers instead.  It really does seem like an argument so strong that even Reality ought to be persuaded.

It would be interesting to ponder what kind of historical track records have prevailed in such a clash of predictions - trying to extrapolate "surface" features across shifts in underlying causes without speculating about those underlying causes, versus trying to use the Weak Inside View on those causes and arguing that there is "lopsided" support for a qualitative conclusion; in a case where the two came into conflict...

...kinda hard to think of what that historical case would be, but perhaps I only lack history.

Robin, how surprised would you be if your sequence of long-term exponentials just... didn't continue?  If the next exponential was too fast, or too slow, or something other than an exponential?  To what degree would you be indignant, if Reality said "So what?"

22 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by billswift · 2008-11-18T19:04:12.000Z · LW(p) · GW(p)

Julian Simon, in The Great Breakthrough and Its Cause, attributes the trigger of the industrial revolution to total population. After reading it, I think that communicating population size might be a better criterion - that is, not just the sheer number of people in an area, but the number of interacting people (non-serfs). Before someone protests with the examples of China and India, he points out that until about 1400 China was leading, and goes on to state (my terms, not his) that population size is necessary, but may not be sufficient without other factors.

comment by Russell_Wallace · 2008-11-18T19:53:49.000Z · LW(p) · GW(p)

"To stick my neck out further: I am liable to trust the Weak Inside View over a "surface" extrapolation, if the Weak Inside View drills down to a deeper causal level and the balance of support is sufficiently lopsided."

But there's the question of whether the balance of support is sufficiently lopsided, and if so, on which side. Your example illustrates this nicely:

"I will go ahead and say, "I don't care if you say that Moore's Law has held for the last hundred years. Human thought was a primary causal force in producing Moore's Law, and your statistics are all over a domain of human neurons running at the same speed. If you substitute better-designed minds running at a million times human clock speed, the rate of progress ought to speed up - qualitatively speaking.""

What you're not taking into account is that computers are increasingly used to help design and verify the next generation of chips. In other words, a greater amount of machine intelligence is required each generation just to keep the doubling time the same (or only slightly longer), never mind shorter.

Once we appreciate this, we can understand why: as the low hanging fruit is plucked, each new Moore's Law generation has to solve problems that are intrinsically more difficult. But we didn't think of that in advance. It's an explanation in hindsight.

That doesn't mean we can be sure the doubling time will still be 18 to 24 months, 60 years from now. It does mean we have no way to make a better prediction than that. It means that is the prediction on which rationalists should base their plans. Historically, those who based their plans on weak (or even strong) inside predictions of progress faster (or slower) than Moore's Law, like Nelson and Xanadu, or Star Bridge and their hypercomputers, have come to grief. Those who just looked at the graphs have found success.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-11-18T20:41:33.000Z · LW(p) · GW(p)

Right, so, this is an example of a disagreement I don't know how to resolve in any systematic way. If Robin comes in and says the same thing as Russell, which I doubt, I wouldn't know how the two of us ought to reconcile if we thought the other was as meta-rational as ourselves.

Basically, you've got - extrapolating Moore's Law on out, as if society's still around in one form or another and still has a smooth global tech progress metric - to where you've got "a billion times human computing power for $1000", whatever that means, which must be at least a million times as fast as a human brain serially because we already have chips that fast (they're just much less parallel).

And you've got Moore's Law continuing past this point at the same sidereal time rate, so that, after another 3,600 rotations of the Earth and ten slow orbits around the sun, computing speeds are a hundred times greater.

It's enough time for 10 million years of thought, if you were only running humans at a million times the clock speed; but this isn't human thought.

But they don't spike to the limits of design and then stop.

Instead, the equivalent of chips are just a hundred times faster, after Earth has swung around in its orbit ten times. Cuz that's Moore's Law. Doubling every eighteen months.
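(A quick check of the arithmetic in this scenario - a minimal sketch assuming the 18-month doubling time above:)

```python
# Quick arithmetic check on the scenario above, assuming an 18-month doubling time.
years = 10
doubling_time_years = 1.5
print(f"growth over {years} years: x{2 ** (years / doubling_time_years):.0f}")  # ~x102, "a hundred times"
print(f"subjective years at a million-fold speedup: {years * 1_000_000:,}")     # 10,000,000
```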

Now, I understand what thought you are performing here. You're thinking, "Nelson and Xanadu tried to second-guess Moore's Law, and they were wrong, so I'm sticking with Moore's Law." And that's where the graph extends. I get that.

But I don't know how to prosecute this disagreement any further. I'm using the Weak Inside View to predict a qualitative speedup. You're just extending the same graph on outward. What do I do with that? To me it just seems that I've reached the point of "Zombies! Zombies?"

comment by Nick_Tarleton · 2008-11-18T20:42:35.000Z · LW(p) · GW(p)
It means that is the prediction on which rationalists should base their plans.

Even if that's the expected value, variance is also crucial.

comment by MZ · 2008-11-18T20:44:09.000Z · LW(p) · GW(p)

The next shift may already have happened. It's called the Internet. But in 1860, nobody saw the burgeoning industrial revolution for what it was. In fact, by today's standards, it was still very inefficient and unproductive. But it created the paradigm shift by which new growth rates could be achieved.

comment by MZ · 2008-11-18T20:46:54.000Z · LW(p) · GW(p)

BTW, there aren't just four such shifts. If you looked closely enough, you could find many more. The evolution of multicellular life. The evolution of sex / genetic exchange. The first tools. Writing. All of these paradigm shifts changed growth rates, although the curve looks rather flat by today's standards.

comment by Tim_Tyler · 2008-11-18T20:50:31.000Z · LW(p) · GW(p)

A serious problem with not quantifying predictions on the temporal axis is that many types of prediction then become unrefutable.

E.g. if you prophesy that we will be able to upload the human mind into a digital substrate, but don't say when, then if 2060 rolls around with no uploads in sight, you can say that the prediction is still correct - it just hasn't happened yet.

Unrefutable predictions have about the same status as unfalsifiable science.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-11-18T20:58:56.000Z · LW(p) · GW(p)

Tim, that's an obvious problem but it doesn't mean I can magically conjure quantitative predictions out of thin air. If I don't know when AI will go self-improving, should I pretend that I do?

comment by Tim_Tyler · 2008-11-18T21:12:31.000Z · LW(p) · GW(p)
What you're not taking into account is that computers are increasingly used to help design and verify the next generation of chips.

That is taken into account here: "Machines are already heavily involved in the design of other machines. No-one could design a modern CPU without the use of computers. No one could build one without the help of sophisticated machinery."

It does mean we have no way to make a better prediction than that. It means that is the prediction on which rationalists should base their plans.

Not according to Robin Hanson. I'm not sure how much stock I put in an extrapolation from four data points, some of which are millions of years old - but the conclusion seems plausible: we have independent evidence that something big is coming, because we can see it on the horizon.

comment by Russell_Wallace · 2008-11-18T21:12:50.000Z · LW(p) · GW(p)

Now, I should clarify that I don't really expect Moore's Law to continue forever. Obviously the more you extrapolate it, the shakier the prediction becomes. But there is no point at which some other prediction method becomes more reliable. There is no time in the future about which we can say "we will deviate from the graph in this way", because we have no way to see more clearly than the graph.

I don't see any systematic way to resolve this disagreement either, and I think that's because there isn't any. This shouldn't come as a surprise -- if I had a systematic method of resolving all disagreements about the future, I'd be a lot richer than I am! At the end of the day, there's no substitute for putting our heads down, getting on with the work, and seeing who ends up being right.

But this is also an example of why I don't have much truck with Aumann's Agreement Theorem. I'm not disputing the mathematics, of course, but I think cases where its assumptions apply are the exception rather than the rule.

comment by Kevin_Dick · 2008-11-18T21:20:04.000Z · LW(p) · GW(p)

Eliezer, I'm actually a little surprised at that last comment. As a Bayesian, I recognize that reality doesn't care if I feel comfortable with whether or not I "know" an answer. Reality requires me to act on the basis of my current knowledge. If you think AI will go self-improving next year, you should be acting much differently than if you believe it will go self-improving in 2100. The difference isn't as stark at 2025 versus 2075, but it's still there.

What makes your unwillingness to commit even stranger is your advocacy that there's significant existential risk associated with self-improving AI. It's literally a life-or-death situation by your own valuation. So how are you going to act: as if it will happen sooner, or later?

comment by Russell_Wallace · 2008-11-18T21:22:27.000Z · LW(p) · GW(p)

Tim -- I looked at your essay just now, and yes, your Visualization of the Cosmic All seems to agree with mine. (I think Robin's model also has some merit, except that I am not quite so optimistic about the timescales, and I am very much less optimistic about our ability to predict the distant future.)

comment by Vladimir_Nesov · 2008-11-18T21:26:49.000Z · LW(p) · GW(p)

Outside view works as long as you can usefully classify underlying structures using surface properties. It breaks when reality starts to ignore the joints at which you previously carved it. Thus, it's prudent to create big categories, with margins wide enough to capture most black swans.

Qualitative inside view can diverge from reality due to unanticipated circumstances, pointing in the wrong direction as a result. But both outside view and inside view are built on (the same) knowledge, not on reality itself. If qualitative inside view breaks the outside view, it shows a problem: categories of the outside view are not wide enough to capture even this (weakly) anticipated dynamic, when they are supposed to be black swan-proof, to survive things unanticipated. Either the inside view should be shown wrong, given current knowledge, or the outside view should be rebuilt to withstand the inside view.

comment by Michael_Howard · 2008-11-18T21:55:35.000Z · LW(p) · GW(p)

I'll be downright indignant if Reality says "So what?" and has the superintelligence make slower progress than human engineers instead.

Maybe once it's secure from being overtaken by rivals it slows down to really nail down the safety aspect, and make sure the sky isn't tiled with representations of satisfied superintelligent utility functions...

comment by Chris_Hibbert · 2008-11-18T21:58:21.000Z · LW(p) · GW(p)

MZ: I doubt many would disagree that there were other interesting inflection points. But Robin's using the best hard data on productivity growth that we have, and it's hard to see those inflection points in the data. If someone can think of a way to get higher-resolution data covering those transitions, it would be fascinating to add them to our collection of historical cases.

comment by Tim_Tyler · 2008-11-18T22:07:38.000Z · LW(p) · GW(p)

Publicly not knowing earns humility points - save them and spend them wisely.

However, you're an expert, and people want to know what you think!

So, imagine you have a million valuable tokens, and have to spread them over the next 100 years. Imagine also that the value of the tokens gradually runs into diminishing returns as you get more of them - which makes you somewhat risk-averse. Imagine also that your longevity is independently assured. When some significant machine-intelligence milestone is reached, you get the tokens that you previously placed on that year. How would you spread the tokens?
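One way to make the token game concrete: if the value of tokens received is concave - say, logarithmic - then the spread that maximizes expected value is simply proportional to your probability distribution over milestone years. A minimal sketch, using a made-up placeholder distribution rather than anyone's actual forecast:

```python
# A minimal sketch of the token game.  With a logarithmic value for tokens
# received, the expected-value-maximizing allocation is proportional to your
# probability for each period.  The distribution below is a made-up
# placeholder, not anyone's actual forecast.
import math

TOTAL_TOKENS = 1_000_000

# P(milestone is first reached in this decade); placeholder numbers summing to 1.
p_by_decade = {2020: 0.05, 2030: 0.15, 2040: 0.20, 2050: 0.20,
               2060: 0.15, 2070: 0.10, 2080: 0.10, 2090: 0.05}

allocation = {decade: p * TOTAL_TOKENS for decade, p in p_by_decade.items()}

for decade, tokens in allocation.items():
    print(f"{decade}s: {tokens:,.0f} tokens")

expected_value = sum(p * math.log(allocation[d]) for d, p in p_by_decade.items())
print(f"expected log-value of the payout: {expected_value:.2f}")
```

The proportional rule is specific to logarithmic value: a more sharply risk-averse value function spreads the tokens more evenly, while a nearly linear one piles them onto the single most likely period.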

comment by Tim_Tyler · 2008-11-18T22:11:51.000Z · LW(p) · GW(p)
there were other interesting inflection points. Robin's using the best hard data on productivity growth that we have and it's hard to see those inflection points in the data.

That's because the previous transitions occurred before Robin's data set starts.

comment by JamesAndrix · 2008-11-18T22:29:59.000Z · LW(p) · GW(p)

For a while now I've viewed Moore's Law as economically driven. There are generally advantages to having faster computers, but these computers have costs to develop. If there's a technological wall, funding starts going toward it even before it's relevant to production chips. If a chipmaker stumbles onto an easy, cheap advancement, it still pays to keep just ahead of its competitors, because it'll need money later for the next wall. Moore's Law is the result of economic pressure to go a bit faster hitting technological barriers. So it's exponential, but choppy.

If an AI achieves dominance, then it won't have competition forcing it to be more efficient, and it will only spend resources optimizing chips if that fits its goals (if there's a payoff in its own internal economy of resources).

Maybe it will run for a hundred years on the chip designs available at the time of its creation, before it decides it needs to improve them.

Not likely, but faster chips are not a human female in a torn dress.

comment by Luke_A_Somers · 2012-11-06T19:09:26.303Z · LW(p) · GW(p)

Upvoted for last sentence.

comment by haig2 · 2008-11-18T23:05:26.000Z · LW(p) · GW(p)

How do periods of stagnant growth, such as extinction-level events in Earth's history, affect the graphs? As the dinosaurs went extinct, did we jump straight to the start of the mammalian s-curve, or was there a prolonged growth plateau that, when averaged out in the combined s-curve meta-graph, doesn't show up as significant?

A singularity-type phase shift being so steep, even if growth were to grind down in the near future and become stagnant for hundreds of years, wouldn't the meta-graph still show an overall fit when averaged out, if the singularity occurred after some global catastrophe?

I guess I want to know what effect periods of <= 0 growth have on these meta-graphs.

comment by TGGP4 · 2008-11-19T03:08:01.000Z · LW(p) · GW(p)

The Austrians say that economics can only tell us qualitative rather than quantitative things. That's part of why many people don't take them seriously.

comment by Robin_Hanson2 · 2008-11-19T03:11:34.000Z · LW(p) · GW(p)

It seems reasonable to me to assign a ~1/4-1/2 probability to the previous series not continuing roughly as it has. So it would be only one or two bits of surprise for me.
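(Presumably the bit arithmetic here: the surprise of observing an outcome you assigned probability p is -log2(p) bits, so:)

```python
# "Bits of surprise": surprisal of an outcome with probability p is -log2(p).
from math import log2
for p in (0.5, 0.25):
    print(f"p = {p}: {-log2(p):.0f} bit(s) of surprise")   # 1 bit at 1/2, 2 bits at 1/4
```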

I suspect it is near time for you to reveal to us your "weak inside view", i.e., the analysis that suggests to you that hand-coded AI is likely to appear in the next few decades, and that it is likely to appear in the form of a single machine suddenly able to take over the world.