Biology-Inspired AGI Timelines: The Trick That Never Works

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-12-01

Contents

- 1988 -
- 1999 -
- 2004 or thereabouts -
- 2006 or thereabouts -
- 2020 -

- 1988 -

Hans Moravec:  Behold my book Mind Children.  Within, I project that, in 2010 or thereabouts, we shall achieve strong AI.  I am not calling it "Artificial General Intelligence" because this term will not be coined for another 15 years or so.

Eliezer (who is not actually on the record as saying this, because the real Eliezer is, in this scenario, 8 years old; this version of Eliezer has all the meta-heuristics of Eliezer from 2021, but none of that Eliezer's anachronistic knowledge):  Really?  That sounds like a very difficult prediction to make correctly, since it is about the future, which is famously hard to predict.

Imaginary Moravec:  Sounds like a fully general counterargument to me.

Eliezer:  Well, it is, indeed, a fully general counterargument against futurism.  Successfully predicting the unimaginably far future - that is, more than 2 or 3 years out, or sometimes less - is something that human beings seem to be quite bad at, by and large.

Moravec:  I predict that, 4 years from this day, in 1992, the Sun will rise in the east.

Eliezer: Okay, let me qualify that.  Humans seem to be quite bad at predicting the future whenever we need to predict anything at all new and unfamiliar, rather than the Sun continuing to rise every morning until it finally gets eaten.  I'm not saying it's impossible to ever validly predict something novel!  Why, even if that was impossible, how could I know it for sure?  By extrapolating from my own personal inability to make predictions like that?  Maybe I'm just bad at it myself.  But any time somebody claims that some particular novel aspect of the far future is predictable, they justly have a significant burden of prior skepticism to overcome.

More broadly, we should not expect a good futurist to give us a generally good picture of the future.  We should expect a great futurist to single out a few rare narrow aspects of the future which are, somehow, exceptions to the usual rule about the future not being very predictable.

I do agree with you, for example, that we shall at some point see Artificial General Intelligence.  This seems like a rare predictable fact about the future, even though it is about a novel thing which has not happened before: we keep trying to crack this problem, we make progress albeit slowly, the problem must be solvable in principle because human brains solve it, eventually it will be solved; this is not a logical necessity, but it sure seems like the way to bet.  "AGI eventually" is predictable in a way that it is not predictable that, e.g., the nation of Japan, presently upon the rise, will achieve economic dominance over the next decades - to name something else that present-day storytellers of 1988 are talking about.

But timing the novel development correctly?  That is almost never done, not until things are 2 years out, and often not even then.  Nuclear weapons were called, but not nuclear weapons in 1945; heavier-than-air flight was called, but not flight in 1903.  In both cases, people said two years earlier that it wouldn't be done for 50 years - or said, decades too early, that it'd be done shortly.  There's a difference between worrying that we may eventually get a serious global pandemic, worrying that eventually a lab accident may lead to a global pandemic, and forecasting that a global pandemic will start in November of 2019.

Moravec:  You should read my book, my friend, into which I have put much effort.  In particular - though it may sound impossible to forecast, to the likes of yourself - I have carefully examined a graph of computing power in single chips and the most powerful supercomputers over time.  This graph looks surprisingly regular!  Now, of course not all trends can continue forever; but I have considered the arguments that Moore's Law will break down, and found them unconvincing.  My book spends several chapters discussing the particular reasons and technologies by which we might expect this graph to not break down, and continue, such that humanity will have, by 2010 or so, supercomputers which can perform 10 trillion operations per second.*

Oh, and also my book spends a chapter discussing the retina, the part of the brain whose computations we understand in the most detail, in order to estimate how much computing power the human brain is using, arriving at a figure of 10^13 ops/sec.  This neuroscience and computer science may be a bit hard for the layperson to follow, but I assure you that I am in fact an experienced hands-on practitioner in robotics and computer vision.

So, as you can see, we should first get strong AI somewhere around 2010.  I may be off by an order of magnitude in one figure or another; but even if I've made two errors in the same direction, that only shifts the estimate by 7 years or so.

(*)  Moravec just about nailed this part; the actual year was 2008.
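
(For readers who want the arithmetic behind projections like this spelled out: under exponential growth, each factor-of-ten error in a hardware estimate costs a fixed number of doubling times, so order-of-magnitude mistakes translate into a predictable number of calendar years.  The sketch below is a minimal illustration assuming a one-year doubling time for compute per dollar; that doubling time is an assumption for the example, not a figure taken from Mind Children.)

```python
import math

def years_per_error(orders_of_magnitude, doubling_time_years=1.0):
    """Calendar years that an estimation error of 10^k costs under exponential growth."""
    doublings = orders_of_magnitude * math.log2(10)  # ~3.32 doublings per factor of 10
    return doublings * doubling_time_years

for k in (1, 2, 3):
    print(f"off by 10^{k}: the date shifts by ~{years_per_error(k):.1f} years")
# With a 1-year doubling time: ~3.3, ~6.6, ~10.0 years.
# With an 18-month doubling time, multiply each figure by 1.5.
```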

Eliezer:  I sure would be amused if we did in fact get strong AI somewhere around 2010, which, for all I know at this point in this hypothetical conversation, could totally happen!  Reversed stupidity is not intelligence, after all, and just because that is a completely broken justification for predicting 2010 doesn't mean that it cannot happen that way.

Moravec:  Really now.  Would you care to enlighten me as to how I reasoned so wrongly?

Eliezer:  Among the reasons why the Future is so hard to predict, in general, is that the sort of answers we want tend to be the products of lines of causality with multiple steps and multiple inputs.  Even when we can guess a single fact that plays some role in producing the Future - which is not of itself all that rare - usually the answer the storyteller wants depends on more facts than that single fact.  Our ignorance of any one of those other facts can be enough to torpedo our whole line of reasoning - in practice, not just as a matter of possibilities.  You could say that the art of being an exception to the rule that Futurism is impossible consists in finding those rare things that you can predict despite being almost entirely ignorant of most concrete inputs into the concrete scenario.  Like predicting that AGI will happen at some point, despite not knowing the design for it, or who will make it, or how.

My own contribution to the Moore's Law literature consists of Moore's Law of Mad Science:  "Every 18 months, the minimum IQ required to destroy the Earth drops by 1 point."  Even if this serious-joke was an absolutely true law, and aliens told us it was absolutely true, we'd still have no ability whatsoever to predict thereby when the Earth would be destroyed, because we'd have no idea what that minimum IQ was right now or at any future time.  We would know that in general the Earth had a serious problem that needed to be addressed, because we'd know in general that destroying the Earth kept on getting easier every year; but we would not be able to time when that would become an imminent emergency, until we'd seen enough specifics that the crisis was already upon us.

In the case of your prediction about strong AI in 2010, I might put it as follows:  The timing of AGI could be seen as a product of three factors, one of which you can try to extrapolate from existing graphs, and two of which you don't know at all.  Ignorance of any one of them is enough to invalidate the whole prediction.

These three factors are:

1.  How much computing power will be available at a given point in the future - the one factor you can try to extrapolate from existing graphs like yours.
2.  How much your civilization will know, at that point, about cognition and computer science - about how to build an AI.
3.  How much computing power it takes to build an AI, given that level of knowledge.

Or to rephrase:  Depending on how much you and your civilization know about AI-making - how much you know about cognition and computer science - it will take you a variable amount of computing power to build an AI.  If you really knew what you were doing, for example, I confidently predict that you could build a mind at least as powerful as a human mind, while using fewer floating-point operations per second than a human brain is usefully using -

Chris Humbali:  Wait, did you just say "confidently"?  How could you possibly know that with confidence?  How can you criticize Moravec for being too confident, and then, in the next second, turn around and be confident of something yourself?  Doesn't that make you a massive hypocrite?

Eliezer:  Um, who are you again?

Humbali:  I'm the cousin of Pat Modesto from your previous dialogue on Hero Licensing!  Pat isn't here in person because "Modesto" looks unfortunately like "Moravec" on a computer screen.  And also their first name looks a bit like "Paul" who is not meant to be referenced either.  So today I shall be your true standard-bearer for good calibration, intellectual humility, the outside view, and reference class forecasting -

Eliezer:  Two of these things are not like the other two, in my opinion; and Humbali and Modesto do not understand how to operate any of the four correctly, in my opinion; but anybody who's read "Hero Licensing" should already know I believe that.

Humbali:  - and I don't see how Eliezer can possibly be so confident, after all his humble talk of the difficulty of futurism, that it's possible to build a mind 'as powerful as' a human mind using 'less computing power' than a human brain.

Eliezer:  It's overdetermined by multiple lines of inference.  We might first note, for example, that the human brain runs very slowly in a serial sense and tries to make up for that with massive parallelism.  It's an obvious truth of computer science that while you can use 1000 serial operations per second to emulate 1000 parallel operations per second, the reverse is not in general true.

To put it another way: if you had to build a spreadsheet or a word processor on a computer running at 100Hz, you might also need a billion processing cores and massive parallelism in order to do enough cache lookups to get anything done; that wouldn't mean the computational labor you were performing was intrinsically that expensive.  Since modern chips are massively serially faster than the neurons in a brain, and the direction of conversion is asymmetrical, we should expect that there are tasks which are immensely expensive to perform in a massively parallel neural setup, which are much cheaper to do with serial processing steps, and the reverse is not symmetrically true.

A sufficiently adept builder can build general intelligence more cheaply in total operations per second, if they're allowed to line up a billion operations one after another per second, versus lining up only 100 operations one after another.  I don't bother to qualify this with "very probably" or "almost certainly"; it is the sort of proposition that a clear thinker should simply accept as obvious and move on.

Humbali:  And is it certain that neurons can perform only 100 serial steps one after another, then?  As you say, ignorance about one fact can obviate knowledge of any number of others.

Eliezer:  A typical neuron firing as fast as possible can do maybe 200 spikes per second, a few rare neuron types used by eg bats to echolocate can do 1000 spikes per second, and the vast majority of neurons are not firing that fast at any given time.  The usual and proverbial rule in neuroscience - the sort of academically respectable belief I'd expect you to respect even more than I do - is called "the 100-step rule", that any task a human brain (or mammalian brain) can do on perceptual timescales, must be doable with no more than 100 serial steps of computation - no more than 100 things that get computed one after another.  Or even less if the computation is running off spiking frequencies instead of individual spikes.
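
(A toy comparison of the serial-depth point, with illustrative numbers only: the spike rates are the ones just mentioned, but the one-second task window and the 2 GHz clock in the sketch below are assumptions for the sake of the example.)

```python
# How many steps can be strung together one-after-another in a fixed window?
# Illustrative assumptions: a ~1-second perceptual-scale task, a 2 GHz serial core.
task_window_s = 1.0
neuron_serial_steps = 100 * task_window_s       # ~100 serial steps/sec, per the 100-step rule
cpu_serial_steps = 2e9 * task_window_s          # ~2 billion serial steps/sec (assumed clock)

print(f"neuron chain: ~{neuron_serial_steps:.0f} sequential steps in {task_window_s:.0f} s")
print(f"serial CPU:   ~{cpu_serial_steps:.0e} sequential steps in {task_window_s:.0f} s")
# The brain makes up the gap with massive parallelism (~10^11 neurons at once), but
# serial steps can emulate parallel ones while the reverse is not generally true.
```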

Moravec:  Yes, considerations like that are part of why I'd defend my estimate of 10^13 ops/sec for a human brain as being reasonable - more reasonable than somebody might think if they were, say, counting all the synapses and multiplying by the maximum number of spikes per second in any neuron.  If you actually look at what the retina is doing, and how it's computing that, it doesn't look like it's doing one floating-point operation per activation spike per synapse.

Eliezer:  There's a similar asymmetry between precise computational operations having a vastly easier time emulating noisy or imprecise computational operations, compared to the reverse - there is no doubt a way to use neurons to compute, say, exact 16-bit integer addition, which is at least more efficient than a human trying to add up 16986+11398 in their heads, but you'd still need more synapses to do that than transistors, because the synapses are noisier and the transistors can just do it precisely.  This is harder to visualize and get a grasp on than the parallel-serial difference, but that doesn't make it unimportant.

Which brings me to the second line of very obvious-seeming reasoning that converges upon the same conclusion - that it is in principle possible to build an AGI much more computationally efficient than a human brain - namely that biology is simply not that efficient, and especially when it comes to huge complicated things that it has started doing relatively recently.

ATP synthase may be close to 100% thermodynamically efficient, but ATP synthase is literally over 1.5 billion years old and a core bottleneck on all biological metabolism.  Brains have to pump thousands of ions in and out of each stretch of axon and dendrite, in order to restore their ability to fire another fast neural spike.  The result is that the brain's computation is something like half a million times less efficient than the thermodynamic limit for its temperature - so around two millionths as efficient as ATP synthase.  And neurons are a hell of a lot older than the biological software for general intelligence!

The software for a human brain is not going to be 100% efficient compared to the theoretical maximum, nor 10% efficient, nor 1% efficient, even before taking into account the whole thing with parallelism vs. serialism, precision vs. imprecision, or similarly clear low-level differences.
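
(For the "thermodynamic limit for its temperature" comparison, the reference point is the Landauer bound: erasing one bit at temperature T costs at least kT·ln 2 of energy.  The sketch below computes that bound at body temperature and shows the shape of the comparison; the brain's effective operation count is left as an explicitly assumed input, since different accountings of it yield different inefficiency factors, and the "half a million" figure above uses its own accounting.)

```python
import math

k_B = 1.380649e-23                 # Boltzmann constant, J/K
T = 310.0                          # roughly body temperature, K
landauer_j_per_bit = k_B * T * math.log(2)     # ~3.0e-21 J per bit erasure

brain_power_w = 20.0               # the usual ~20 W figure for the brain
limit_ops_per_s = brain_power_w / landauer_j_per_bit   # ~6.7e21 bit erasures/sec at the limit

assumed_effective_ops_per_s = 1e16     # hypothetical accounting of the brain's useful ops/sec
inefficiency = limit_ops_per_s / assumed_effective_ops_per_s

print(f"Landauer bound at 310 K: ~{landauer_j_per_bit:.1e} J per bit")
print(f"20 W buys ~{limit_ops_per_s:.1e} bit erasures/sec at the limit")
print(f"under the assumed op count, the brain is ~{inefficiency:.0e}x short of that limit")
```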

Humbali:  Ah!  But allow me to offer a consideration here that, I would wager, you've never thought of before yourself - namely - what if you're wrong?  Ah, not so confident now, are you?

Eliezer:  One observes, over one's cognitive life as a human, which sorts of what-ifs are useful to contemplate, and where it is wiser to spend one's limited resources planning against the alternative that one might be wrong; and I have oft observed that lots of people don't... quite seem to understand how to use 'what if' all that well?  They'll be like, "Well, what if UFOs are aliens, and the aliens are partially hiding from us but not perfectly hiding from us, because they'll seem higher-status if they make themselves observable but never directly interact with us?"

I can refute individual what-ifs like that with specific counterarguments, but I'm not sure how to convey the central generator behind how I know that I ought to refute them.  I am not sure how I can get people to reject these ideas for themselves, instead of them passively waiting for me to come around with a specific counterargument.  My having to counterargue things specifically now seems like a road that never seems to end, and I am not as young as I once was, nor am I encouraged by how much progress I seem to be making.  I refute one wacky idea with a specific counterargument, and somebody else comes along and presents a new wacky idea on almost exactly the same theme.

I know it's probably not going to work, if I try to say things like this, but I'll try to say them anyways.  When you are going around saying 'what-if', there is a very great difference between your map of reality, and the territory of reality, which is extremely narrow and stable.  Drop your phone, gravity pulls the phone downward, it falls.  What if there are aliens and they make the phone rise into the air instead, maybe because they'll be especially amused at violating the rule after you just tried to use it as an example of where you could be confident?  Imagine the aliens watching you, imagine their amusement, contemplate how fragile human thinking is and how little you can ever be assured of anything and ought not to be too confident.  Then drop the phone and watch it fall.  You've now learned something about how reality itself isn't made of what-ifs and reminding oneself to be humble; reality runs on rails stronger than your mind does.

Contemplating this doesn't mean you know the rails, of course, which is why it's so much harder to predict the Future than the past.  But if you see that your thoughts are still wildly flailing around what-ifs, it means that they've failed to gel, in some sense, they are not yet bound to reality, because reality has no binding receptors for what-iffery.

The correct thing to do is not to act on your what-ifs that you can't figure out how to refute, but to go on looking for a model which makes narrower predictions than that.  If that search fails, forge a model which puts some more numerical distribution on your highly entropic uncertainty, instead of diverting into specific what-ifs.  And in the latter case, understand that this probability distribution reflects your ignorance and subjective state of mind, rather than your knowledge of an objective frequency; so that somebody else is allowed to be less ignorant without you shouting "Too confident!" at them.  Reality runs on rails as strong as math; sometimes other people will achieve, before you do, the feat of having their own thoughts run through more concentrated rivers of probability, in some domain.

Now, when we are trying to concentrate our thoughts into deeper, narrower rivers that run closer to reality's rails, there is of course the legendary hazard of concentrating our thoughts into the wrong narrow channels that exclude reality.  And the great legendary sign of this condition, of course, is the counterexample from Reality that falsifies our model!  But you should not in general criticize somebody for trying to concentrate their probability into narrower rivers than yours, for this is the appearance of the great general project of trying to get to grips with Reality, that runs on true rails that are narrower still.

If you have concentrated your probability into different narrow channels than somebody else's, then, of course, you have a more interesting dispute; and you should engage in that legendary activity of trying to find some accessible experimental test on which your nonoverlapping models make different predictions.

Humbali:  I do not understand the import of all this vaguely mystical talk.

Eliezer:  I'm trying to explain why, when I say that I'm very confident it's possible to build a human-equivalent mind using less computing power than biology has managed to use effectively, and you say, "How can you be so confident, what if you are wrong," it is not unreasonable for me to reply, "Well, kid, this doesn't seem like one of those places where it's particularly important to worry about far-flung ways I could be wrong."  Anyone who aspires to learn, learns over a lifetime which sorts of guesses are more likely to go oh-no-wrong in real life, and which sorts of guesses are likely to just work.  Less-learned minds will have minds full of what-ifs they can't refute in more places than more-learned minds; and even if you cannot see how to refute all your what-ifs yourself, it is possible that a more-learned mind knows why they are improbable.  For one must distinguish possibility from probability.

It is imaginable or conceivable that human brains have such refined algorithms that they are operating at the absolute limits of computational efficiency, or within 10% of it.  But if you've spent enough time noticing where Reality usually exercises its sovereign right to yell "Gotcha!" at you, learning which of your assumptions are the kind to blow up in your face and invalidate your final conclusion, you can guess that "Ah, but what if the brain is nearly 100% computationally efficient?" is the sort of what-if that is not much worth contemplating because it is not actually going to be true in real life.  Reality is going to confound you in some other way than that.

I mean, maybe you haven't read enough neuroscience and evolutionary biology that you can see from your own knowledge that the proposition sounds massively implausible and ridiculous.  But it should hardly seem unlikely that somebody else, more learned in biology, might be justified in having more confidence than you.  Phones don't fall up.  Reality really is very stable and orderly in a lot of ways, even in places where you yourself are ignorant of that order.

But if "What if aliens are making themselves visible in flying saucers because they want high status and they'll have higher status if they're occasionally observable but never deign to talk with us?" sounds to you like it's totally plausible, and you don't see how someone can be so confident that it's not true - because oh no what if you're wrong and you haven't seen the aliens so how can you know what they're not thinking - then I'm not sure how to lead you into the place where you can dismiss that thought with confidence.  It may require a kind of life experience that I don't know how to give people, at all, let alone by having them passively read paragraphs of text that I write; a learned, perceptual sense of which what-ifs have any force behind them.  I mean, I can refute that specific scenario, I can put that learned sense into words; but I'm not sure that does me any good unless you learn how to refute it yourself.

Humbali:  Can we leave aside all that meta stuff and get back to the object level?

Eliezer:  This indeed is often wise.

Humbali:  Then here's one way that the minimum computational requirements for general intelligence could be higher than Moravec's estimate for the human brain.  After all, we only have one existence proof that general intelligence is possible at all, namely the human brain.  Perhaps there's no way to get general intelligence in a computer except by simulating the brain neurotransmitter-by-neurotransmitter.  In that case you'd need a lot more computing operations per second than you'd get by calculating the number of potential spikes flowing around the brain!  What if it's true?  How can you know?

(Modern person:  This seems like an obvious straw argument?  I mean, would anybody, even at an earlier historical point, actually make an argument like -

Moravec and Eliezer:  YES THEY WOULD.)

Eliezer:  I can imagine that if we were trying specifically to upload a human that there'd be no easy and simple and obvious way to run the resulting simulation and get a good answer, without simulating neurotransmitter flows in extra detail.

To imagine that every one of these simulated flows is being usefully used in general intelligence and there is no way to simplify the mind design to use fewer computations...  I suppose I could try to refute that specifically, but it seems to me that this is a road which has no end unless I can convey the generator of my refutations.  Your what-iffery is flung far enough that, if I cannot leave even that much rejection as an exercise for the reader to do on their own without my holding their hand, the reader has little enough hope of following the rest; let them depart now, in indignation shared with you, and save themselves further outrage.

I mean, it will obviously be less obvious to the reader because they will know less than I do about this exact domain, it will justly take more work for the reader to specifically refute you than it takes me to refute you.  But I think the reader needs to be able to do that at all, in this example, to follow the more difficult arguments later.

Imaginary Moravec:  I don't think it changes my conclusions by an order of magnitude, but some people would worry that, for example, changes of protein expression inside a neuron in order to implement changes of long-term potentiation, are also important to intelligence, and could be a big deal in the brain's real, effectively-used computational costs.  I'm curious if you'd dismiss that as well, the same way you dismiss the probability that you'd have to simulate every neurotransmitter molecule?

Eliezer:  Oh, of course not.  Long-term potentiation suddenly turning out to be a big deal you overlooked, compared to the depolarization impulses spiking around, is very much the sort of thing where Reality sometimes jumps out and yells "Gotcha!" at you.

Humbali:  How can you tell the difference?

Eliezer:  Experience with Reality yelling "Gotcha!" at myself and historical others.

Humbali:  They seem like equally plausible speculations to me!

Eliezer:  Really?  "What if long-term potentiation is a big deal and computationally important" sounds just as plausible to you as "What if the brain is already close to the wall of making the most efficient possible use of computation to implement general intelligence, and every neurotransmitter molecule matters"?

Humbali:  Yes!  They're both what-ifs we can't know are false and shouldn't be overconfident about denying!

Eliezer:  My tiny feeble mortal mind is far away from reality and only bound to it by the loosest of correlating interactions, but I'm not that unbound from reality.

Moravec:  I would guess that in real life, long-term potentiation is sufficiently slow and local that what goes on inside the cell body of a neuron over minutes or hours is not as big of a computational deal as thousands of times that many spikes flashing around the brain in milliseconds or seconds.  That's why I didn't make a big deal of it in my own estimate.

Eliezer:  Sure.  But it is much more the sort of thing where you wake up to a reality-authored science headline saying "Gotcha!  There were tiny DNA-activation interactions going on in there at high speed, and they were actually pretty expensive and important!"  I'm not saying this exact thing is very probable, just that it wouldn't be out-of-character for reality to say something like that to me, the way it would be really genuinely bizarre if Reality was, like, "Gotcha!  The brain is as computationally efficient of a generally intelligent engine as any algorithm can be!"

Moravec:  I think we're in agreement about that part, or we would've been, if we'd actually had this conversation in 1988.  I mean, I am a competent research roboticist and it is difficult to become one if you are completely unglued from reality.

Eliezer:  Then what's with the 2010 prediction for strong AI, and the massive non-sequitur leap from "the human brain is somewhere around 10 trillion ops/sec" to "if we build a 10 trillion ops/sec supercomputer, we'll get strong AI"?

Moravec:  Because while it's the kind of Fermi estimate that can be off by an order of magnitude in practice, it doesn't really seem like it should be, I don't know, off by three orders of magnitude?  And even three orders of magnitude is just 10 years of Moore's Law.  2020 for strong AI is also a bold and important prediction.

Eliezer:  And the year 2000 for strong AI even more so.

Moravec:  Heh!  That's not usually the direction in which people argue with me.

Eliezer:  There's an important distinction between the direction in which people usually argue with you, and the direction from which Reality is allowed to yell "Gotcha!"  I wish my future self had kept this more in mind, when arguing with Robin Hanson about how well AI architectures were liable to generalize and scale without a ton of domain-specific algorithmic tinkering for every field of knowledge.  I mean, in principle what I was arguing for was various lower bounds on performance, but I sure could have emphasized more loudly that those were lower bounds - well, I did emphasize the lower-bound part, but - from the way I felt when AlphaGo and AlphaZero and GPT-2 and GPT-3 showed up, I think I must've sorta forgot that myself.

Moravec:  Anyways, if we say that I might be up to three orders of magnitude off and phrase it as 2000-2020, do you agree with my prediction then?

Eliezer:  No, I think you're just... arguing about the wrong facts, in a way that seems to be unglued from most tracks Reality might follow so far as I currently know?  On my view, creating AGI is strongly dependent on how much knowledge you have about how to do it, in a way which almost entirely obviates the relevance of arguments from human biology?

Like, human biology tells us a single not-very-useful data point about how much computing power evolutionary biology needs in order to build a general intelligence, using very alien methods to our own.  Then, very separately, there's the constantly changing level of how much cognitive science, neuroscience, and computer science our own civilization knows.  We don't know how much computing power is required for AGI for any level on that constantly changing graph, and biology doesn't tell us.  All we know is that the hardware requirements for AGI must be dropping by the year, because the knowledge of how to create AI is something that only increases over time.

At some point the moving lines for "decreasing hardware required" and "increasing hardware available" will cross over, which lets us predict that AGI gets built at some point.  But we don't know how to graph two key functions needed to predict that date.  You would seem to be committing the classic fallacy of searching for your keys under the streetlight where the visibility is better.  You know how to estimate how many floating-point operations per second the retina could effectively be using, but this is not the number you need to predict the outcome you want to predict.  You need a graph of human knowledge of computer science over time, and then a graph of how much computer science requires how much hardware to build AI, and neither of these graphs are available.

It doesn't matter how many chapters your book spends considering the continuation of Moore's Law or computation in the retina, and I'm sorry if it seems rude of me in some sense to just dismiss the relevance of all the hard work you put into arguing it.  But you're arguing the wrong facts to get to the conclusion, so all your hard work is for naught.
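
(The crossover argument above can be written as a tiny model: AGI arrives in the first year when the hardware available exceeds the hardware required at that year's level of know-how.  In the sketch below, the Moore's-Law-style availability curve is the only even roughly extrapolatable input; the 1988 requirement and the rate at which know-how shrinks it are made-up placeholders, included only to show how much the predicted date swings when the two unknown curves change.)

```python
def hardware_available(year):
    """The one extrapolatable curve: top-machine ops/sec, assumed to double yearly."""
    return 1e13 * 2 ** (year - 2008)

def hardware_required(year, req_in_1988, halving_years):
    """Unknown in real life: requirements fall as the field learns more (assumed form)."""
    return req_in_1988 * 0.5 ** ((year - 1988) / halving_years)

def crossover_year(req_in_1988, halving_years):
    year = 1988
    while hardware_available(year) < hardware_required(year, req_in_1988, halving_years):
        year += 1
    return year

# Three equally arbitrary guesses for the two unknowns give very different dates:
for req_in_1988, halving in [(1e16, 2), (1e20, 5), (1e24, 20)]:
    print(f"1988 requirement {req_in_1988:.0e} ops/sec, halving every {halving:>2} yrs"
          f" -> AGI around {crossover_year(req_in_1988, halving)}")
```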

Humbali:  Now it seems to me that I must chide you for being too dismissive of Moravec's argument.  Fine, yes, Moravec has not established with logical certainty that strong AI must arrive at the point where top supercomputers match the human brain's 10 trillion operations per second.  But has he not established a reference class, the sort of base rate that good and virtuous superforecasters, unlike yourself, go looking for when they want to anchor their estimate about some future outcome?  Has he not, indeed, established the sort of argument which says that if top supercomputers can do only ten million operations per second, we're not very likely to get AGI earlier than that, and if top supercomputers can do ten quintillion operations per second*, we're unlikely not to already have AGI?

(*) In 2021 terms, 10 TPU v4 pods.

Eliezer:  With ranges that wide, it'd be more likely and less amusing to hit somewhere inside it by coincidence.  But I still think this whole line of thoughts is just off-base, and that you, Humbali, have not truly grasped the concept of a virtuous superforecaster or how they go looking for reference classes and base rates.

Humbali:  I frankly think you're just being unvirtuous.  Maybe you have some special model of AGI which claims that it'll arrive in a different year or be arrived at by some very different pathway.  But is not Moravec's estimate a sort of base rate which, to the extent you are properly and virtuously uncertain of your own models, you ought to regress in your own probability distributions over AI timelines?  As you become more uncertain about the exact amounts of knowledge required and what knowledge we'll have when, shouldn't you have an uncertain distribution about AGI arrival times that centers around Moravec's base-rate prediction of 2010?

For you to reject this anchor seems to reveal a grave lack of humility, since you must be very certain of whatever alternate estimation methods you are using in order to throw away this base-rate entirely.

Eliezer:  Like I said, I think you've just failed to grasp the true way of a virtuous superforecaster.  Thinking a lot about Moravec's so-called 'base rate' is just making you, in some sense, stupider; you need to cast your thoughts loose from there and try to navigate a wilder and less tamed space of possibilities, until they begin to gel and coalesce into narrower streams of probability.  Which, for AGI, they probably won't do until we're quite close to AGI, and start to guess correctly how AGI will get built; for it is easier to predict an eventual global pandemic than to say it will start in November of 2019.  Even in October of 2019 this cannot be done.

Humbali:  Then all this uncertainty must somehow be quantified, if you are to be a virtuous Bayesian; and again, for lack of anything better, the resulting distribution should center on Moravec's base-rate estimate of 2010.

Eliezer:  No, that calculation is just basically not relevant here; and thinking about it is making you stupider, as your mind flails in the trackless wilderness grasping onto unanchored air.  Things must be 'sufficiently similar' to each other, in some sense, for us to get a base rate on one thing by looking at another thing.  Humans making an AGI is just too dissimilar to evolutionary biology making a human brain for us to anchor 'how much computing power at the time it happens' from one to the other.  It's not the droid we're looking for; and your attempt to build an inescapable epistemological trap about virtuously calling that a 'base rate' is not the Way.

Imaginary Moravec:  If I can step back in here, I don't think my calculation is zero evidence?  What we know from evolutionary biology is that a blind alien god with zero foresight accidentally mutated a chimp brain into a general intelligence.  I don't want to knock biology's work too much, there's some impressive stuff in the retina, and the retina is just the part of the brain which is in some sense easiest to understand.  But surely there's a very reasonable argument that 10 trillion ops/sec is about the amount of computation that evolutionary biology needed; and since evolution is stupid, when we ourselves have that much computation, it shouldn't be that hard to figure out how to configure it.

Eliezer:  If that was true, the same theory predicts that our current supercomputers should be doing a better job of matching the agility and vision of spiders.  When at some point there's enough hardware that we figure out how to put it together into AGI, we could be doing it with less hardware than a human; we could be doing it with more; and we can't even say that these two possibilities are around equally probable such that our probability distribution should have its median around 2010.  Your number is so bad and obtained by such bad means that we should just throw it out of our thinking and start over.

Humbali:  This last line of reasoning seems to me to be particularly ludicrous, like you're just throwing away the only base rate we have in favor of a confident assertion of our somehow being more uncertain than that.

Eliezer:  Yeah, well, sorry to put it bluntly, Humbali, but you have not yet figured out how to turn your own computing power into intelligence.

- 1999 -

Luke Muehlhauser reading a previous draft of this (only sounding much more serious than this, because Luke Muehlhauser):  You know, there was this certain teenaged futurist who made some of his own predictions about AI timelines -

Eliezer:  I'd really rather not argue from that as a case in point.  I dislike people who screw up something themselves, and then argue like nobody else could possibly be more competent than they were.  I dislike even more people who change their mind about something when they turn 22, and then, for the rest of their lives, go around acting like they are now Very Mature Serious Adults who believe the thing that a Very Mature Serious Adult believes, so if you disagree with them about that thing they started believing at age 22, you must just need to wait to grow out of your extended childhood.

Luke Muehlhauser (still being paraphrased):  It seems like it ought to be acknowledged somehow.

Eliezer:  That's fair, yeah, I can see how someone might think it was relevant.  I just dislike how it potentially creates the appearance of trying to slyly sneak in an Argument From Reckless Youth that I regard as not only invalid but also incredibly distasteful.  You don't get to screw up yourself and then use that as an argument about how nobody else can do better.

Humbali:  Uh, what's the actual drama being subtweeted here?

Eliezer:  A certain teenaged futurist, who, for example, said in 1999, "The most realistic estimate for a seed AI transcendence is 2020; nanowar, before 2015."

Humbali:  This young man must surely be possessed of some very deep character defect, which I worry will prove to be of the sort that people almost never truly outgrow except in the rarest cases.  Why, he's not even putting a probability distribution over his mad soothsaying - how blatantly absurd can a person get?

Eliezer:  Dear child ignorant of history, your complaint is far too anachronistic.  This is 1999 we're talking about here; almost nobody is putting probability distributions on things, that element of your later subculture has not yet been introduced.  Eliezer-2002 hasn't been sent a copy of "Judgment Under Uncertainty" by Emil Gilliam.  Eliezer-2006 hasn't put his draft online for "Cognitive biases potentially affecting judgment of global risks".  The Sequences won't start until another year after that.  How would the forerunners of effective altruism in 1999 know about putting probability distributions on forecasts?  I haven't told them to do that yet!  We can give historical personages credit when they seem to somehow end up doing better than their surroundings would suggest; it is unreasonable to hold them to modern standards, or expect them to have finished refining those modern standards by the age of nineteen.

Though there's also a more subtle lesson you could learn, about how this young man turned out to still have a promising future ahead of him; which he retained at least in part by having a deliberate contempt for pretended dignity, allowing him to be plainly and simply wrong in a way that he noticed, without his having twisted himself up to avoid a prospect of embarrassment.  Instead of, for example, his evading such plain falsification by having dignifiedly wide Very Serious probability distributions centered on the same medians produced by the same basically bad thought processes.

But that was too much of a digression, when I tried to write it up; maybe later I'll post something separately.

- 2004 or thereabouts -

Ray Kurzweil in 2001:  I have calculated that matching the intelligence of a human brain requires 2 * 10^16 ops/sec* and this will become available in a $1000 computer in 2023.  26 years after that, in 2049, a $1000 computer will have ten billion times more computing power than a human brain; and in 2059, that computer will cost one cent.

(*) In 2021 terms, about a fiftieth of a TPU v4 pod.
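
(The growth rates implied by these figures can be backed out directly; the arithmetic below uses only the dates and factors quoted above.)

```python
import math

# Doubling time implied by "ten billion times more computing power per $1000" in 26 years.
factor = 1e10
years = 2049 - 2023
doubling_time = years / math.log2(factor)
print(f"implied price-performance doubling time: ~{doubling_time:.2f} years")   # ~0.78

# Halving time implied by that capability falling from $1000 to one cent over ten years.
price_ratio = 1000 / 0.01
halving_time = (2059 - 2049) / math.log2(price_ratio)
print(f"implied price halving time for fixed capability: ~{halving_time:.2f} years")   # ~0.60
```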

Actual real-life Eliezer in Q&A, when Kurzweil says the same thing in a 2004(?) talk:  It seems weird to me to forecast the arrival of "human-equivalent" AI, and then expect Moore's Law to just continue on the same track past that point for thirty years.  Once we've got, in your terms, human-equivalent AIs, even if we don't go beyond that in terms of intelligence, Moore's Law will start speeding them up.  Once AIs are thinking thousands of times faster than we are, wouldn't that tend to break down the graph of Moore's Law with respect to the objective wall-clock time of the Earth going around the Sun?  Because AIs would be able to spend thousands of subjective years working on new computing technology?

Actual Ray Kurzweil:  The fact that AIs can do faster research is exactly what will enable Moore's Law to continue on track.

Actual Eliezer (out loud):  Thank you for answering my question.

Actual Eliezer (internally):  Moore's Law is a phenomenon produced by human cognition and the fact that human civilization runs off human cognition.  You can't expect the surface phenomenon to continue unchanged after the deep causal phenomenon underlying it starts changing.  What kind of bizarre worship of graphs would lead somebody to think that the graphs were the primary phenomenon and would continue steady and unchanged when the forces underlying them changed massively?  I was hoping he'd be less nutty in person than in the book, but oh well.

- 2006 or thereabouts - 

Somebody on the Internet:  I have calculated the number of computer operations used by evolution to evolve the human brain - searching through organisms with increasing brain size  - by adding up all the computations that were done by any brains before modern humans appeared.  It comes out to 10^43 computer operations.*  AGI isn't coming any time soon!

(*)  I forget the exact figure.  It was 10^40-something.
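
(For the shape of an estimate like this, not its actual inputs: the total is roughly the seconds of evolutionary history during which brains existed, times the average number of brain-bearing organisms alive, times the ops/sec of an average such brain.  Every number in the sketch below is a made-up placeholder chosen only to land in the ten-to-the-forty-something ballpark; it does not reproduce the original calculation.)

```python
# Shape of an "ops spent by evolution on brains" estimate; all inputs are placeholders.
seconds_with_brains = 1e9 * 3.15e7       # ~1 billion years of nervous systems, in seconds
avg_brainy_organisms_alive = 1e20        # mostly tiny ones - nematodes, insects (assumed)
avg_ops_per_brain_per_s = 1e6            # ops/sec of a tiny average brain (assumed)

total_ops = seconds_with_brains * avg_brainy_organisms_alive * avg_ops_per_brain_per_s
print(f"total ops evolution spent running brains: ~{total_ops:.0e}")    # ~3e+42
```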

Eliezer, sighing:  Another day, another biology-inspired timelines forecast.  This trick didn't work when Moravec tried it, it's not going to work while Ray Kurzweil is trying it, and it's not going to work when you try it either.  It also didn't work when a certain teenager tried it, but please entirely ignore that part; you're at least allowed to do better than him.

Imaginary Somebody:  Moravec's prediction failed because he assumed that you could just magically take something with around as much hardware as the human brain and, poof, it would start being around that intelligent -

Eliezer:  Yes, that is one way of viewing an invalidity in that argument.  Though you do Moravec a disservice if you imagine that he could only argue "It will magically emerge", and could not give the more plausible-sounding argument "Human engineers are not that incompetent compared to biology, and will probably figure it out without more than one or two orders of magnitude of extra overhead."

Somebody:  But I am cleverer, for I have calculated the number of computing operations that was used to create and design biological intelligence, not just the number of computing operations required to run it once created!

Eliezer:  And yet, because your reasoning contains the word "biological", it is just as invalid and unhelpful as Moravec's original prediction.

Somebody:  I don't see why you dismiss my biological argument about timelines on the basis of Moravec having been wrong.  He made one basic mistake - neglecting to take into account the cost to generate intelligence, not just to run it.  I have corrected this mistake, and now my own effort to do biologically inspired timeline forecasting should work fine, and must be evaluated on its own merits, de novo.

Eliezer:  It is true indeed that sometimes a line of inference is doing just one thing wrong, and works fine after being corrected.  And because this is true, it is often indeed wise to reevaluate new arguments on their own merits, if that is how they present themselves.  One may not take the past failure of a different argument or three, and try to hang it onto the new argument like an inescapable iron ball chained to its leg.  It might be the cause for defeasible skepticism, but not invincible skepticism.

That said, on my view, you are making a nearly identical mistake as Moravec, and so his failure remains relevant to the question of whether you are engaging in a kind of thought that binds well to Reality.

Somebody:  And that mistake is just mentioning the word "biology"?

Eliezer:  The problem is that the resource gets consumed differently, so base-rate arguments from resource consumption end up utterly unhelpful in real life.  The human brain consumes around 20 watts of power.  Can we thereby conclude that an AGI should consume around 20 watts of power, and that, when technology advances to the point of being able to supply around 20 watts of power to computers, we'll get AGI?

Somebody:  That's absurd, of course.  So, what, you compare my argument to an absurd argument, and from this dismiss it?

Eliezer:  I'm saying that Moravec's "argument from comparable resource consumption" must be in general invalid, because it Proves Too Much.  If it's in general valid to reason about comparable resource consumption, then it should be equally valid to reason from energy consumed as from computation consumed, and pick energy consumption instead to call the basis of your median estimate.

You say that AIs consume energy in a very different way from brains?  Well, they'll also consume computations in a very different way from brains!  The only difference between these two cases is that you know something about how humans eat food and break it down in their stomachs and convert it into ATP that gets consumed by neurons to pump ions back out of dendrites and axons, while computer chips consume electricity whose flow gets interrupted by transistors to transmit information.  Since you know at least something about how AGIs and humans consume energy, you can see that the consumption is so vastly different as to obviate all comparisons entirely.

You are ignorant of how the brain consumes computation, you are ignorant of how the first AGIs built would consume computation, but "an unknown key does not open an unknown lock" and these two ignorant distributions should not assert much internal correlation between them.

Even without knowing the specifics of how brains and future AGIs consume computing operations, you ought to be able to reason abstractly about a directional update that you would make, if you knew any specifics instead of none.  If you did know how both kinds of entity consumed computations, if you knew about specific machinery for human brains, and specific machinery for AGIs, you'd then be able to see the enormous vast specific differences between them, and go, "Wow, what a futile resource-consumption comparison to try to use for forecasting."

(Though I say this without much hope; I have not had very much luck in telling people about predictable directional updates they would make, if they knew something instead of nothing about a subject.  I think it's probably too abstract for most people to feel in their gut, or something like that, so their brain ignores it and moves on in the end.  I have had life experience with learning more about a thing, updating, and then going to myself, "Wow, I should've been able to predict in retrospect that learning almost any specific fact would move my opinions in that same direction."  But I worry this is not a common experience, for it involves a real experience of discovery, and preferably more than one to get the generalization.)

Somebody:  All of that seems irrelevant to my novel and different argument.  I am not foolishly estimating the resources consumed by a single brain; I'm estimating the resources consumed by evolutionary biology to invent brains!

Eliezer:  And the humans wracking their own brains and inventing new AI program architectures and deploying those AI program architectures to themselves learn, will consume computations so utterly differently from evolution that there is no point comparing those consumptions of resources.  That is the flaw that you share exactly with Moravec, and that is why I say the same of both of you, "This is a kind of thinking that fails to bind upon reality, it doesn't work in real life."  I don't care how much painstaking work you put into your estimate of 10^43 computations performed by biology.  It's just not a relevant fact.

Humbali:  But surely this estimate of 10^43 cumulative operations can at least be used to establish a base rate for anchoring our -

Eliezer:  Oh, for god's sake, shut up.  At least Somebody is only wrong on the object level, and isn't trying to build an inescapable epistemological trap by which his ideas must still hang in the air like an eternal stench even after they've been counterargued.  Isn't 'but muh base rates' what your viewpoint would've also said about Moravec's 2010 estimate, back when that number still looked plausible?

Humbali:  Of course it is evident to me now that my youthful enthusiasm was mistaken; obviously I tried to estimate the wrong figure.  As Somebody argues, we should have been estimating the biological computations used to design human intelligence, not the computations used to run it.

I see, now, that I was using the wrong figure as my base rate, leading my base rate to be wildly wrong, and even irrelevant; but now that I've seen this, the clear error in my previous reasoning, I have a new base rate.  This doesn't seem obviously to me likely to contain the same kind of wildly invalidating enormous error as before.  What, is Reality just going to yell "Gotcha!" at me again?  And even the prospect of some new unknown error, which is just as likely to be in either possible direction, implies only that we should widen our credible intervals while keeping them centered on a median of 10^43 operations -

Eliezer:  Please stop.  This trick just never works, at all, deal with it and get over it.  Every second of attention that you pay to the 10^43 number is making you stupider.  You might as well reason that 20 watts is a base rate for how much energy the first generally intelligent computing machine should consume.

- 2020 -

OpenPhil:  We have commissioned a Very Serious report on a biologically inspired estimate of how much computation will be required to achieve Artificial General Intelligence, for purposes of forecasting an AGI timeline.  (Summary of report.)  (Full draft of report.)  Our leadership takes this report Very Seriously.

Eliezer:  Oh, hi there, new kids.  Your grandpa is feeling kind of tired now and can't debate this again with as much energy as when he was younger.

Imaginary OpenPhil:  You're not that much older than us.

Eliezer:  Not by biological wall-clock time, I suppose, but -

OpenPhil:  You think thousands of times faster than us?

Eliezer:  I wasn't going to say it if you weren't.

OpenPhil:  We object to your assertion on the grounds that it is false.

Eliezer:  I was actually going to say, you might be underestimating how long I've been walking this endless battlefield because I started really quite young.

I mean, sure, I didn't read Moravec's Mind Children when it came out in 1988.  I only read it four years later, when I was twelve.  And sure, I didn't immediately afterwards start writing online about Moore's Law and strong AI; I did not immediately contribute my own salvos and sallies to the war; I was not yet a noticed voice in the debate.  I only got started on that at age sixteen.  I'd like to be able to say that in 1999 I was just a random teenager being reckless, but in fact I was already being invited to dignified online colloquia about the "Singularity" and mentioned in printed books; when I was being wrong back then I was already doing so in the capacity of a minor public intellectual on the topic.

This is, as I understand normie ways, relatively young, and is probably worth an extra decade tacked onto my biological age; you should imagine me as being 52 instead of 42 as I write this, with a correspondingly greater number of visible gray hairs.

A few years later - though still before your time - there was the Accelerating Change Foundation, and Ray Kurzweil spending literally millions of dollars to push Moore's Law graphs of technological progress as the central story about the future.  I mean, I'm sure that a few million dollars sounds like peanuts to OpenPhil, but if your own annual budget was a hundred thousand dollars or so, that's a hell of a megaphone to compete with.

If you are currently able to conceptualize the Future as being about something other than nicely measurable metrics of progress in various tech industries, being projected out to where they will inevitably deliver us nice things - that's at least partially because of a battle fought years earlier, in which I was a primary fighter, creating a conceptual atmosphere you now take for granted.  A mental world where threshold levels of AI ability are considered potentially interesting and transformative - rather than milestones of new technological luxuries to be checked off on an otherwise invariant graph of Moore's Laws as they deliver flying cars, space travel, lifespan-extension escape velocity, and other such goodies on an equal level of interestingness.  I have earned at least a little right to call myself your grandpa.

And that kind of experience has a sort of compounded interest, where, once you've lived something yourself and participated in it, you can learn more from reading other histories about it.  The histories become more real to you once you've fought your own battles.  The fact that I've lived through timeline errors in person gives me a sense of how it actually feels to be around at the time, watching people sincerely argue Very Serious erroneous forecasts.  That experience lets me really and actually update on the history of the earlier mistaken timelines from before I was around; instead of the histories just seeming like a kind of fictional novel to read about, disconnected from reality and not happening to real people.

And now, indeed, I'm feeling a bit old and tired for reading yet another report like yours in full attentive detail.  Does it by any chance say that AGI is due in about 30 years from now?

OpenPhil:  Our report has very wide credible intervals around both sides of its median, as we analyze the problem from a number of different angles and show how they lead to different estimates -

Eliezer:  Unfortunately, the thing about figuring out five different ways to guess the effective IQ of the smartest people on Earth, and having three different ways to estimate the minimum IQ to destroy lesser systems such that you could extrapolate a minimum IQ to destroy the whole Earth, and putting wide credible intervals around all those numbers, and combining and mixing the probability distributions to get a new probability distribution, is that, at the end of all that, you are still left with a load of nonsense.  Doing a fundamentally wrong thing in several different ways will not save you, though I suppose if you spread your bets widely enough, one of them may be right by coincidence.

So does the report by any chance say - with however many caveats and however elaborate the probabilistic methods and alternative analyses - that AGI is probably due in about 30 years from now?

OpenPhil:  Yes, in fact, our 2020 report's median estimate is 2050; though, again, with very wide credible intervals around both sides.  Is that number significant?

Eliezer:  It's a law generalized by Charles Platt, that any AI forecast will put strong AI thirty years out from when the forecast is made.  Vernor Vinge referenced it in the body of his famous 1993 NASA speech, whose abstract begins, "Within thirty years, we will have the technological means to create superhuman intelligence.  Shortly after, the human era will be ended."

After I was old enough to be more skeptical of timelines myself, I used to wonder how Vinge had pulled out the "within thirty years" part.  This may have gone over my head at the time, but rereading again today, I conjecture Vinge may have chosen the headline figure of thirty years as a deliberately self-deprecating reference to Charles Platt's generalization about such forecasts always being thirty years from the time they're made, which Vinge explicitly cites later in the speech.

Or to put it another way:  I conjecture that to the audience of the time, already familiar with some previously-made forecasts about strong AI, the impact of the abstract is meant to be, "Never mind predicting strong AI in thirty years, you should be predicting superintelligence in thirty years, which matters a lot more."  But the minds of authors are scarcely more knowable than the Future, if they have not explicitly told us what they were thinking; so you'd have to ask Professor Vinge, and hope he remembers what he was thinking back then.

OpenPhil:  Superintelligence before 2023, huh?  I suppose Vinge still has two years left to go before that's falsified.

Eliezer:  Also in the body of the speech, Vinge says, "I'll be surprised if this event occurs before 2005 or after 2030," which sounds like a more serious and sensible way of phrasing an estimate.  I think that should supersede the probably Platt-inspired headline figure for what we think of as Vinge's 1993 prediction.  The jury's still out on whether Vinge will have made a good call.

Oh, and sorry if grandpa is boring you with all this history from the times before you were around.  I mean, I didn't actually attend Vinge's famous NASA speech when it happened, what with being thirteen years old at the time, but I sure did read it later.  Once it was digitized and put online, it was all over the Internet.  Well, all over certain parts of the Internet, anyways.  Which nerdy parts constituted a much larger fraction of the whole, back when the World Wide Web was just starting to take off among early adopters.

But, yeah, the new kids showing up with some graphs of Moore's Law and calculations about biology and an earnest estimate of strong AI being thirty years out from the time of the report is, uh, well, it's... historically precedented.

OpenPhil:  That part about Charles Platt's generalization is interesting, but just because we unwittingly chose literally exactly the median that Platt predicted people would always choose in consistent error, that doesn't justify dismissing our work, right?  We could have used a completely valid method of estimation which would have pointed to 2050 no matter which year it was tried in, and, by sheer coincidence, have first written that up in 2020.  In fact, we try to show in the report that the same methodology, evaluated in earlier years, would also have pointed to around 2050 -

Eliezer:  Look, people keep trying this.  It's never worked.  It's never going to work.  2 years before the end of the world, there'll be another published biologically inspired estimate showing that AGI is 30 years away and it will be exactly as informative then as it is now.  I'd love to know the timelines too, but you're not going to get the answer you want until right before the end of the world, and maybe not even then unless you're paying very close attention.  Timing this stuff is just plain hard.

OpenPhil:  But our report is different, and our methodology for biologically inspired estimates is wiser and less naive than those who came before.

Eliezer:  That's what the last guy said, but go on.

OpenPhil:  First, we carefully estimate a range of possible figures for the equivalent of neural-network parameters needed to emulate a human brain.  Then, we estimate how many examples would be required to train a neural net with that many parameters.  Then, we estimate the total computational cost of that many training runs.  Moore's Law then gives us 2050 as our median time estimate, given what we think are the most likely underlying assumptions, though we do analyze it several different ways.
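
To make the shape of that methodology concrete, here is a minimal sketch of the kind of arithmetic being described, with made-up placeholder numbers throughout - the parameter count, data scaling rule, budget, and hardware figures below are illustrative assumptions, not the report's actual estimates:

```python
# Toy sketch of a biology-anchored compute extrapolation.
# Every number below is an illustrative placeholder, not a figure from the report.
import math

params = 1e14                    # assumed brain-equivalent parameter count (placeholder)
examples = 20 * params           # assumed number of training examples (placeholder scaling rule)
flop_per_param_per_example = 6   # rough cost of one forward+backward pass, per parameter
training_flop = flop_per_param_per_example * params * examples

budget_dollars = 1e9             # assumed spending on a single training run (placeholder)
flop_per_dollar_2020 = 1e17      # assumed 2020 hardware price-performance (placeholder)
doubling_time_years = 2.5        # assumed Moore's-Law-style doubling time (placeholder)

affordable_flop_2020 = budget_dollars * flop_per_dollar_2020
years_until_affordable = doubling_time_years * math.log2(training_flop / affordable_flop_2020)
print(2020 + math.ceil(years_until_affordable))  # lands mid-century under these placeholders
```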

Eliezer:  This is almost exactly what the last guy tried, except you're using network parameters instead of computing ops, and deep learning training runs instead of biological evolution.

OpenPhil:  Yes, so we've corrected his mistake of estimating the wrong biological quantity and now we're good, right?

Eliezer:  That's what the last guy thought he'd done about Moravec's mistaken estimation target.  And neither he nor Moravec would have made much headway on their underlying mistakes, by doing a probabilistic analysis of that same wrong question from multiple angles.

OpenPhil:  Look, sometimes more than one person makes a mistake, over historical time.  It doesn't mean nobody can ever get it right.  You of all people should agree.

Eliezer:  I do so agree, but that doesn't mean I agree you've fixed the mistake.  I think the methodology itself is bad, not just its choice of which biological parameter to estimate.  Look, do you understand why the evolution-inspired estimate of 10^43 ops was completely ludicrous; and the claim that it was equally likely to be mistaken in either direction, even more ludicrous?

OpenPhil:  Because AGI isn't like biology, and in particular, will be trained using gradient descent instead of evolutionary search, which is cheaper.  We do note inside our report that this is a key assumption, and that, if it fails, the estimate might be correspondingly wrong -

Eliezer:  But then you claim that mistakes are equally likely in both directions and so your unstable estimate is a good median.  Can you see why the previous evolutionary estimate of 10^43 cumulative ops was not, in fact, equally likely to be wrong in either direction?  That it was, predictably, a directional overestimate?

OpenPhil:  Well, search by evolutionary biology is more costly than training by gradient descent, so in hindsight, it was an overestimate.  Are you claiming this was predictable in foresight instead of hindsight?

Eliezer:  I'm claiming that, at the time, I snorted and tossed Somebody's figure out the window while thinking it was ridiculously huge and absurd, yes.

OpenPhil:  Because you'd already foreseen in 2006 that gradient descent would be the method of choice for training future AIs, rather than genetic algorithms?

Eliezer:  Ha!  No.  Because it was an insanely costly hypothetical approach whose main point of appeal, to the sort of person who believed in it, was that it didn't require having any idea whatsoever of what you were doing or how to design a mind.

OpenPhil:  Suppose one were to reply:  "Somebody" didn't know better-than-evolutionary methods for designing a mind, just as we currently don't know better methods than gradient descent for designing a mind; and hence Somebody's estimate was the best estimate at the time, just as ours is the best estimate now?

Eliezer:  Unless you were one of a small handful of leading neural-net researchers who knew a few years ahead of the world where scientific progress was heading - who knew a Thielian 'secret' before finding evidence strong enough to convince the less foresightful - you couldn't have called the jump specifically to gradient descent rather than any other technique.  "I don't know any more computationally efficient way to produce a mind than re-evolving the cognitive history of all life on Earth" transitioning over time to "I don't know any more computationally efficient way to produce a mind than gradient descent over entire brain-sized models" is not predictable in the specific part about "gradient descent" - not unless you know a Thielian secret.

But knowledge is a ratchet that usually only turns one way, so it's predictable that the current story changes to something else over future time, in a net expected direction.  Let's consider the technique currently known as mixture-of-experts (MoE), for training smaller nets in pieces and muxing them together.  It's not my mainline prediction that MoE actually goes anywhere - if I thought MoE was actually promising, I wouldn't call attention to it, of course!  I don't want to make timelines shorter; that is not a service to Earth, nor a good sacrifice in the cause of winning an Internet argument.

But if I'm wrong and MoE is not a dead end, that technique serves as an easily-visualizable case in point.  If that's a fruitful avenue, the technique currently known as "mixture-of-experts" will mature further over time, and future deep learning engineers will be able to further perfect the art of training slices of brains using gradient descent and fewer examples, instead of training entire brains using gradient descent and lots of examples.

Or, more likely, it's not MoE that forms the next little trend.  But there is going to be something, especially if we're sitting around waiting until 2050.  Three decades is enough time for some big paradigm shifts in an intensively researched field.  Maybe we'd end up using neural net tech very similar to today's tech if the world ends in 2025, but in that case, of course, your prediction must have failed somewhere else.

The three components of AGI arrival times are available hardware, which increases over time in an easily graphed way; available knowledge, which increases over time in a way that's much harder to graph; and hardware required at a given level of specific knowledge, a huge multidimensional unknown background parameter.  The fact that you have no idea how to graph the increase of knowledge - or measure it in any way that is less completely silly than "number of science papers published" or whatever such gameable metric - doesn't change the point that this is a predictable fact about the future; there will be more knowledge later, the more time that passes, and that will directionally change the expense of the currently least expensive way of doing things.

OpenPhil:  We did already consider that and try to take it into account: our model already includes a parameter for how algorithmic progress reduces hardware requirements.  It's not easy to graph as exactly as Moore's Law, as you say, but our best-guess estimate is that compute costs halve every 2-3 years.
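
As a concrete picture of that crossing-lines framing - a rising affordable-compute line meeting a falling required-compute line - here is a minimal sketch, again with placeholder inputs rather than the report's figures; note how far the output year moves when the assumed halving time or the assumed required compute is nudged:

```python
# Sketch of the crossing point between a rising "affordable compute" line and a
# falling "compute required" line.  All inputs are illustrative placeholders.
import math

def crossing_year(required_flop_2020, affordable_flop_2020=1e26,
                  hardware_doubling_years=2.5, algo_halving_years=2.5):
    """Year when growing affordable compute meets shrinking required compute."""
    gap_doublings = math.log2(required_flop_2020 / affordable_flop_2020)
    # Each year, hardware progress and algorithmic progress together close this many doublings.
    closing_rate = 1 / hardware_doubling_years + 1 / algo_halving_years
    return 2020 + gap_doublings / closing_rate

print(crossing_year(1e30))                          # baseline placeholders -> ~2037
print(crossing_year(1e30, algo_halving_years=1.5))  # faster algorithmic progress -> ~2032
print(crossing_year(1e33))                          # required compute off by 1000x -> ~2049
```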

Eliezer:  Oh, nice.  I was wondering what sort of tunable underdetermined parameters enabled your model to nail the psychologically overdetermined final figure of '30 years' so exactly.

OpenPhil:  Eliezer.

Eliezer:  Think of this in an economic sense: people don't buy where goods are most expensive and delivered latest, they buy where goods are cheapest and delivered earliest.  Deep learning researchers are not like an inanimate chunk of ice tumbling through intergalactic space in its unchanging direction of previous motion; they are economic agents who look around for ways to destroy the world faster and more cheaply than the way that you imagine as the default.  They are more eager than you are to think of more creative paths to get to the next milestone faster.

OpenPhil:  Isn't this desire for cheaper methods exactly what our model already accounts for, by modeling algorithmic progress?

Eliezer:  The makers of AGI aren't going to be doing 10,000,000,000,000 rounds of gradient descent, on entire brain-sized 300,000,000,000,000-parameter models, algorithmically faster than today.  They're going to get to AGI via some route that you don't know how to take, at least if it happens in 2040.  If it happens in 2025, it may be via a route that some modern researchers do know how to take, but in this case, of course, your model was also wrong.

They're not going to be taking your default-imagined approach algorithmically faster, they're going to be taking an algorithmically different approach that eats computing power in a different way than you imagine it being consumed.

OpenPhil:  Shouldn't that just be folded into our estimate of how the computation required to accomplish a fixed task decreases by half every 2-3 years due to better algorithms?

Eliezer:  Let's backtest this viewpoint against the previous history of computer science, and see what it seems to assert should have been possible.

For reference, recall that in 2006, Hinton and Salakhutdinov were just starting to publish that, by training multiple layers of Restricted Boltzmann machines and then unrolling them into a "deep" neural network, you could get an initialization for the network weights that would avoid the problem of vanishing and exploding gradients and activations.  At least so long as you didn't try to stack too many layers, like a dozen layers or something ridiculous like that.  This being the point that kicked off the entire deep-learning revolution.

Your model apparently suggests that we have gotten around 50 times more efficient at turning computation into intelligence since that time; so, we should be able to replicate any modern feat of deep learning performed in 2021, using techniques from before deep learning and around fifty times as much computing power.
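
(For what it's worth, the rough arithmetic behind that "around 50 times" figure, taking the quoted 2-3 year halving time at face value over the fifteen years from 2006 to 2021:)

```python
# Fifteen years of algorithmic progress (2006 -> 2021) at the quoted 2-3 year halving time.
for halving_years in (3.0, 2.5, 2.0):
    print(halving_years, round(2 ** (15 / halving_years)))
# 3.0 -> 32x, 2.5 -> 64x, 2.0 -> 181x; the "around 50x" figure corresponds to a
# halving time of roughly 2.7 years, near the middle of the stated range.
```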

OpenPhil:  No, that's totally not what our viewpoint says when you backfit it to past reality.  Our model does a great job of retrodicting past reality.

Eliezer:  How so?

OpenPhil:  <Eliezer cannot predict what they will say here.>

Eliezer:  I'm not convinced by this argument. 

OpenPhil:  We didn't think you would be; you're sort of predictable that way.

Eliezer:  Well, yes, if I'd predicted I'd update from hearing your argument, I would've updated already.  I may not be a real Bayesian but I'm not that incoherent.

But I can guess in advance at the outline of my reply, and my guess is this:

"Look, when people come to me with models claiming the future is predictable enough for timing, I find that their viewpoints seem to me like they would have made garbage predictions if I actually had to operate them in the past without benefit of hindsight.  Sure, with benefit of hindsight, you can look over a thousand possible trends and invent rules of prediction and event timing that nobody in the past actually spotlighted then, and claim that things happened on trend.  I was around at the time and I do not recall people actually predicting the shape of AI in the year 2020 in advance.  I don't think they were just being stupid either.

"In a conceivable future where people are still alive and reasoning as modern humans do in 2040, somebody will no doubt look back and claim that everything happened on trend since 2020; but which trend the hindsighter will pick out is not predictable to us in advance.

"It may be, of course, that I simply don't understand how to operate your viewpoint, nor how to apply it to the past or present or future; and that yours is a sort of viewpoint which indeed permits saying only one thing, and not another; and that this viewpoint would have predicted the past wonderfully, even without any benefit of hindsight.  But there is also that less charitable viewpoint which suspects that somebody's theory of 'A coinflip always comes up heads on occasions X' contains some informal parameters which can be argued about which occasions exactly 'X' describes, and that the operation of these informal parameters is a bit influenced by one's knowledge of whether a past coinflip actually came up heads or not.

"As somebody who doesn't start from the assumption that your viewpoint is a good fit to the past, I still don't see how a good fit to the past could've been extracted from it without benefit of hindsight."

OpenPhil:  That's a pretty general counterargument, and like any pretty general counterargument it's a blade you should try turning against yourself.  Why doesn't your own viewpoint horribly mispredict the past, and say that all estimates of AGI arrival times are predictably net underestimates?  If we imagine trying to operate your own viewpoint in 1988, we imagine going to Moravec and saying, "Your estimate of how much computing power it takes to match a human brain is predictably an overestimate, because engineers will find a better way to do it than biology, so we should expect AGI sooner than 2010."

Eliezer:  I did tell Imaginary Moravec that his estimate of the minimum computation required for human-equivalent general intelligence was predictably an overestimate; that was right there in the dialogue before I even got around to writing this part.  And I also, albeit with benefit of hindsight, told Moravec that both of these estimates were useless for timing the future, because they skipped over the questions of how much knowledge you'd need to make an AGI with a given amount of computing power, how fast knowledge was progressing, and the actual timing determined by the rising hardware line touching the falling hardware-required line.

OpenPhil:  We don't see how to operate your viewpoint to say in advance to Moravec, before his prediction has been falsified, "Your estimate is plainly a garbage estimate" instead of "Your estimate is obviously a directional underestimate", especially since you seem to be saying the latter to us, now.

Eliezer:  That's not a critique I give zero weight.  And, I mean, as a kid, I was in fact talking like, "To heck with that hardware estimate, let's at least try to get it done before then.  People are dying for lack of superintelligence; let's aim for 2005."  I had a T-shirt spraypainted "Singularity 2005" at a science fiction convention, it's rather crude but I think it's still in my closet somewhere.

But now I am older and wiser and have fixed all my past mistakes, so the critique of those past mistakes no longer applies to my new arguments.

OpenPhil:  Uh huh.

Eliezer:  I mean, I did try to fix all the mistakes that I knew about, and didn't just, like, leave those mistakes in forever?  I realize that this claim to be able to "learn from experience" is not standard human behavior in situations like this, but if you've got to be weird, that's a good place to spend your weirdness points.  At least by my own lights, I am now making a different argument than I made when I was nineteen years old, and that different argument should be considered differently.

And, yes, I also think my nineteen-year-old self was not completely foolish at least about AI timelines; in the sense that, for all he knew, maybe you could build AGI by 2005 if you tried really hard over the next 6 years.  Not so much because Moravec's estimate should've been seen as a predictable overestimate of how much computing power would actually be needed, given knowledge that would become available in the next 6 years; but because Moravec's estimate should've been seen as almost entirely irrelevant, making the correct answer be "I don't know."

OpenPhil:  It seems to us that Moravec's estimate, and the guess of your nineteen-year-old past self, are both predictably vast underestimates.  Estimating the computation consumed by one brain, and calling that your AGI target date, is obviously predictably a vast underestimate because it neglects the computation required for training a brainlike system.  It may be a bit uncharitable, but we suggest that Moravec and your nineteen-year-old self may both have been motivatedly credulous, to not notice a gap so very obvious.

Eliezer:  I could imagine it seeming that way if you'd grown up never learning about any AI techniques except deep learning, which had, in your wordless mental world, always been the way things were, and would always be that way forever.

I mean, it could be that deep learning will still be the bleeding-edge method of Artificial Intelligence right up until the end of the world.  But if so, it'll be because Vinge was right and the world ended before 2030, not because the deep learning paradigm was as good as any AI paradigm can ever get.  That is simply not a kind of thing that I expect Reality to say "Gotcha" to me about, any more than I expect to be told that the human brain, whose neurons and synapses are 500,000 times further away from the thermodynamic efficiency wall than ATP synthase, is the most efficient possible consumer of computations.

The specific perspective-taking operation needed here - when it comes to what was and wasn't obvious in 1988 or 1999 - is that the notion of spending thousands and millions and billions of times as much computation on a "training" phase, as on an "inference" phase, is something that only came to be seen as Always Necessary after the deep learning revolution took over AI in the late Noughties.  Back when Moravec was writing, you programmed a game-tree-search algorithm for chess, and then you ran that code, and it played chess.  Maybe you needed to add an opening book, or do a lot of trial runs to tweak the exact values the position evaluation function assigned to knights vs. bishops, but most AIs weren't neural nets and didn't get trained on enormous TPU pods.

Moravec had no way of knowing that the paradigm in AI would, twenty years later, massively shift to a new paradigm in which stuff got trained on enormous TPU pods.  He lived in a world where you could only train neural networks a few layers deep, like, three layers, and the gradients vanished or exploded if you tried to train networks any deeper.

To be clear, in 1999, I did think of AGIs as needing to do a lot of learning; but I expected them to be learning while thinking, not to learn in a separate gradient descent phase.

OpenPhil:  How could anybody possibly miss anything so obvious?  There's so many basic technical ideas and even philosophical ideas about how you do AI which make it supremely obvious that the best and only way to turn computation into intelligence is to have deep nets, lots of parameters, and enormous separate training phases on TPU pods.

Eliezer:  Yes, well, see, those philosophical ideas were not as prominent in 1988, which is why the direction of the future paradigm shift was not predictable in advance without benefit of hindsight, let alone timeable to 2006.

You're also probably overestimating how much those philosophical ideas would pinpoint the modern paradigm of gradient descent even if you had accepted them wholeheartedly, in 1988.  Or let's consider, say, October 2006, when the Netflix Prize was being run - a watershed occasion where lots of programmers around the world tried their hand at minimizing a loss function, based on a huge-for-the-times 'training set' that had been publicly released, scored on a holdout 'test set'.  You could say it was the first moment in the limelight for the sort of problem setup that everybody now takes for granted with ML research: a widely shared dataset, a heldout test set, a loss function to be minimized, prestige for advancing the 'state of the art'.  And it was a million dollars, which, back in 2006, was big money for a machine learning prize, garnering lots of interest from competent competitors.

Before deep learning, "statistical learning" was indeed a banner often carried by the early advocates of the view that Richard Sutton now calls the Bitter Lesson, along the lines of "complicated programming of human ideas doesn't work, you have to just learn from massive amounts of data".

But before deep learning - which was barely getting started in 2006 - "statistical learning" methods that took in massive amounts of data did not use those massive amounts of data to train neural networks by stochastic gradient descent across millions of examples!  In 2007, the submission that won that year's Netflix Progress Prize was an ensemble predictor that incorporated k-Nearest-Neighbor, a factorization method that repeatedly globally minimized squared error, two-layer Restricted Boltzmann Machines, and a regression model akin to Principal Components Analysis.  Which is all 100% statistical learning driven by relatively-big-for-the-time "big data", and 0% GOFAI.  But these methods didn't involve enormous training phases in the modern sense.

Back then, if you were doing stochastic gradient descent at all, you were doing it on a much smaller neural network.  Not so much because you couldn't afford more compute for a larger neural network, but because wider neural networks didn't help you much and deeper neural networks simply didn't work.

Bleeding-edge statistical learning techniques as late as 2007 had to find ways other than gradient descent and backpropagation to make use of huge amounts of data.  Though, I mean, not huge amounts of data by modern standards.  The winning submission used an ensemble of 107 models - that's not a misprint for 10^7, I actually mean 107 - drawn from half a dozen different model classes, then proliferated with slightly different parameters and averaged together to reduce statistical noise.

A modern kid, perhaps, looks at this and thinks:  "If you can afford the compute to train 107 models, why not just train one larger model?"  But back then, you see, there just wasn't a standard way to dump massively more compute into something, and get better results back out.  The fact that they had 107 differently parameterized models from a half-dozen families averaged together to reduce noise was about the best anyone could do in 2007 at putting more effort in and getting better results back out.

OpenPhil:  How quaint and archaic!  But that was 13 years ago, before time actually got started and history actually started happening in real life.  Now we've got the paradigm which will actually be used to create AGI, in all probability; so estimation methods centered on that paradigm should be valid.

Eliezer:  The current paradigm is definitely not the end of the line in principle.  I guarantee you that the way superintelligences build cognitive engines is not by training enormous neural networks using gradient descent.  Gua-ran-tee it.

The fact that you think you now see a path to AGI, is because today - unlike in 2006 - you have a paradigm that is seemingly willing to entertain having more and more food stuffed down its throat without obvious limit (yet).  This is really a quite recent paradigm shift, though, and it is probably not the most efficient possible way to consume more and more food.

You could rather strongly guess, early on, that support vector machines were never going to give you AGI, because you couldn't dump more and more compute into training or running SVMs and get arbitrarily better answers; whatever gave you AGI would have to be something else that could eat more compute productively.

Similarly, since the path through genetic algorithms and recapitulating the whole evolutionary history would have taken a lot of compute, it's no wonder that other, more efficient methods of eating compute were developed before then; it was obvious in advance that they must exist, for all that some what-iffed otherwise.

To be clear, it is certain the world will end by more inefficient methods than those that superintelligences would use; since, if superintelligences are making their own AI systems, then the world has already ended.

And it is possible, even, that the world will end by a method as inefficient as gradient descent.  But if so, that will be because the world ended too soon for any more efficient paradigm to be developed.  Which, on my model, means the world probably ended before say 2040(???).  But of course, compared to how much I think I know about what must be more efficiently doable in principle, I think I know far less about the speed of accumulation of real knowledge (not to be confused with proliferation of publications), or how various random-to-me social phenomena could influence the speed of knowledge.  So I think I have far less ability to say a confident thing about the timing of the next paradigm shift in AI, compared to the existence and eventuality of such paradigms in the space of possibilities.

OpenPhil:  But if you expect the next paradigm shift to happen in around 2040, shouldn't you confidently predict that AGI has to arrive after 2040, because, without that paradigm shift, we'd have to produce AGI using deep learning paradigms, and in that case our own calculation would apply saying that 2040 is relatively early?

Eliezer:  No, because I'd consider, say, improved mixture-of-experts techniques that actually work, to be very much within the deep learning paradigm; and even a relatively small paradigm shift like that would obviate your calculations, if it produced a more drastic speedup than halving the computational cost over two years.

More importantly, I simply don't believe in your attempt to calculate a figure of 10,000,000,000,000,000 operations per second for a brain-equivalent deepnet based on biological analogies, or your figure of 10,000,000,000,000 training updates for it.  I simply don't believe in it at all.  I don't think it's a valid anchor.  I don't think it should be used as the median point of a wide uncertain distribution.  The first-developed AGI will consume computation in a different fashion, much as it eats energy in a different fashion; and "how much computation an AGI needs to eat compared to a human brain" and "how many watts an AGI needs to eat compared to a human brain" are equally always decreasing with the technology and science of the day.

OpenPhil:  Doesn't our calculation at least provide a soft upper bound on how much computation is required to produce human-level intelligence?  If a calculation is able to produce an upper bound on a variable, how can it be uninformative about that variable?

Eliezer:  You assume that the architecture you're describing can, in fact, work at all to produce human intelligence.  This itself strikes me as not only tentative but probably false.  I mostly suspect that if you take the exact GPT architecture, scale it up to what you calculate as human-sized, and start training it using current gradient descent techniques... what mostly happens is that it saturates and asymptotes its loss function at not very far beyond the GPT-3 level - say, it behaves like GPT-4 would, but not much better.

This is what should have been told to Moravec:  "Sorry, even if your biology is correct, the assumption that future people can put in X amount of compute and get out Y result is not something you really know."  And that point did in fact just completely trash his ability to predict and time the future.

The same must be said to you.  Your model contains supposedly known parameters, "how much computation an AGI must eat per second, and how many parameters must be in the trainable model for that, and how many examples are needed to train those parameters".  Relative to whatever method is actually first used to produce AGI, I expect your estimates to be wildly inapplicable, as wrong as Moravec was about thinking in terms of just using one supercomputer powerful enough to be a brain.  Your parameter estimates may not be about properties that the first successful AGI design even has.  Why, what if it contains a significant component that isn't a neural network?  I realize this may be scarcely conceivable to somebody from the present generation, but the world was not always as it is now, and it will change if it does not end.

OpenPhil:  I don't understand how some of your reasoning could be internally consistent even on its own terms.  If, according to you, our 2050 estimate doesn't provide a soft upper bound on AGI arrival times - or rather, if our 2050-centered probability distribution isn't a soft upper bound on reasonable AGI arrival probability distributions - then I don't see how you can claim that the 2050-centered distribution is predictably a directional overestimate.

You can either say that our forecasted pathway to AGI or something very much like it would probably work in principle without requiring very much more computation than our uncertain model components take into account, meaning that the probability distribution provides a soft upper bound on reasonably-estimable arrival times, but that paradigm shifts will predictably provide an even faster way to do it before then.  That is, you could say that our estimate is both a soft upper bound and also a directional overestimate.  Or, you could say that our ignorance of how to create AI will consume more than one order-of-magnitude of increased computation cost above biology -

Eliezer:  Indeed, much as your whole proposal would supposedly cost ten trillion times the equivalent computation of the single human brain that earlier biologically-inspired estimates anchored on.

OpenPhil:  - in which case our 2050-centered distribution is not a good soft upper bound, but also not predictably a directional overestimate.  Don't you have to pick one or the other as a critique, there?

Eliezer:  Mmm... there's some justice to that, now that I've come to write out this part of the dialogue.  Okay, let me revise my earlier stated opinion:  I think that your biological estimate is a trick that never works and, on its own terms, would tell us very little about AGI arrival times at all.  Separately, I think from my own model that your timeline distributions happen to be too long.

OpenPhil:  Eliezer.

Eliezer:  I mean, in fact, part of my actual sense of indignation at this whole affair, is the way that Platt's law of strong AI forecasts - which, already in the 1980s, generalized "thirty years" as the time that ends up sounding "reasonable" to would-be forecasters - is still exactly in effect for what ends up sounding "reasonable" to would-be futurists, in fricking 2020, while the air is filling up with AI smoke in the silence of nonexistent fire alarms.

But to put this in terms that maybe possibly you'd find persuasive:

The last paradigm shifts were from "write a chess program that searches a search tree and run it, and that's how AI eats computing power" to "use millions of data samples, but not in a way that requires a huge separate training phase" to "train a huge network for zillions of gradient descent updates and then run it".  This new paradigm costs a lot more compute, but (small) large amounts of compute are now available so people are using them; and this new paradigm saves on programmer labor, and more importantly the need for programmer knowledge.

I say with surety that this is not the last possible paradigm shift.  And furthermore, the Stack More Layers paradigm has already reduced the need for knowledge by what seems like a pretty large bite out of all the possible knowledge that could be thrown away.

So, you might then argue, the world-ending AGI seems more likely to incorporate more knowledge and less brute force, which moves the correct sort of timeline estimate further away from the direction of "cost to recapitulate all evolutionary history as pure blind search without even the guidance of gradient descent" and more toward the direction of "computational cost of one brain, if you could just make a single brain".

That is, you can think of there as being two biological estimates to anchor on, not just one.  You can imagine there being a balance that shifts over time from "the computational cost for evolutionary biology to invent brains" to "the computational cost to run one biological brain".

In 1960, maybe, they knew so little about how brains worked that, if you gave them a hypercomputer, the cheapest way they could quickly get AGI out of the hypercomputer using just their current knowledge, would be to run a massive evolutionary tournament over computer programs until they found smart ones, using 10^43 operations.

Today, you know about gradient descent, which finds programs more efficiently than genetic hill-climbing does; so the balance of how much hypercomputation you'd need to use to get general intelligence using just your own personal knowledge, has shifted ten orders of magnitude away from the computational cost of evolutionary history and towards the lower bound of the computation used by one brain.  In the future, this balance will predictably swing even further towards Moravec's biological anchor, further away from Somebody on the Internet's biological anchor.

I admit, from my perspective this is nothing but a clever argument that tries to persuade people who are making errors that can't all be corrected by me, so that they can make mostly the same errors but get a slightly better answer.  In my own mind I tend to contemplate the Textbook from the Future, which would tell us how to build AI on a home computer from 1995, as my anchor of 'where can progress go', rather than looking to the brain, of all computing devices, for inspiration.

But, if you insist on the error of anchoring on biology, you could perhaps do better by seeing a spectrum between two bad anchors.  This lets you notice a changing reality, at all, which is why I regard it as a helpful thing to say to you and not a pure persuasive superweapon of unsound argument.  Instead of just fixating on one bad anchor, the hybrid of biological anchoring with whatever knowledge you currently have about optimization, you can notice how reality seems to be shifting between two biological bad anchors over time, and so have an eye on the changing reality at all.  Your new estimate in terms of gradient descent is stepping away from evolutionary computation and toward the individual-brain estimate by ten orders of magnitude, using the fact that you now know a little more about optimization than natural selection knew; and now that you can see the change in reality over time, in terms of the two anchors, you can wonder if there are more shifts ahead.

Realistically, though, I would not recommend eyeballing how much more knowledge you'd think you'd need to get even larger shifts, as some function of time, before that line crosses the hardware line.  Some researchers may already know Thielian secrets you do not, that take those researchers further toward the individual-brain computational cost (if you insist on seeing it that way).  That's the direction that economics rewards innovators for moving in, and you don't know everything the innovators know in their labs.

When big inventions finally hit the world as newspaper headlines, the people two years before that happens are often declaring it to be fifty years away; and others, of course, are declaring it to be two years away, fifty years before headlines.  Timing things is quite hard even when you think you are being clever; and cleverly having two biological anchors and eyeballing Reality's movement between them, is not the sort of cleverness that gives you good timing information in real life.

In real life, Reality goes off and does something else instead, and the Future does not look in that much detail like the futurists predicted.  In real life, we come back again to the same wiser-but-sadder conclusion given at the start, that in fact the Future is quite hard to foresee - especially when you are not on literally the world's leading edge of technical knowledge about it, but really even then.  If you don't think you know any Thielian secrets about timing, you should just figure that you need a general policy that doesn't rely on getting more than two years of warning, or not even that much if you aren't closely and non-dismissively analyzing warning signs.

OpenPhil:  We do consider in our report the many ways that our estimates could be wrong, and show multiple ways of producing biologically inspired estimates that give different results.  Does that give us any credit for good epistemology, on your view?

Eliezer:  I wish I could say that it probably beats showing a single estimate, in terms of its impact on the reader.  But in fact, writing a huge careful Very Serious Report like that and snowing the reader under with Alternative Calculations is probably going to cause them to give more authority to the whole thing.  It's all very well to note the Ways I Could Be Wrong and to confess one's Uncertainty, but you did not actually reach the conclusion, "And that's enough uncertainty and potential error that we should throw out this whole deal and start over," and that's the conclusion you needed to reach.

OpenPhil:  It's not clear to us what better way you think exists of arriving at an estimate, compared to the methodology we used - in which we do consider many possible uncertainties and several ways of generating probability distributions, and try to combine them together into a final estimate.  A Bayesian needs a probability distribution from somewhere, right?

Eliezer:  If somebody had calculated that it currently required an IQ of 200 to destroy the world, that the smartest current humans had an IQ of around 190, and that the world would therefore start to be destroyable in fifteen years according to Moore's Law of Mad Science - then, even assuming Moore's Law of Mad Science to actually hold, the part where they throw in an estimated current IQ of 200 as necessary is complete garbage.  It is not the sort of mistake that can be repaired, either.  No, not even by considering many ways you could be wrong about the IQ required, or considering many alternative different ways of estimating present-day people's IQs.
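
(Spelling out the arithmetic of that hypothetical, assuming the usual tongue-in-cheek statement of the law - the minimum IQ needed to destroy the world drops by one point every eighteen months:)

```python
# Arithmetic of the hypothetical only; the inputs are the deliberately-garbage figures above.
required_iq, smartest_current_iq = 200, 190
years_per_iq_point = 1.5   # assumed rate: one point every eighteen months
print((required_iq - smartest_current_iq) * years_per_iq_point)  # 15.0 years
```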

The correct thing to do with the entire model is chuck it out the window so it doesn't exert an undue influence on your actual thinking, where any influence of that model is an undue one.  And then you just should not expect good advance timing info until the end is in sight, from whatever thought process you adopt instead.

OpenPhil:  What if, uh, somebody knows a Thielian secret, or has... narrowed the rivers of their knowledge to closer to reality's tracks?  We're not sure exactly what's supposed to be allowed, on your worldview; but wasn't there something at the beginning about how, when you're unsure, you should be careful about criticizing people who are more unsure than you?

Eliezer:  Hopefully those people are also able to tell you bold predictions about the nearer-term future, or at least say something about what the future looks like before the whole world ends.  I mean, you don't want to go around proclaiming that, because you don't know something, nobody else can know it either.  But timing is, in real life, really hard as a prediction task, so, like... I'd expect them to be able to predict a bunch of stuff before the final hours of their prophecy?

OpenPhil:  We're... not sure we see that?  We may have made an estimate, but we didn't make a narrow estimate.  We gave a relatively wide probability distribution as such things go, so it doesn't seem like a great feat of timing that requires us to also be able to predict the near-term future in detail too?

Doesn't your implicit probability distribution have a median?  Why don't you also need to be able to predict all kinds of near-term stuff if you have a probability distribution with a median in it?

Eliezer:  I literally have not tried to force my brain to give me a median year on this - not that this is a defense, because I still have some implicit probability distribution, or, to the extent I don't act like I do, I must be acting incoherently in self-defeating ways.  But still: I feel like you should probably have nearer-term bold predictions if your model is supposedly so solid, so concentrated as a flow of uncertainty, that it's coming up to you and whispering numbers like "2050" even as the median of a broad distribution.  I mean, if you have a model that can actually, like, calculate stuff like that, and is actually bound to the world as a truth.

If you are an aspiring Bayesian, perhaps, you may try to reckon your uncertainty into the form of a probability distribution, even when you face "structural uncertainty" as we sometimes call it.  Or if you know the laws of coherence [LW · GW], you will acknowledge that your planning and your actions are implicitly showing signs of weighing some paths through time more than others, and hence display probability-estimating behavior whether you like to acknowledge that or not.

But if you are a wise aspiring Bayesian, you will admit that whatever probabilities you are using, they are, in a sense, intuitive, and you just don't expect them to be all that good.  Because the timing problem you are facing is a really hard one, and humans are not going to be great at it - not until the end is near, and maybe not even then.

That - not "you didn't consider enough alternative calculations of your target figures" - is what should've been replied to Moravec in 1988, if you could go back and tell him where his reasoning had gone wrong, and how he might have reasoned differently based on what he actually knew at the time.  That reply I now give to you, unchanged.

Humbali:  And I'm back!  Sorry, I had to take a lunch break.  Let me quickly review some of this recent content; though, while I'm doing that, I'll go ahead and give you what I'm pretty sure will be my reaction to it:

Ah, but here is a point that you seem to have not considered at all, namely: what if you're wrong?

Eliezer:  That, Humbali, is a thing that should be said mainly to children, of whatever biological wall-clock age, who've never considered at all the possibility that they might be wrong, and who will genuinely benefit from asking themselves that.  It is not something that should often be said between grownups of whatever age, as I define what it means to be a grownup.  You will mark that I did not at any point say those words to Imaginary Moravec or Imaginary OpenPhil; it is not a good thing for grownups to say to each other, or to think to themselves in Tones of Great Significance (as opposed to as a routine check).

It is very easy to worry that one might be wrong.  Being able to see the direction in which one is probably wrong is rather a more difficult affair.  And even after we see a probable directional error and update our views, the objection, "But what if you're wrong?" will sound just as forceful as before.  For this reason do I say that such a thing should not be said between grownups -

Humbali:  Okay, done reading now!  Hm...  So it seems to me that the possibility that you are wrong, considered in full generality and without adding any other assumptions, should produce a directional shift from your viewpoint towards OpenPhil's viewpoint.

Eliezer (sighing):  And how did you end up being under the impression that this could possibly be a sort of thing that was true?

Humbali:  Well, I get the impression that you have timelines shorter than OpenPhil's timelines.  Is this devastating accusation true?

Eliezer:  I consider naming particular years to be a cognitively harmful sort of activity; I have refrained from trying to translate my brain's native intuitions about this into probabilities, for fear that my verbalized probabilities will be stupider than my intuitions if I try to put weight on them.  What feelings I do have, I worry may be unwise to voice; AGI timelines, in my own experience, are not great for one's mental health, and I worry that other people seem to have weaker immune systems than even my own.  But I suppose I cannot but acknowledge that my outward behavior seems to reveal a distribution whose median seems to fall well before 2050.

Humbali:  Okay, so you're more confident about your AGI beliefs, and OpenPhil is less confident.  Therefore, to the extent that you might be wrong, the world is going to look more like OpenPhil's forecasts of how the future will probably look, like world GDP doubling over four years before the first time it doubles over one year, and so on.

Eliezer:  You're going to have to explain some of the intervening steps in that line of 'reasoning', if it may be termed as such.

Humbali:  I feel surprised that I should have to explain this to somebody who supposedly knows probability theory.  If you put higher probabilities on AGI arriving in the years before 2050, then, on average, you're concentrating more probability into each year that AGI might possibly arrive, than OpenPhil does.  Your probability distribution has lower entropy.  We can literally just calculate out that part, if you don't believe me.  So to the extent that you're wrong, it should shift your probability distributions in the direction of maximum entropy.
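
(As an aside: Humbali's narrow mathematical claim - that concentrating more probability into nearer years means lower Shannon entropy - is easy to check; whether the inference he draws from it is valid is exactly the question the rest of this exchange turns on.  A minimal sketch with two made-up illustrative distributions, not anyone's actual forecast:)

```python
# Checking only the entropy comparison, on two made-up distributions over arrival-year buckets.
import math

def entropy_bits(dist):
    """Shannon entropy in bits of a dict mapping year-buckets to probabilities."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

concentrated = {"2025-2035": 0.50, "2035-2050": 0.30, "2050-2100": 0.15, "2100+": 0.05}
spread_out   = {"2025-2035": 0.15, "2035-2050": 0.35, "2050-2100": 0.35, "2100+": 0.15}

print(entropy_bits(concentrated))  # ~1.65 bits
print(entropy_bits(spread_out))    # ~1.88 bits
```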

Eliezer:  It's things like this that make me worry about whether that extreme cryptivist view would be correct, in which normal modern-day Earth intellectuals are literally not smart enough - in a sense that includes the Cognitive Reflection Test and other things we don't know how to measure yet, not just raw IQ - to be taught more advanced ideas from my own home planet, like Bayes's Rule and the concept of the entropy of a probability distribution.  Maybe it does them net harm by giving them more advanced tools they can use to shoot themselves in the foot, since it causes an explosion in the total possible complexity of the argument paths they can consider and be fooled by, which may now contain words like 'maximum entropy'.

Humbali:  If you're done being vaguely condescending, perhaps you could condescend specifically to refute my argument, which seems to me to be airtight; my math is not wrong and it means what I claim it means.

Eliezer:  The audience is herewith invited to first try refuting Humbali on their own; grandpa is, in actuality and not just as a literary premise, getting older, and was never that physically healthy in the first place.  If the next generation does not learn how to do this work without grandpa hovering over their shoulders and prompting them, grandpa cannot do all the work himself.  There is an infinite supply of slightly different wrong arguments for me to be forced to refute, and that road does not seem, in practice, to have an end.

Humbali:  Or perhaps it's you that needs refuting.

Eliezer, smiling:  That does seem like the sort of thing I'd do, wouldn't it?  Pick out a case where the other party in the dialogue had made a valid point, and then ask my readers to disprove it, in case they weren't paying proper attention?  For indeed in a case like this, one first backs up and asks oneself "Is Humbali right or not?" and not "How can I prove Humbali wrong?"

But now the reader should stop and contemplate that, if they are going to contemplate that at all:

Is Humbali right that generic uncertainty about maybe being wrong, without other extra premises, should increase the entropy of one's probability distribution over AGI, thereby moving out its median further away in time?

Humbali:  Are you done?

Eliezer:  Hopefully so.  I can't see how else I'd prompt the reader to stop and think and come up with their own answer first.

Humbali:  Then what is the supposed flaw in my argument, if there is one?

Eliezer:  As usual, when people are seeing only their preferred possible use of an argumentative superweapon like 'What if you're wrong?', the flaw can be exposed by showing that the argument Proves Too Much.  If you forecasted AGI with a probability distribution with a median arrival time of 50,000 years from now*, would that be very unconfident?

(*) Based perhaps on an ignorance prior for how long it takes for a sapient species to build AGI after it emerges, where we've observed so far that it must take at least 50,000 years, and our updated estimate says that it probably takes around that much longer again.

Humbali:   Of course; the math says so.  Though I think that would be a little too unconfident - we do have some knowledge about how AGI might be created.  So my answer is that, yes, this probability distribution is higher-entropy, but that it reflects too little confidence even for me.

I think you're crazy overconfident, yourself, and in a way that I find personally distasteful to boot, but that doesn't mean I advocate zero confidence.  I try to be less arrogant than you, but my best estimate of what my own eyes will see over the next minute is not a maximum-entropy distribution over visual snow.  AGI happening sometime in the next century, with a median arrival time of maybe 30 years out, strikes me as being about as confident as somebody should reasonably be.

Eliezer:  Oh, really now.  I think that if somebody sauntered up to you and said they put 99% probability on AGI not occurring within the next 1,000 years - which is the sort of thing a median distance of 50,000 years tends to imply - you would, in fact, accuse them of brash overconfidence for staking 99% probability on that.

Humbali:  Hmmm.  I want to deny that - I have a strong suspicion that you're leading me down a garden path here - but I do have to admit that if somebody walked up to me and declared only a 1% probability that AGI arrives in the next millennium, I would say they were being overconfident and not just too uncertain.

Now that you put it that way, I think I'd say that somebody with a wide probability distribution over AGI arrival spread over the next century, with a median in 30 years, is in realistic terms about as uncertain as anybody could possibly be?  If you spread it out more than that, you'd be declaring that AGI probably wouldn't happen in the next 30 years, which seems overconfident; and if you spread it out less than that, you'd be declaring that AGI probably would happen within the next 30 years, which also seems overconfident.

Eliezer:  Uh huh.  And to the extent that I am myself uncertain about my own brashly arrogant and overconfident views, I should have a view that looks more like your view instead?

Humbali:  Well, yes!  To the extent that you are, yourself, less than totally certain of your own model, you should revert to this most ignorant possible viewpoint as a base rate.

Eliezer:  And if my own viewpoint should happen to regard your probability distribution putting its median on 2050 as just one more guesstimate among many others, with this particular guess based on wrong reasoning that I have justly rejected?

Humbali:  Then you'd be overconfident, obviously.  See, you don't get it, what I'm presenting is not just one candidate way of thinking about the problem, it's the base rate that other people should fall back on to the extent they are not completely confident in their own ways of thinking about the problem, which impose extra assumptions over and above the assumptions that seem natural and obvious to me.  I just can't understand the incredible arrogance it takes to be so utterly certain in your own exact estimate that you don't revert it even a little bit towards mine.

I don't suppose you're going to claim to me that you first constructed an even more confident first-order estimate, and then reverted it towards the natural base rate in order to arrive at a more humble second-order estimate?

Eliezer:  Ha!  No.  Not that base rate, anyways.  I try to shift my AGI timelines a little further out because I've observed that actual Time seems to run slower than my attempts to eyeball it.  I did not shift my timelines out towards 2050 in particular, nor did reading OpenPhil's report on AI timelines influence my first-order or second-order estimate at all, in the slightest; no more than I updated the slightest bit back when I read the estimate of 10^43 ops or 10^46 ops or whatever it was to recapitulate evolutionary history.

Humbali:  Then I can't imagine how you could possibly be so perfectly confident that you're right and everyone else is wrong.  Shouldn't you at least revert your viewpoints some toward what other people think?

Eliezer:  Like, what the person on the street thinks, if we poll them about their expected AGI arrival times?  Though of course I'd have to poll everybody on Earth, not just the special case of developed countries, if I thought that a respect for somebody's personhood implied deference to their opinions.

Humbali:  Good heavens, no!  I mean you should revert towards the opinion, either of myself, or of the set of people I hang out with and who are able to exert a sort of unspoken peer pressure on me; that is the natural reference class to which less confident opinions ought to revert, and any other reference class is special pleading.

And before you jump on me about being arrogant myself, let me say that I definitely regressed my own estimate in the direction of the estimates of the sort of people I hang out with and instinctively regard as fellow tribesmembers of slightly higher status, or "credible" as I like to call them.  Although it happens that those people's opinions were about evenly distributed to both sides of my own - maybe not statistically exactly for the population, I wasn't keeping exact track, but in their availability to my memory, definitely, other people had opinions on both sides of my own - so it didn't move my median much.  But so it sometimes goes!

But these other people's credible opinions definitely hang emphatically to one side of your opinions, so your opinions should regress at least a little in that direction!  Your self-confessed failure to do this at all reveals a ridiculous arrogance.

Eliezer:  Well, I mean, in fact, from my perspective, even my complete-idiot sixteen-year-old self managed to notice that AGI was going to be a big deal, many years before various others had been hit over the head with a large-enough amount of evidence that even they started to notice.  I was walking almost alone back then.  And I still largely see myself as walking alone now, as accords with the Law of Continued Failure:  If I was going to be living in a world of sensible people in this future, I should have been living in a sensible world already in my past.

Since the early days more people have caught up to earlier milestones along my way, enough to start publicly arguing with me about the further steps, but I don't consider them to have caught up; they are moving slower than I am still moving now, as I see it.  My actual work these days seems to consist mainly of trying to persuade allegedly smart people to not fling themselves directly into lava pits.  If at some point I start regarding you as my epistemic peer, I'll let you know.  For now, while I endeavor to be swayable by arguments, your existence alone is not an argument unto me.

If you choose to define that with your word "arrogance", I shall shrug and not bother to dispute it.  Such appellations are beneath My concern.

Humbali:  Fine, you admit you're arrogant - though I don't understand how that's not just admitting you're irrational and wrong -

Eliezer:  They're different words that, in fact, mean different things, in their semantics and not just their surfaces.  I do not usually advise people to contemplate the mere meanings of words, but perhaps you would be well-served to do so in this case.

Humbali:  - but if you're not infinitely arrogant, you should be quantitatively updating at least a little towards other people's positions!

Eliezer:  You do realize that OpenPhil itself hasn't always existed?  That they are not the only "other people" that there are?  An ancient elder like myself, who has seen many seasons turn, might think of many other possible targets toward which he should arguably regress his estimates, if he was going to start deferring to others' opinions this late in his lifespan.

Humbali:  You haven't existed through infinite time either!

Eliezer:  A glance at the history books should confirm that I was not around, yes, and events went accordingly poorly.

Humbali:  So then... why aren't you regressing your opinions at least a little in the direction of OpenPhil's?  I just don't understand this apparently infinite self-confidence.

Eliezer:  The fact that I have credible intervals around my own unspoken median - that I confess I might be wrong in either direction, around my intuitive sense of how long events might take - doesn't count for my being less than infinitely self-confident, on your view?

Humbali:  No.  You're expressing absolute certainty in your underlying epistemology and your entire probability distribution, by not reverting it even a little in the direction of the reasonable people's probability distribution, which is the one that's the obvious base rate and doesn't contain all the special other stuff somebody would have to tack on to get your probability estimate.

Eliezer:  Right then.  Well, that's a wrap, and maybe at some future point I'll talk about the increasingly lost skill of perspective-taking.

OpenPhil:  Excuse us, we have a final question.  You're not claiming that we argue like Humbali, are you?

Eliezer:  Good heavens, no!  That's why "Humbali" is presented as a separate dialogue character and the "OpenPhil" dialogue character says nothing of the sort.  Though I did meet one EA recently who seemed puzzled and even offended about how I wasn't regressing my opinions towards OpenPhil's opinions to whatever extent I wasn't totally confident, which brought this to mind as a meta-level point that needed making.

OpenPhil:  "One EA you met recently" is not something that you should hold against OpenPhil.  We haven't organizationally endorsed arguments like Humbali's, any more than you've ever argued that "we have to take AGI risk seriously even if there's only a tiny chance of it" or similar crazy things that other people hallucinate you arguing.

Eliezer:  I fully agree.  That Humbali sees himself as defending OpenPhil is not to be taken as associating his opinions with those of OpenPhil; just like how people who helpfully try to defend MIRI by saying "Well, but even if there's a tiny chance..." are not thereby making their epistemic sins into mine.

The whole thing with Humbali is a separate long battle that I've been fighting.  OpenPhil seems to have been keeping its communication about AI timelines mostly to the object level, so far as I can tell; and that is a more proper and dignified stance than I've assumed here.


Edit (12/23): Holden replies here [? · GW].

142 comments

Comments sorted by top scores.

comment by CarlShulman · 2021-12-02T17:21:02.996Z · LW(p) · GW(p)

Progress in AI has largely been a function of increasing compute, human software research efforts, and serial time/steps. Throwing more compute at researchers has improved performance both directly and indirectly (e.g. by enabling more experiments, refining evaluation functions in chess, training neural networks, or making algorithms that work best with large compute more attractive).

Historically compute has grown by many orders of magnitude, while human labor applied to AI and supporting software  by only a few. And on plausible decompositions of progress (allowing for adjustment of software to current hardware and vice versa), hardware growth accounts for more of the progress over time than human labor input growth.

So if you're going to use an AI production function for tech forecasting based on inputs (which do relatively OK by the standards of tech forecasting), it's best to use all of compute, labor, and time, but it makes sense for compute to have pride of place and take in more modeling effort and attention, since it's the biggest source of change (particularly when including software gains downstream of hardware technology and expenditures).

Thinking about hardware has a lot of helpful implications for constraining timelines:

  • Evolutionary anchors, combined with paleontological and other information (if you're worried about Rare Earth miracles), mostly cut off extremely high input estimates for AGI development, like Robin Hanson's [AF · GW], and we can say from known human advantages relative to evolution that credence should be suppressed some distance short of that (moreso with more software progress)
  • You should have lower a priori credence in smaller-than-insect brains yielding AGI than in more middle-of-the-range compute budgets
  • It lets you see you should concentrate probability mass in the next decade or so because of the rapid scaleup of compute investment  (with a supporting argument from the increased growth of AI R&D effort) covering a substantial share of the orders of magnitude between where we are and levels that we should expect are overkill
  • It gets you likely AGI this century, and on the closer part of that, with a pretty flat prior over orders of magnitude of inputs that will go into success (see the toy sketch just after this list)
  • It suggests lower annual probability later on if Moore's Law and friends are dead, with stagnant inputs to AI
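A minimal numerical sketch of those last three bullets, with made-up placeholder numbers rather than anything from Ajeya's report (a roughly flat prior over the order of magnitude of compute needed, plus an assumed rapid-then-stagnant compute trajectory):

```python
import numpy as np

# Placeholder numbers throughout; the point is the shape of the reasoning, not the values.
# Roughly flat (log-uniform) prior over the training compute needed for AGI.
oom_grid = np.arange(25, 46)                          # 10^25 .. 10^45 FLOP
prior = np.full(len(oom_grid), 1.0 / len(oom_grid))

def largest_run_log10(year):
    """Assumed log10 FLOP of the largest training run: rapid scaleup of compute
    investment through 2030, then near-stagnation if Moore's Law and friends are dead."""
    if year <= 2030:
        return 24.0 + 0.5 * (year - 2020)
    return 29.0 + 0.05 * (year - 2030)

# Treat "enough compute has been crossed" as the event of interest
# (compute as the dominant, though not the only, input).
for year in (2025, 2030, 2040, 2060, 2100):
    p = prior[oom_grid <= largest_run_log10(year)].sum()
    print(f"P(threshold crossed by {year}) = {p:.2f}")
```

In this toy setup most of the cumulative probability gets crossed during the decade of rapid scaleup, and the annual probability falls roughly tenfold once the inputs stagnate.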

These are all useful things highlighted by Ajeya's model, and by earlier work like Moravec's. In particular, I think Moravec's forecasting methods are looking pretty good, given the difficulty of the problem. He and Kurzweil (like the computing industry generally)  were surprised by the death of Dennard scaling and general price-performance of computing growth slowing, and we're definitely years behind his forecasts in AI capability, but we are seeing a very compute-intensive AI boom in the right region of compute space. Moravec also did anticipate it would take a lot more compute than one lifetime run to get to AGI. He suggested human-level AGI would be in the vicinity of human-like compute quantities being cheap and available for R&D. This old discussion is flawed, but makes me feel the dialogue is straw-manning Moravec to some extent.

Ajeya's model puts most of the modeling work on hardware, but it is intentionally expressive enough to let you represent a lot of different views about software research progress; you just have to contribute more of that yourself when adjusting weights on the different scenarios, or the effective software contribution year by year. You can even represent a breakdown of the expectation that software and hardware significantly trade off over time, and very specific accounts of the AI software landscape and development paths. Regardless, modeling the most important changing input to AGI is useful, and I think this dialogue misleads with respect to that by equivocating between hardware not being the only contributing factor and its not being an extremely important, perhaps dominant, driver of progress.

Replies from: jacob_cannell, vanessa-kosoy, adamShimi
comment by jacob_cannell · 2021-12-02T18:14:04.471Z · LW(p) · GW(p)

I commend this comment and concur with the importance of hardware, the straw-manning of Moravec, etc.

However I do think that EY had a few valid criticisms of Ajeya's model in particular - it ends up smearing probability mass over many anchors or sub-models, most of which are arguably poorly grounded in deep engineering knowledge. And yes you can use it to create your own model, but most people won't do that and are just looking at the default median conclusion.

Moore's Law is petering out as we run up against the constraints of physics for practical irreversible computers, but the brain is also - at best - already at those same limits. So that should substantially reduce uncertainty concerning the hardware side (hardware parity now/soon), and thus place most of the uncertainty around software/algorithm iteration progress. The important algorithmic advances tend to change asymptotic scaling curvature rather than progress linearly, and really all the key uncertainty is over that - which I think is what EY is gesturing at, and rightly so.

comment by Vanessa Kosoy (vanessa-kosoy) · 2021-12-02T18:35:39.503Z · LW(p) · GW(p)

Historically compute has grown by many orders of magnitude, while human labor applied to AI and supporting software by only a few. And on plausible decompositions of progress (allowing for adjustment of software to current hardware and vice versa), hardware growth accounts for more of the progress over time than human labor input growth.

So if you're going to use an AI production function for tech forecasting based on inputs (which do relatively OK by the standards of tech forecasting), it's best to use all of compute, labor, and time, but it makes sense for compute to have pride of place and take in more modeling effort and attention, since it's the biggest source of change (particularly when including software gains downstream of hardware technology and expenditures).

I don't understand the logical leap from "human labor applied to AI didn't grow much" to "we can ignore human labor". The amount of labor invested in AI research is related to the time derivative of progress on the algorithms axis. Labor held constant is not the same as algorithms held constant. So, we are still talking about the problem of predicting when AI-capability(algorithms(t),compute(t)) reaches human level. What do you know about the function "AI-capability" that allows you to ignore its dependence on the 1st argument?

Or maybe you're saying that algorithmic improvements have not been very important in practice? Surely such a claim is not compatible with e.g. the transitions from GOFAI to "shallow" ML to deep ML?

Replies from: CarlShulman
comment by CarlShulman · 2021-12-02T19:36:10.836Z · LW(p) · GW(p)

A perfectly correlated time series of compute and labor would not let us say which had the larger marginal contribution, but we have resources to get at that, which I was referring to with 'plausible decompositions.' This includes experiments with old and new software and hardware, like the chess ones Paul recently commissioned [LW · GW], and studies by AI Impacts, OpenAI, and Neil Thompson. There are AI scaling experiments, and observations of the results of shocks like the end of Dennard scaling, the availability of GPGPU computing, and Besiroglu's data on the relative predictive power of compute and labor in individual papers and subfields.

In different ways those tend to put hardware as driving more log improvement than software (with both contributing), particularly if we consider software innovations downstream of hardware changes.

Replies from: vanessa-kosoy, Charlie Steiner
comment by Vanessa Kosoy (vanessa-kosoy) · 2021-12-02T19:44:01.400Z · LW(p) · GW(p)

I will have to look at these studies in detail in order to understand, but I'm confused about how this can pass some obvious tests. For example, do you claim that alpha-beta pruning can match AlphaGo given some not-crazy advantage in compute? Do you claim that SVMs can do SOTA image classification with a not-crazy advantage in compute (or with any amount of compute with the same training data)? Can Eliza-style chatbots compete with GPT3 however we scale them up?

Replies from: mark-xu
comment by Mark Xu (mark-xu) · 2021-12-03T20:01:26.607Z · LW(p) · GW(p)

My model is something like:

  • For any given algorithm, e.g. SVMs, AlphaGo, alpha-beta pruning, convnets, etc., there is an "effective compute regime" where dumping more compute makes them better. If you go above this regime, you get steep diminishing marginal returns.
  • In the (relatively small) effective regimes of old algorithms, new algorithms and old algorithms perform similarly. E.g. with small amounts of compute, using AlphaGo instead of alpha-beta pruning doesn't get you much more than something like an OOM of compute's worth of extra performance (I have no idea if this is true; the example is more because it conveys the general gist).
  • One of the main ways that modern algorithms are better is that they have much larger effective compute regimes. The other main way is enabling more effective conversion of compute to performance.
  • Therefore, one of the primary impacts of new algorithms is to enable performance to continue scaling with compute the same way it did when you had smaller amounts.

In this model, it makes sense to think of the "contribution" of new algorithms as the factor by which they enable more efficient conversion of compute to performance, and to count the increased performance that comes from new algorithms being able to absorb more compute as primarily hardware progress. I think the studies that Carl cites above are decent evidence that the multiplicative factor of compute -> performance conversion you get from new algorithms is smaller than the historical growth in compute, so it further makes sense to claim that most progress came from compute, even though the algorithms were what "unlocked" the compute.

For an example of something I consider supports this model, see the LSTM versus transformer graphs in https://arxiv.org/pdf/2001.08361.pdf
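A toy version of that accounting, with invented numbers (just one way to operationalize the model above, not the actual figures from any of the cited studies):

```python
import math

def performance(compute, algo):
    """Toy compute->performance curves: performance ~ log of the compute the
    algorithm can effectively absorb. Each algorithm converts compute at some
    efficiency but saturates above the top of its effective compute regime.
    All numbers are invented for illustration."""
    efficiency, regime_top = {
        "old_algo": (1.0, 1e8),    # worse conversion, saturates early
        "new_algo": (30.0, 1e20),  # ~30x better conversion, far larger effective regime
    }[algo]
    return math.log10(min(compute * efficiency, regime_top))

compute_then, compute_now = 1e8, 1e17   # 9 orders of magnitude of hardware growth (made up)

total_gain = performance(compute_now, "new_algo") - performance(compute_then, "old_algo")
software_gain = math.log10(30.0)                        # the conversion-factor improvement, ~1.5 OOM-equivalent
hardware_gain = math.log10(compute_now / compute_then)  # 9 OOM, including compute the new algorithm "unlocked"

print(f"total gain: {total_gain:.1f} OOM-equivalent")
print(f"  counted as software: {software_gain:.1f}")
print(f"  counted as hardware: {hardware_gain:.1f}")
```

In this accounting most of the measured gain is attributed to hardware even though the old algorithm could never have absorbed the new compute, which is the sense in which the algorithms "unlocked" it.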

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2021-12-04T12:32:40.385Z · LW(p) · GW(p)

Hmm... Interesting. So, this model says that algorithmic innovation is so fast that it is not much of a bottleneck: we always manage to find the best algorithm for given compute relatively quickly after this compute becomes available. Moreover, there is some smooth relation between compute and performance assuming the best algorithm for this level of compute. [EDIT: The latter part seems really suspicious though, why would this relation persist across very different algorithms?] Or at least this is true if "best algorithm" is interpreted to mean "best algorithm out of some wide class of algorithms s.t. we never or almost never managed to discover any algorithm outside of this class".

This can justify biological anchors as upper bounds[1]: if biology is operating using the best algorithm then we will match its performance when we reach the same level of compute, whereas if biology is operating using a suboptimal algorithm then we will match its performance earlier. However, how do we define the compute used by biology? Moravec's estimate is already in the past and there's still no human-level AI. Then there is the "lifetime" anchor from Cotra's report which predicts a very short timeline. Finally, there is the "evolution" anchor which predicts a relatively long timeline.

However, in Cotra's report most of the weight is assigned to the "neural net" anchors which talk about the compute for training an ANN of brain size using modern algorithms (plus there is the "genome" anchor in which the ANN is genome-sized). This is something that I don't see how to justify using Mark's model. On Mark's model, modern algorithms might very well hit diminishing returns soon, in which case we will switch to different algorithms which might have a completely different compute(parameter count) function.


  1. Assuming evolution also cannot discover algorithms outside our class of discoverable algorithms. ↩︎

Replies from: gwern, mark-xu
comment by gwern · 2021-12-04T15:29:29.120Z · LW(p) · GW(p)

What Moravec says is merely that $1k human-level compute will become available in the '2020s', and he offers several different trendline extrapolations: only the most aggressive puts us at cheap human-level compute in 2020/2021 (note the units on his graph are in decades). On the other extrapolations, we don't hit cheap human-compute until the end of the decade. He also doesn't commit to how long it takes to turn compute into powerful systems; it's more of a prerequisite: only once the compute is available can R&D really start, the same way that DL didn't start instantly in 2010 when various levels of compute/$ were hit. Seeds take time to sprout, to use his metaphor.

Replies from: vanessa-kosoy, Eliezer_Yudkowsky
comment by Vanessa Kosoy (vanessa-kosoy) · 2021-12-04T19:56:29.718Z · LW(p) · GW(p)

We already know how much compute we have, so we don't need Moravec's projections for this? If Yudkowsky described Moravec's analysis correctly, then Moravec's threshold was crossed in 2008. Or, by "other extrapolations" do you mean other estimates of human brain compute? Cotra's analysis is much more recent and IIUC she puts the "lifetime anchor" (a more conservative approach than Moravec's) at about one order of magnitude above the biggest models currently used.
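(Rough arithmetic behind that comparison, using commonly cited round figures rather than Cotra's exact numbers; the exact ratio depends a lot on which brain-compute estimate you plug in:)

```python
# Back-of-the-envelope for the "lifetime" anchor; round placeholder figures, not Cotra's exact values.
brain_flop_per_s = 1e15            # a commonly used central estimate of brain compute
subjective_seconds = 1e9           # ~30 years of experience
lifetime_anchor = brain_flop_per_s * subjective_seconds   # ~1e24 FLOP

gpt3_training_flop = 3.1e23        # roughly the published GPT-3 training compute

print(f"lifetime anchor ≈ {lifetime_anchor:.0e} FLOP")
print(f"≈ {lifetime_anchor / gpt3_training_flop:.0f}x a GPT-3-scale training run")
# With these round figures the anchor lands a few times, up to roughly an order of
# magnitude, above the biggest current training runs.
```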

Now, the seeds take time to sprout, but according to Mark's model this time is quite short. So, it seems like this line of reasoning produces a timeline significantly shorter than the Plattian 30 years.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-12-05T05:30:13.088Z · LW(p) · GW(p)

As much as Moravec-1988 and Moravec-1998 sound like they should be basically the same people, a decade passed between them, and I'd like to note that Moravec may legit have been making an updated version of his wrong argument in 1998 compared to 1988 after he had a chance to watch 10 more years pass and make his earlier prediction look less likely.

Replies from: paulfchristiano
comment by paulfchristiano · 2021-12-07T07:12:42.917Z · LW(p) · GW(p)

I think this is uncharitable and most likely based on a misreading of Moravec. (And generally with gwern on this one.)

As far as I can tell, the source for your attribution of this "prediction" is:

If this rate of improvement were to continue into the next century, the 10 teraops required for a humanlike computer would be available in a $10 million supercomputer before 2010 and in a $1,000 personal computer by 2030.

As far as I could tell it sounds from the surrounding text like his "prediction" for transformative impacts from AI was something like "between 2010 and 2030" with broad error bars.

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-12-12T18:41:33.038Z · LW(p) · GW(p)

Adding to what Paul said: jacob_cannell points to this comment [LW(p) · GW(p)] which claims that in Mind Children Moravec predicted human-level AGI in 2028.

Moravec, "Mind Children", page 68: "Human equivalence in 40 years". There he is actually talking about human-level intelligent machines arriving by 2028 - not just the hardware you would theoretically require to build one if you had the ten million dollars to spend on it.

I just went and skimmed Mind Children. He's predicting human-equivalent computational power on a personal computer in 40 years. He seems to say that humans will within 50 years be surpassed in every important way by machines (page 70, below), but I haven't found a more precise or short-term statement yet.

The robot who will work alongside us in half a century will have some interesting properties. Its reasoning abilities should be astonishingly better than a human's—even today's puny systems are much better in some areas. But its perceptual and motor abilities will probably be comparable to ours. Most interestingly, this artificial person will be highly changeable, both as an individual and from one of its generations to the next. But solitary, toiling robots, however competent, are only part of the story. Today, and for some decades into the future, the most effective computing machines work as tools in human hands. As the machinery grows in flexibility and initiative, this association between humans and machines will be more properly described as a partnership. In time, the relationship will become much more intimate, a symbiosis where the boundary between the "natural" and the "artificial" partner is no longer evident. This collaborative route is interesting for its powerful human consequences even if, as I believe, it will matter little in the long run whether or not humans are an intimate part of the evolving artificial intelligences.

Also, unimportant but cool: Check out his musing about the Fermi Paradox:

A frightening explanation is that the universe is prowled by stealthy wolves that prey on fledgling technological races. The only civilizations that survive long would be ones that avoid detection by staying very quiet. But wouldn't the wolves be more technically advanced than their prey and if so what could they gain from their raids? Our autonomous-message idea suggests an odd answer: The wolves may be simply helpless bits of data that, in the absence of civilizations, can only lie dormant in multimillion-year trips between galaxies or even inscribed on rocks. Only when a newly evolved, country bumpkin of a technological civilization stumbles and naively acts on one does its eons-old sophistication and ruthlessness, honed over the bodies of countless past victims, become apparent. Then it engineers a reproductive orgy that kills its host and propagates astronomical numbers of copies of itself into the universe, each capable only of waiting patiently for another victim to arise. It is a strategy already familiar to us on a small scale, for it is used by the viruses that plague biological organisms.

While this theory is not nearly as good as the theory I prefer (life is hard, aliens are rare) it strikes me as comparably plausible to the Dark Forest theory. I wonder why I hadn't heard of it before.

Replies from: ESRogs
comment by ESRogs · 2021-12-16T17:46:06.984Z · LW(p) · GW(p)

Those Fermi Paradox musings sound like the plot of A Fire Upon the Deep!

Replies from: jaan
comment by jaan · 2021-12-27T08:04:28.507Z · LW(p) · GW(p)

actually, the premise of david brin’s existence is a close match to moravec’s paragraph (not a coincidence, i bet, given that david hung around similar circles).

comment by Mark Xu (mark-xu) · 2021-12-05T01:37:48.934Z · LW(p) · GW(p)

The way that you would think about NN anchors in my model (caveat that this isn't my whole model):

  • You have some distribution over 2020-FLOPS-equivalent that TAI needs.
  • Algorithmic progress means that 20XX-FLOPS convert to 2020-FLOPS-equivalent at some 1:N ratio.
  • The function from 20XX to the 1:N ratio is relatively predictable, e.g. a "smooth" exponential with respect to time.
  • Therefore, even though current algorithms will hit DMR, the transition to the next algorithm that has less DMR is also predictably going to be some constant ratio better at converting current-FLOPS to 2020-FLOPS-equivalent.

E.g. in (some smallish) parts of my view, you take observations like "AGI will use compute more efficiently than human brains" and can ask questions like "but how much is the efficiency of compute->cognition increasing over time?" and draw that graph and try to extrapolate. Of course, the main trouble is in trying to estimate the original distribution of 2020-FLOPS-equivalent needed for TAI, which might go astray in the way a 1950-watt-equivalent needed for TAI will go astray.
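A minimal sketch of that bookkeeping; the distribution, the halving time, and the compute trajectory below are all placeholder assumptions, not values from Ajeya's report:

```python
import numpy as np

rng = np.random.default_rng(0)

# (1) Distribution over the 2020-FLOPS-equivalent needed for TAI (placeholder lognormal over OOMs).
needed_log10 = rng.normal(loc=35.0, scale=3.0, size=100_000)

# (2) Algorithmic progress: 20XX-FLOPS convert to 2020-FLOPS-equivalent at a 1:N ratio,
#     with N growing smoothly -- here, halving the requirement every 2.5 years.
def algorithmic_multiplier_log10(year):
    return (year - 2020) * np.log10(2.0) / 2.5

# (3) Compute available for the largest training run, in log10 FLOP (placeholder trajectory).
def available_log10(year):
    return 24.5 + 0.4 * (year - 2020)

for year in range(2030, 2071, 10):
    effective = available_log10(year) + algorithmic_multiplier_log10(year)
    p = np.mean(needed_log10 <= effective)
    print(f"{year}: P(enough 2020-FLOPS-equivalent) = {p:.2f}")
```

The worry that current algorithms hit DMR then lives entirely in how fast that multiplier is assumed to keep growing, which is exactly the part you have to supply yourself.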

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2021-12-05T09:40:57.132Z · LW(p) · GW(p)

I don't understand this.

  • What is the meaning of "2020-FLOPS-equivalent that TAI needs"? Plausibly you can't build TAI with 2020 algorithms without some truly astronomical amount of FLOPs.
  • What is the meaning of "20XX-FLOPS convert to 2020-FLOPS-equivalent"? If 2020 algorithms hit DMR, you can't match a 20XX algorithm with a 2020 algorithm without some truly astronomical amount of FLOPs.

Maybe you're talking about extrapolating the compute-performance curve, assuming that it stays stable across algorithmic paradigms (although, why would it??). However, in this case, how do you quantify the performance required for TAI? Do we have "real life elo" for modern algorithms that we can compare to human "real life elo"? Even if we did, this is not what Cotra is doing with her "neural anchor".

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-12-05T10:36:16.946Z · LW(p) · GW(p)
What is the meaning of "2020-FLOPS-equivalent that TAI needs"? Plausibly you can't build TAI with 2020 algorithms without some truly astronomical amount of FLOPs.

I think 10^35 would probably be enough. This post [LW · GW] gives some intuition as to why, and also goes into more detail about what 2020-flops-equivalent-that-TAI-needs means. If you want even more detail + rigor, see Ajeya's report. If you think it's very unlikely that 10^35 would be enough, I'd love to hear more about why -- what are the blockers? Why would OmegaStar, SkunkWorks, etc. described in the post (and all the easily-accessible variants thereof) fail to be transformative? (Also, same questions for APS-AI or AI-PONR [LW · GW] instead of TAI, since I don't really care about TAI)

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2021-12-05T11:07:44.652Z · LW(p) · GW(p)

I didn't ask how much, I asked what it even means. I think I understand the principles of Cotra's report. What I don't understand is why we should believe the "neural anchor" when (i) modern algorithms applied to a brain-sized ANN might not produce brain-performance and (ii) the compute cost of future algorithms might behave completely differently. (i.e. I don't understand how Carl's and Mark's arguments in this thread protect the neural anchor from Yudkowsky's criticism.)

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-12-05T11:25:39.149Z · LW(p) · GW(p)

These are three separate things:

(a) What is the meaning of "2020-FLOPS-equivalent that TAI needs?"

(b) Can you build TAI with 2020 algorithms without some truly astronomical amount of FLOPs?

(c) Why should we believe the "neural anchor?"

(a) is answered roughly in my linked post and in much more detail and rigor in Ajeya's doc.

(b) depends on what you mean by truly astronomical; I think it would probably be doable for 10^35, Ajeya thinks 50% chance.

For (c), I actually don't think we should put that much weight on the "neural anchor," and I don't think Ajeya's framework requires that we do (although, it's true, most of her anchors do center on this human-brain-sized ANN scenario which indeed I think we shouldn't put so much weight on.) That said, I think it's a reasonable anchor to use, even if it's not where all of our weight should go. This post [LW · GW] gives some of my intuitions about this. Of course Ajeya's report says a lot more.

comment by Charlie Steiner · 2021-12-03T04:18:08.665Z · LW(p) · GW(p)

The chess link maybe should go to hippke's work [LW · GW]. What you can see there is that a fixed chess algorithm takes an exponentially growing amount of compute and transforms it into logarithmically-growing Elo. Similar behavior features in recent pessimistic predictions of deep learning's future trajectory.

If general navigation of the real world suffers from this same logarithmic-or-worse penalty when translating hardware into performance metrics, then (perhaps surprisingly) we can't conclude that hardware is the dominant driver of progress by noticing that the cost of compute is dropping rapidly.

Replies from: CarlShulman
comment by CarlShulman · 2021-12-03T18:01:53.036Z · LW(p) · GW(p)

But new algorithms also don't work well on old hardware. That's evidence in favor of Paul's view that much software work is adapting to exploit new hardware scales.

Replies from: Charlie Steiner
comment by Charlie Steiner · 2021-12-04T00:50:34.524Z · LW(p) · GW(p)

Which examples are you thinking of? Modern Stockfish outperformed historical chess engines even when using the same resources, until far enough in the past that computers didn't have enough RAM to load it.

I definitely agree with your original-comment points about the general informativeness of hardware, and absolutely software is adapting to fit our current hardware. But this can all be true even if advances in software can make more than 20 orders of magnitude difference in what hardware is needed for AGI, and are much less predictable than advances in hardware rather than being adaptations in lockstep with it.

Replies from: paulfchristiano
comment by paulfchristiano · 2021-12-07T07:26:14.208Z · LW(p) · GW(p)

Here are the graphs from Hippke (he or I should publish a summary at some point, sorry).

I wanted to compare Fritz (which won WCCC in 1995) to a modern engine to understand the effects of hardware and software on performance. I think the time controls for that tournament are similar to SF STC. I wanted to compare to SF8 rather than one of the NNUE engines to isolate out the effect of compute at development time and just look at test-time compute.

So having modern algorithms would have let you win WCCC while spending about 50x less on compute than the winner. Having modern computer hardware would have let you win WCCC spending way more than 1000x less on compute than the winner. Measured this way software progress seems to be several times less important than hardware progress despite much faster scale-up of investment in software.

But instead of asking "how well does hardware/software progress help you get to 1995 performance?" you could ask "how well does hardware/software progress get you to 2015 performance?" and on that metric it looks like software progress is way more important because you basically just can't scale old algorithms up to modern performance.

The relevant measure varies depending on what you are asking. But from the perspective of takeoff speeds, it seems to me like one very salient takeaway is: if one chess project had literally come back in time with 20 years of chess progress, it would have allowed them to spend 50x less on compute than the leader.

ETA: but note that the ratio would be much more extreme for Deep Blue, which is another reasonable analogy you might use.
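Putting rough numbers on "several times less important" (the 50x is from the comparison above; the hardware factor is a placeholder for "way more than 1000x"):

```python
import math

software_factor = 50         # compute saved by ~20 years of chess software progress (from the comparison above)
hardware_factor = 100_000    # placeholder for "way more than 1000x" of hardware price-performance growth

software_ooms = math.log10(software_factor)   # ~1.7 orders of magnitude
hardware_ooms = math.log10(hardware_factor)   # 5.0 orders of magnitude

print(f"software: {software_ooms:.1f} OOM, hardware: {hardware_ooms:.1f} OOM, "
      f"ratio ≈ {hardware_ooms / software_ooms:.1f}x")
```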

Replies from: Charlie Steiner
comment by Charlie Steiner · 2021-12-07T08:38:32.373Z · LW(p) · GW(p)

Yeah, the nonlinearity means it's hard to know what question to ask.

If we just eyeball the graph and say that the Elo is log(log(compute)) + time (I'm totally ignoring constants here), and we assume that compute = e^(kt) so that conveniently log(log(compute)) = log(kt), then Elo = log(kt) + t. The first term is from compute and the second from software. And so our history is totally not scale-free! There's some natural timescale (set by the constants I'm ignoring) before which chess progress was dominated by compute and after which chess progress will be (was?) dominated by software.
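Filling in explicit constants to locate that crossover (the values below are arbitrary, purely to illustrate the shape):

```python
import math

# Elo(t) ≈ a*log(log(compute(t))) + b*t, with compute(t) = C0 * e**(k*t),
# so the compute term contributes a*log(log(C0) + k*t) and the software term contributes b*t.
a, b = 400.0, 40.0     # Elo per e-fold of log-compute, and Elo per year from software (arbitrary)
C0, k = 1e6, 2.0       # starting compute and its exponential growth rate (arbitrary)

def compute_term_rate(t):
    """d/dt of the compute term: a*k / (log(C0) + k*t), which decays over time."""
    return a * k / (math.log(C0) + k * t)

# Crossover when the compute term's rate falls to the software rate b:
t_star = a / b - math.log(C0) / k
print(f"compute-dominated before t ≈ {t_star:.1f}, software-dominated after "
      f"(rate at the crossover: {compute_term_rate(t_star):.0f} ≈ {b:.0f} Elo/yr)")
```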

Though maybe I shouldn't spend so much time guessing at the phenomenology of chess, and different problems will have different scaling behavior :P I think this is the case for text models and things like the Winograd schema challenges.

comment by adamShimi · 2021-12-14T11:59:59.854Z · LW(p) · GW(p)

(I'm trying to answer and clarify some of the points in the comments based on my interpretation of Yudkowsky in this post. So take the interpretations with a grain of salt, not as "exactly what Yudkowsky meant")

Progress in AI has largely been a function of increasing compute, human software research efforts, and serial time/steps. Throwing more compute at researchers has improved performance both directly and indirectly (e.g. by enabling more experiments, refining evaluation functions in chess, training neural networks, or making algorithms that work best with large compute more attractive).

Historically compute has grown by many orders of magnitude, while human labor applied to AI and supporting software  by only a few. And on plausible decompositions of progress (allowing for adjustment of software to current hardware and vice versa), hardware growth accounts for more of the progress over time than human labor input growth.

So if you're going to use an AI production function for tech forecasting based on inputs (which do relatively OK by the standards of tech forecasting), it's best to use all of compute, labor, and time, but it makes sense for compute to have pride of place and take in more modeling effort and attention, since it's the biggest source of change (particularly when including software gains downstream of hardware technology and expenditures).

My summary of what you're defending here: because hardware progress is (according to you) the major driver of AI innovation, we should invest a lot of our forecasting resources into forecasting it, and we should leverage it as the strongest source of evidence available for thinking about AGI timelines.

I feel like this is not in contradiction with what Yudkowsky wrote in this post? I doubt he agrees that just additional compute is the main driver of progress (after all, the Bitter Lesson mostly tells you that insights and innovations leveraging more compute will beat hardcoded ones), but insofar as he expects us to have next to no knowledge of how to build AGI until around 2 years before it is done (and then only for those with the Thielian secret), then compute is indeed the next best thing that we have to estimate timelines.

Yet Yudkowsky's point is that being the next best thing doesn't mean it's any good.

Thinking about hardware has a lot of helpful implications for constraining timelines:

  • Evolutionary anchors, combined with paleontological and other information (if you're worried about Rare Earth miracles), mostly cut off extremely high input estimates for AGI development, like Robin Hanson's [LW · GW], and we can say from known human advantages relative to evolution that credence should be suppressed some distance short of that (moreso with more software progress)

Evolution being an upper bound makes sense, and I think Yudkowsky agrees. But it's an upper bound on the whole human optimization process, and the search space of human optimization is tricky to think about. I see much of Yudkowsky's criticism of biological estimates here as saying "this biological anchor doesn't express the cost of evolution's optimization in terms of human optimization, but instead goes for a proxy which doesn't tell you anything".

So if someone captured both evolution and human optimization in the same search space, and found an upper bound on the cost (in terms of optimization power) that evolution spent to find humans, then I expect Yudkowsky would agree that this is an upper bound for the optimization power that humans will use. But he might still retort that translating optimization power into compute is not obvious.

  • You should have lower a priori credence in smaller-than-insect brains yielding AGI than in more middle-of-the-range compute budgets

Okay, I'm going to propose what I think is the chain of arguments you're using here:

  • Currently, we can train what sounds like the compute equivalent of insect brains, and yet we don't have AGI. Hence we're not currently able to build AGI with "smaller-than-insect brains", which means AGI is less likely to be created with "smaller-than-insect brains".
    • I agree that we don't have AGI
    • The "compute equivalent" stuff is difficult, as I mentioned above, but I don't think this is the main issue here.
    • Going from "we don't know how to do that now" to "we should expect that it is not how we will do it" doesn't really work IMO. As Yudkowsky points out, the requirements for AGI are constantly dropping, and maybe a new insight will turn out to make smaller neural nets far more powerful, before the bigger models reach AGI
  • Evolution created insect-sized brains and they were clearly not AGI, so we have evidence against AGI with that amount of resources.
    • Here the fact that evolution is far worse an optimizer than humans breaks most of the connection between evolution creating insects and humans creating AGI. Evolution merely shows that insects can be made with insect-sized brains, not that AGI cannot be extracted by better use of the same resources.
    • From my perspective this is exactly what Yudkowsky is arguing against in this post: just because you know of a bunch of paths through the search space doesn't mean you know what a cleverer optimizer could find. There are ways to use a bunch of paths as data to understand the search space, but you then need either to argue that they are somehow dense in the search space, or that the sort of paths you're interested in look similar to this bunch of paths. And at the moment, I don't see an argument of either form.
  • By default we should expect AGI to have a decent minimal size because of its complexity, hence smaller models have a lower credence.
    • Agree with the principle (sounds improbable that AGI will be made in 10 lines of LISP), but the threshold is where most of the difficulty lies: how much is too little? A hundred neurons sounds clearly too small, but when you reach insect-sized brains, it's not obvious (at least to me) that better use of resources couldn't bring you most of the way to AGI.
    • (I wonder if there's an availability bias here where the only good models we have nowadays are huge, hence we expect that AGI must be a huge model?)
  • It lets you see you should concentrate probability mass in the next decade or so because of the rapid scaleup of compute investment  (with a supporting argument from the increased growth of AI R&D effort) covering a substantial share of the orders of magnitude between where we are and levels that we should expect are overkill

I think this is where the crux of whether the current paradigm can just scale matters a lot. The main point Yudkowsky uses in the dialogue to argue against your concentration of probability mass is that he doesn't agree that deep learning scales that way to AGI. In his view (on which I'm not clear yet, and which is not a view that I've seen anyone who actually studies LMs hold), the increase in performance will break down before then. And as such, the concentration of probability mass shouldn't happen, because the fact that you can reach the anchor is irrelevant since we don't know a way to turn compute into AGI (according to Yudkowsky's view).

  • It gets you likely AGI this century, and on the closer part of that, with a pretty flat prior over orders of magnitude of inputs that will go into success

Here too, it depends on transforming the optimization power of evolution into compute and other requirements, and then knowing how this compute is supposed to be transformed into efficiency and AGI. (That being said, I think Yudkowsky agrees with the conclusion, just not that specific way of reaching it).

  • It suggests lower annual probability later on if Moore's Law and friends are dead, with stagnant inputs to AI

Not clear to me what you mean here (might be clearer with the right link to the section of Cotra's report about this).  But note that based on Yudkowsky's model in this post, the cost to make AGI should continue to drop as long as the world doesn't end, which creates a weird situation where the probability of AGI keeps increasing with time (Not sure how to turn that into a distribution though...)

These are all useful things highlighted by Ajeya's model, and by earlier work like Moravec's. In particular, I think Moravec's forecasting methods are looking pretty good, given the difficulty of the problem. He and Kurzweil (like the computing industry generally)  were surprised by the death of Dennard scaling and general price-performance of computing growth slowing, and we're definitely years behind his forecasts in AI capability, but we are seeing a very compute-intensive AI boom in the right region of compute space. Moravec also did anticipate it would take a lot more compute than one lifetime run to get to AGI. He suggested human-level AGI would be in the vicinity of human-like compute quantities being cheap and available for R&D. This old discussion is flawed, but makes me feel the dialogue is straw-manning Moravec to some extent.

This is in the same spirit as a bunch of comments on this post, and I feel like it's missing the point of the post? Like, it's not about Moravec's estimate being wildly wrong, it's about the unsoundness of the methods by which Moravec reaches his conclusion. Your analysis doesn't give such strong evidence of Moravec's predictive accuracy that we should expect he has a really strong method that just looks bad to Yudkowsky but is actually sound. And I feel points like that don't go at the cruxes at all (the soundness of the method); instead they mostly correct a "too harsh judgment" by Yudkowsky, without invalidating his points.

Ajeya's model puts most of the modeling work on hardware, but it is intentionally expressive enough to let you represent a lot of different views about software research progress; you just have to contribute more of that yourself when adjusting weights on the different scenarios, or the effective software contribution year by year. You can even represent a breakdown of the expectation that software and hardware significantly trade off over time, and very specific accounts of the AI software landscape and development paths. Regardless, modeling the most important changing input to AGI is useful, and I think this dialogue misleads with respect to that by equivocating between hardware not being the only contributing factor and its not being an extremely important, perhaps dominant, driver of progress.

Hmm, my impression here is that Yudkowsky is actually arguing that he is modeling AGI timelines that way; and that if you don't add unwarranted assumptions and don't misuse the analogies to biological anchors, then you get his model, which is completely unable to give the sort of answer Cotra's model is outputting.

Or said differently, I expect that Yudkowsky thinks that if you reason correctly and only use actual evidence instead of unsound lines of reasoning, you get his model; but doing that in the explicit context of biological anchors is like trying to quit sugar in a sweetshop: the whole setting just makes it far harder. And given that he expects he can get the right constraints on models without the biological anchors stuff, it's completely redundant AND unhelpful.

comment by Aryeh Englander (alenglander) · 2021-12-03T19:09:31.974Z · LW(p) · GW(p)

Meta-comment:

I noticed that I found it very difficult to read through this post, even though I felt the content was important, because of the (deliberately) condescending style. I also noticed that I'm finding it difficult to take the ideas as seriously as I think I should, again due to the style. I did manage to read through it in the end, because I do think it's important, and I think I am mostly able to avoid letting the style influence my judgments. But I find it fascinating to watch my own reaction to the post, and I'm wondering if others have any (constructive) insights on this.

In general I've noticed that I have a very hard time reading things that are written in a polemical, condescending, insulting, or ridiculing manner. This is particularly true of course if the target is a group / person / idea that I happen to like. But even if it's written by someone on "my side" I find I have a hard time getting myself to read it. There have been several times when I've been told I should really go read a certain book, blog, article, etc., and that it has important content I should know about, but I couldn't get myself to read the whole thing due to the polemical or insulting way in which it was written.

Similarly, as I noted above, I've noticed that I often have a hard time taking ideas as seriously as I probably should if they're written in a polemical / condescending / insulting / ridiculing style. I think maybe I tend to down-weight the credibility of anybody who writes like that, and by extension maybe I subconsciously down-weight the content? Maybe I'm subconsciously associating condescension (at least towards ideas / people I think of as worth taking seriously) with bias? Not sure.

I've heard from other people that they especially like polemical / condescending articles, and I imagine that it is effective / persuasive for a lot of readers. For all I know this is far and away the most effective way of writing this kind of thing. And even if not, Eliezer is perfectly within his rights to use whatever style he wants. Eliezer explicitly acknowledges the condescending-sounding tone of the article, but felt it was worth writing it that way anyway, and that's fine.

So to be clear: This is not at all a criticism of the way this post was written. I am simply curious about my own reaction to it, and I'm interested to hear what others think about that.

A few questions:

  1. Am I unusual in this? Do other people here find it difficult to read polemical or condescending writing, and/or do you find that the style makes it difficult for you to take the content as seriously as you perhaps should?
  2. Are there any studies you're aware of on how people react to polemical writing?
  3. Are there some situations in which it actually does make sense to use the kind of intuitive heuristic I was using - i.e., if it's written in a polemical / insulting style then it's probably less credible? Or is this just a generally bad heuristic that I should try to get rid of entirely?
  4. This is a topic I'm very interested in so I'd appreciate any other related comments or thoughts you might have.
Replies from: Zvi, TurnTrout, Kaj_Sotala, RobbBB, RobbBB, sil-ver, Jotto999, Pattern, charlie-sanders-1
comment by Zvi · 2021-12-04T13:16:28.138Z · LW(p) · GW(p)

Things I instinctively observed slash that my model believes that I got while reading that seem relevant, not attempting to justify them at this time:

  1. There is a core thing that Eliezer is trying to communicate. It's not actually about timeline estimates, that's an output of the thing. Its core message length is short, but all attempts to find short ways of expressing it, so far, have failed.
  2. Mostly so have very long attempts to communicate it and its prerequisites, which to some extent at least includes the Sequences. Partial success in some cases, full success in almost none.
  3. This post, and this whole series of posts, feels like its primary function is training data to use to produce an Inner Eliezer that has access to the core thing, or even better to know the core thing in a fully integrated way. And maybe a lot of Eliezer's other communications is kind of also trying to be similar training data, no matter the superficial domain it is in or how deliberate that is. 
  4. The condescension is important information to help a reader figure out what is producing the outputs, and hiding it would make the task of 'extract the key insights' harder. 
  5. Similarly, the repetition of the same points is also potentially important information that points towards the core message.
  6. That doesn't mean all that isn't super annoying to read and deal with, especially when he's telling you in particular that you're wrong. Cause it's totally that. 
  7. There are those for whom this makes it easier to read, especially given it is very long, and I notice both effects.
  8. My Inner Eliezer says that writing this post without the condescension, or making it shorter, would be much much more effort for Eliezer to write. To the extent such a thing can be written, someone else has to write that version. Also, it's kind of text in several places.
  9. The core message is what matters and the rest mostly doesn't?
  10. I am arrogant enough to think I have a non-zero chance that I know enough of the core thing and have enough skill that with enough work I could perhaps find an improved way to communicate it given the new training data, and I have the urge to try this impossible-level problem if I could find the time and focus (and help) to make a serious attempt. 
Replies from: AprilSR, Pattern, adamShimi, davidad
comment by AprilSR · 2021-12-06T06:23:44.266Z · LW(p) · GW(p)

I would very much like to read your attempt at conveying the core thing - if nothing else, it'll give another angle from which to try to grasp it.

comment by Pattern · 2021-12-04T17:35:11.689Z · LW(p) · GW(p)
Also, it's kind of text in several places. [end of point 8.]

What did you mean by this?

Replies from: Oliver Sourbut
comment by Oliver Sourbut · 2021-12-09T20:03:01.682Z · LW(p) · GW(p)

I also stumbled on this point. I think it parses as

[attempt paraphrasing Zvi]

  1. My Inner Eliezer says, "Writing this post without the condescension, or making it shorter, would be much much more effort for Eliezer to write. To the extent such a thing can be written, someone else has to write that version." Also, besides my Inner Eliezer saying that, the preceding statement is almost explicit in the text in several places.

Evidence for the last bit is things like

[Eliezer's OP]

Your grandpa is feeling kind of tired now and can't debate this again with as much energy as when he was younger.

etc.

comment by adamShimi · 2021-12-06T09:37:13.942Z · LW(p) · GW(p)

I endorse most of this comment; this "core thing" idea is exactly what I tried to understand when writing my recent post [LW · GW] on deep knowledge according to Yudkowsky.

This post, and this whole series of posts, feels like its primary function is training data to use to produce an Inner Eliezer that has access to the core thing, or even better to know the core thing in a fully integrated way. And maybe a lot of Eliezer's other communications is kind of also trying to be similar training data, no matter the superficial domain it is in or how deliberate that is.

Yeah, that sounds right. I feel like Yudkowsky always writes mostly training data, and feels like explaining the thing he's talking about as precisely as he can never works. I agree with him that it can't work without the reader doing a bunch of work (what he calls homework), but I expect (from my personal experience) that doing the work while you have an outline of the thing is significantly easier. It's easier to trust that there's something valuable at the end of the tunnel when you have a half-decent description.

The condescension is important information to help a reader figure out what is producing the outputs, and hiding it would make the task of 'extract the key insights' harder.

Here though I feel like you're overinterpreting. In older writing, Yudkowsky is actually quite careful not to directly insult people or be condescending. I'm not saying he never does it, but he tones it down a lot compared to what's happening in this recent dialogue. I think that a better explanation is simply that he's desperate, and has very little hope of being able to convey what he means, because he's been doing that for 13 years and no one has caught on.

Maybe point 8 is also part of the explanation: doing this non-condescendingly sounds like far more work for him, and yet he doesn't expect it to work, so he doesn't take on that extra burden for little expected reward.

My Inner Eliezer says that writing this post without the condescension, or making it shorter, would be much much more effort for Eliezer to write. To the extent such a thing can be written, someone else has to write that version. Also, it's kind of text in several places.

comment by davidad · 2021-12-10T08:57:53.284Z · LW(p) · GW(p)
comment by TurnTrout · 2021-12-03T21:30:49.976Z · LW(p) · GW(p)

I find it concerning that you felt the need to write "This is not at all a criticism of the way this post was written. I am simply curious about my own reaction to it" (and still got downvoted?).

For my part, I both believe that this post contains valuable content and good arguments, and that it was annoying / rude / bothersome in certain sections.

comment by Kaj_Sotala · 2021-12-05T11:34:06.542Z · LW(p) · GW(p)

I had a pretty strong negative reaction to it. I got the feeling that the post derives much of its rhetorical force from setting up an intentionally stupid character who can be condescended to, and that this is used to sneak in a conclusion that would seem much weaker without that device.

comment by Rob Bensinger (RobbBB) · 2021-12-06T03:17:16.470Z · LW(p) · GW(p)

When I try to mentally simulate negative reader-reactions to the dialogue, I usually get a complicated feeling that's some combination of:

  • Some amount of conflict aversion: Harsh language feels conflict-y, which is inherently unpleasant.
  • Empathy for, or identification with, the people or views Eliezer was criticizing. It feels bad to be criticized, and it feels doubly bad to be told 'you are making basic mistakes'.
  • Something status-regulation-y: My reader-model here finds the implied threat to the status hierarchy salient (whether or not Eliezer is just trying to honestly state his beliefs), and has some version of an 'anti-cheater [LW · GW]' or 'anti-rising-above-your-station' impulse.

How right/wrong do you think this is, as a model of what makes the dialogue harder or less pleasant to read from your perspective?

(I feel a little wary of stating my model above, since (a) maybe it's totally off, and (b) it can be rude to guess at other people's mental states. But so far this conversation has felt very abstract to me, so maybe this can at least serve as a prompt to go more concrete. E.g., 'I find it hard to read condescending things' is very vague about which parts of the dialogue we're talking about, about what makes them feel condescending, and about how the feeling-of-condescension affects the sentence-parsing-and-evaluating experience.)

Replies from: alenglander, matthew-barnett
comment by Aryeh Englander (alenglander) · 2021-12-06T10:06:25.043Z · LW(p) · GW(p)

I think part of what I was reacting to is a kind of half-formed argument that goes something like:

  • My prior credence is very low that all these really smart, carefully thought-through people are making the kinds of stupid or biased mistakes they are being accused of.
  • In fact, my prior for the above is sufficiently low that I suspect it's more likely that the author is the one making the mistake(s) here, at least in the sense of straw-manning his opponents.
  • But if that's the case then I shouldn't trust the other things he says as much, because it looks like he's making reasoning mistakes himself or else he's biased.
  • Therefore I shouldn't take his arguments so seriously.

Again, this isn't actually an argument I would make. It's just me trying to articulate my initial negative reactions to the post.

Replies from: elityre
comment by Eli Tyre (elityre) · 2022-05-12T02:18:36.840Z · LW(p) · GW(p)

Right. And according to Zvi's posit above, a large part of the point of this dialog is that that class of implicit argument is not actually good reasoning (acknowledging that you don't endorse this argument).

More specifically, says my Inner Eliezer, it is less helpful to reason from or about one's priors about really smart, careful-thinking people making or not making mistakes, and much more helpful to think directly about the object-level arguments, and whether they seem true.

Replies from: alenglander
comment by Aryeh Englander (alenglander) · 2022-05-12T17:32:07.836Z · LW(p) · GW(p)

"More specifically, says my Inner Eliezer, it is less helpful to reason from or about one's priors about really smart, careful-thinking people making or not making mistakes, and much more helpful to think directly about the object-level arguments, and whether they seem true."

When you say it's much more helpful, do you mean it's helpful for (a) forming accurate credences about which side is in fact correct, or do you just mean it's helpful for (b) getting a much deeper understanding of the issues? If (b) then I totally agree. If (a) though, why would I expect myself to achieve a more accurate credence about the true state of affairs than any of the people in this argument? If it's because they've stated their arguments for all the world to see so now anybody can go assess those arguments - why should I think I can better assess those arguments than Eliezer and his interlocutors? They clearly still disagree with each other despite reading all the same things I'm reading (and much more, actually). And add to that the fact that Eliezer is essentially saying in these dialogues that he has private reasoning and arguments that he cannot properly express and nobody seems to understand, in which case we have no choice but to do a secondary assessment of how likely he is to have good arguments of that type, or else to form our credences while completely ignoring the possible existence of a very critical argument in one direction.

Sometimes assessments of the argument maker's cognitive abilities and access to relevant knowledge / expertise is in fact the best way to get the most accurate credence you can, even if it's not ideal.

(This is all just repeating standard arguments in favor of modest epistemology, but still.)

comment by Matthew Barnett (matthew-barnett) · 2021-12-06T06:39:02.984Z · LW(p) · GW(p)

I had mixed feelings about the dialogue personally. I enjoy the writing style and think Eliezer is a great writer with a lot of good opinions and arguments, which made it enjoyable.

But at the same time, it felt like he was taking down a strawman. Maybe you’d label it part of “conflict aversion”, but I tend to get a negative reaction to take-downs of straw-people who agree with me.

To give an unfair and exaggerated comparison, it would be a bit like reading a take-down of a straw-rationalist in which the straw-rationalist occasionally insists such things as “we should not be emotional” or “we should always use Bayes’ Theorem in every problem we encounter.” It should hopefully be easy to see why a rationalist might react negatively to reading that sort of dialogue.

comment by Rob Bensinger (RobbBB) · 2021-12-03T19:49:47.017Z · LW(p) · GW(p)

I've gotten one private message expressing more or less the same thing about this post, so I don't think this is a super unusual reaction.

comment by Rafael Harth (sil-ver) · 2021-12-03T19:50:53.643Z · LW(p) · GW(p)

1: To me, it made it more entertaining and thus easier to read. (No idea about non-anecdotal data, would also be interested.)

3: Also no data; I strongly suspect the metric is generally good because... actually I think it's just because the people I find worth listening to are overwhelmingly not condescending. This post seems highly unusual in several ways.

comment by Jotto999 · 2022-06-05T14:38:54.352Z · LW(p) · GW(p)

My posting this comment will be contrary to the moderation disclaimer advising not to talk about tone.  But FWIW, I react similarly and I skip reading things written in this way, interpreting them as manipulating me into believing the writer is hypercompetent.

comment by Pattern · 2021-12-04T17:31:27.548Z · LW(p) · GW(p)
Meta-comment:

It's not just a meta issue. The way it's written has a big impact on how to engage with it.


In general I've noticed that I have a very hard time reading things that are written in a polemical, condescending, insulting, or ridiculing manner. This is particularly true of course if the target is a group / person / idea that I happen to like. But even if it's written by someone on "my side" I find I have a hard time getting myself to read it. There have been several times when I've been told I should really go read a certain book, blog, article, etc., and that it has important content I should know about, but I couldn't get myself to read the whole thing due to the polemical or insulting way in which it was written.

I dealt with this by reading it and trying to be critical. The comment this produced was (predictably) downvoted.

comment by Charlie Sanders (charlie-sanders-1) · 2021-12-09T21:08:37.081Z · LW(p) · GW(p)
  1. The size of the community working on the alignment problem can be assumed to be at least somewhat proportional to the likelihood of successfully solving the alignment problem.
  2. Eliezer, being the most public face of the alignment problem community, wields outsized influence in shaping public perception of the community.
  3. Eliezer's writing is distinctly condescending and polemical, and has at least a hypothetical possibility of causing reputational harm to the community (as evidenced by your comment).

Based on this, there absolutely exists a hypothetical point where, based purely on writing style, the net effect of a post like this could fully undermine the post's ostensible aim. Whether this post crosses that point is a subjective evaluation, and I don't know of any rigorous way to evaluate this.

I'm fully aware that this could be construed as "tone policing", but ignorance of the impacts of writing tone seems like a blind spot to Eliezer and the community overall, so I think the topic is worthy of discussion.

Replies from: RobbBB
comment by Grant Demaree (grant-demaree) · 2021-12-02T04:38:05.832Z · LW(p) · GW(p)

Short summary: Biological anchors are a bad way to predict AGI. It’s a case of “argument from comparable resource consumption.” Analogy: human brains use 20 Watts. Therefore, when we have computers with 20 Watts, we’ll have AGI! The 2020 OpenPhil estimate of 2050 is based on a biological anchor, so we should ignore it.

Longer summary:

Lots of folks made bad AGI predictions by asking: 

  1. How much compute is needed for AGI?
  2. When will that compute be available?

To find (1), they use a “biological anchor,” like the computing power of the human brain, or the total compute used to evolve human brains.

Hans Moravec, 1988: the human brain uses 10^13 ops/s, and computers with this power will be available in 2010. 
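
To make the shape of that argument concrete, here is a minimal sketch of the Moravec-style calculation. The 1988 starting speed and the doubling time below are illustrative assumptions of mine, not Moravec's actual figures:

```python
# Toy sketch of the Moravec-style forecast: anchor on the brain's estimated ops/s and
# ask when hardware trends deliver it. Starting speed and doubling time are assumptions.
brain_ops_per_s = 1e13   # the biological anchor
ops_per_s = 1e8          # assumed affordable computer speed in 1988
doubling_years = 1.5     # assumed price-performance doubling time

year = 1988.0
while ops_per_s < brain_ops_per_s:
    year += doubling_years
    ops_per_s *= 2
print("brain-equivalent ops/s affordable around:", int(year))  # ~2013 with these numbers
```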

Eliezer objects that:

  1. “We’ll have computers as fast as human brains in 2010” doesn’t imply “we’ll have strong AI in 2010.”
  2. The compute needed depends on how well we understand cognition and computer science. It might be done with a hypercomputer but very little knowledge, or a modest computer but lots of knowledge.
  3. An AGI wouldn't actually need 10^13 ops/s, because human brains are inefficient. For example, they do lots of operations in parallel that could be replaced with fewer operations in series.

Eliezer, 1999: Eliezer mentions that he too made bad AGI predictions as a teenager

Ray Kurzweil, 2001: Same idea as Moravec, but 10^16 ops/s. Not worth repeating

Someone, 2006: it took ~10^43 ops for evolution to create human brains. It’ll be a very long time before a computer can reach 10^43 ops, so AGI is very far away

Eliezer objects that the use of a biological anchor is sufficient to make this estimate useless. It’s a case of a more general “argument from comparable resource consumption.”

Analogy: human brains use 20 Watts. Therefore, when we have computers with 20 Watts, we’ll have AGI!

OpenPhil, 2020: A much more sophisticated estimate, but still based on a biological anchor. They predict AGI in 2050.

How the new model works (a toy numerical sketch of how these pieces fit together follows the lists below):

Demand side: Estimate how many neural-network parameters would emulate a brain. Use this to find the computational cost of training such a model. (I think this part mischaracterizes OpenPhil's work; see my comments at the bottom.)

Supply side: Moore’s law, assuming 

  1. Willingness to spend on AGI training is a fixed percent of GDP
  2. “Computation required to accomplish a fixed task decreases by half every 2-3 years due to better algorithms.”
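
As promised above, here is a toy sketch of the demand-vs-supply crossing. It is my own simplification with made-up numbers, not OpenPhil's actual model or parameter values; the point is only the shape of the calculation:

```python
# Toy sketch of a bio-anchors-style crossing: fixed biologically anchored demand,
# supply growing with spending, hardware, and algorithmic progress. Numbers are made up.

# Demand side: training compute (FLOP) for a brain-scale model.
params = 1e14                      # assumed parameter count, anchored on synapse count
flop_per_param_per_example = 6     # rough training-cost factor
training_examples = 1e16           # assumed effective training examples
demand_flop = params * flop_per_param_per_example * training_examples   # ~6e30 FLOP

# Supply side: largest affordable "effective" training run, growing over time.
spend_2020 = 1e8                   # assumed max training spend in 2020 (USD)
gdp_growth = 1.03                  # spending grows as a fixed share of GDP
flop_per_dollar_2020 = 1e17        # assumed 2020 hardware price-performance
hardware_doubling_years = 2.5      # price-performance doubling time
algo_halving_years = 2.5           # compute needed for a fixed task halves this often

def effective_supply(year):
    t = year - 2020
    spend = spend_2020 * gdp_growth ** t
    flop_per_dollar = flop_per_dollar_2020 * 2 ** (t / hardware_doubling_years)
    algo_factor = 2 ** (t / algo_halving_years)    # algorithmic progress stretches each FLOP
    return spend * flop_per_dollar * algo_factor

for year in range(2020, 2101):
    if effective_supply(year) >= demand_flop:
        print("toy crossing year:", year)          # ~2043 with these made-up numbers
        break
```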

Eliezer’s objections:

  1. (Surprise!) It’s still founded on a biological anchor, which is sufficient to make it invalid
     
  2. OpenPhil models theoretical AI progress as algorithms getting twice as efficient every 2-3 years. This is a bad model, because folks keep finding entirely new approaches. Specifically, it implies "we should be able to replicate any modern feat of deep learning performed in 2021, using techniques from before deep learning and around fifty times as much computing power." (See the arithmetic sketch after this list for where the factor of fifty comes from.)
     
  3. Some of OpenPhil’s parameters make it easy for the modelers to cheat, and make sure it comes up with an answer they like:
    “I was wondering what sort of tunable underdetermined parameters enabled your model to nail the psychologically overdetermined final figure of '30 years' so exactly.”
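
For reference, the "around fifty times" figure is just arithmetic on the stated parameter, taking 2.5 years as the midpoint of the 2-3 year range and a roughly 14-year window (the window length is my assumption for the sake of the calculation):

```python
# Where "around fifty times as much computing power" plausibly comes from:
halving_years = 2.5
years_elapsed = 2021 - 2007        # e.g. from just before the deep-learning era to 2021
print(2 ** (years_elapsed / halving_years))   # ~48.5, i.e. roughly 50x
```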

Can’t we use this as an upper bound? Maybe AGI will come sooner, but surely it won’t take longer than this estimate.

Eliezer thinks this is the same non-sequitur as Moravec’s. If you train a model big enough to emulate a brain, that doesn’t mean AGI will pop out at the end.

Other commentary: Eliezer mentions several times that he’s feeling old, tired, and unhealthy. He feels frustrated that researchers today repeat decades-old bad arguments. It takes him a lot of energy to rebut these claims

My thoughts:

I found this persuasive, but I also think it mischaracterized the OpenPhil model

My understanding is that OpenPhil didn't just estimate the number of neural-network parameters required to emulate a human brain. They used six different biological anchors, including the “evolution anchor”, which I find very useful for an upper bound.

Holden Karnofsky, who seems to put much more stock in the Bio Anchors model than Eliezer, explains the model really well here. But I was frustrated to see that the write-up on Holden’s blog gives 50% by 2090 (first graph) using the evolution anchor, while the same graph in the old calcs gives only 11%. Was this model tuned after seeing the results?

My conclusion: Bio Anchors is a terrible way to model when AGI will actually arrive. But I don’t agree with Eliezer’s dismissal of using Bio Anchors to get an upper bound, because I think the evolution anchor achieves this.

Replies from: SDM
comment by Sammy Martin (SDM) · 2021-12-02T18:15:35.292Z · LW(p) · GW(p)

Holden also mentions something a bit like Eliezer's criticism in his own write-up:

In particular, I think it's hard to rule out the possibility of ingenuity leading to transformative AI in some far more efficient way than the "brute-force" method contemplated here.

When Holden talks about 'ingenuity' methods, that seems consistent with Eliezer's:

They're not going to be taking your default-imagined approach algorithmically faster, they're going to be taking an algorithmically different approach that eats computing power in a different way than you imagine it being consumed.

I.e. if you wanted to fold this consideration into OpenPhil's estimate you'd have to do it by having a giant incredibly uncertain free-floating variable for 'speedup factor', because you'd be nonsensically trying to estimate the 'speed-up' to brain processing applied from using some completely non-Deep Learning or non-brainlike algorithm for intelligence. All your uncertainty just gets moved into that one factor, and you're back where you started.


It's possible that Eliezer is confident in this objection partly because of his 'core of generality' model of intelligence [LW(p) · GW(p)] - i.e. he's implicitly imagining enormous numbers of varied paths to improvement that end up practically in the same place, while 'stack more layers in a brainlike DL model' is just one of those paths (and one that probably won't even work), so he naturally thinks estimating the difficulty of this one path we definitely won't take (and which probably wouldn't work even if we did try it) out of the huge numbers of varied paths to generality is useless.

However, if you don't have this model [LW(p) · GW(p)], then perhaps you can be more confident that what we're likely to build will look at least somewhat like a compute-limited DL system and that these other paths will have to share some properties of this path. Relatedly, it's an implication of the model that there's some imaginable (and not e.g. galaxy sized) model we could build right now that would be an AGI, which I think Eliezer disputes?

comment by davidad · 2021-12-10T09:11:19.285Z · LW(p) · GW(p)

Heartened by a strong-upvote for my attempt at condensing Eliezer's object-level claim about timeline estimates [LW(p) · GW(p)], I shall now attempt condensing Eliezer's meta-level "core thing" [LW(p) · GW(p)].

  1. Certain epistemic approaches to arrive at object-level knowledge consistently look like a good source of grounding in reality, especially to people who are trying to be careful about epistemics, and yet such approaches' grounding in reality is consistently illusory.
  2. Specific examples mentioned in the post are "the outside view", "reference class forecasting", "maximum entropy", "the median of what I remember credible people saying", and, most importantly for the object-level but least importantly for the Core Thing, "Drake-equation-style approaches that cleanly represent the unknown of interest as a deterministic function of simpler-seeming variables".
  3. Specific non-examples are concrete experimental observations (falling objects, celestial motion). These have grounding in reality, but they don't tend to feel like they're "objective" in the same way, like a nexus that my beliefs are epistemically obligated to move toward—they just feel like a part of my map that isn't confused. (If experiments do start to feel "objective", one is then liable to mistake empirical frequencies for probabilities.)
  4. The illusion of grounding in reality doesn't have to be absolute (i.e. "this method reliably arrives at optimal beliefs") to be absolutely corrupting (e.g. "this prior, which is not relative to anything in particular, is uniquely privileged and central, even if the optimal beliefs are somewhere nearby-but-different").
  5. The "trick that never works", in general form, is to go looking in epistemology-space for some grounding in objective reality, which will systematically tend to lead you into these illusory traps.
  6. Instead of trying to repress your subjective ignorance by shackling it to something objective, you should:
    1. sublimate your subjective ignorance into quantitative probability measures,
    2. use those to make predictions and design experiments, and finally
    3. freely and openly absorb observations into your subjective mind and make subjective updates.

Eliezer doesn't seem to be saying the following [edit: at least not in my reading of this specific post], but I would like to add:

  1. Even just trying to make your updates objective (e.g. by using a computer to perform an exact Bayesian update) tends to go subtly wrong, because it can encourage you to replace your actual map with your map of your map, which is predictably less informative. Making a map of your map is another one of those techniques that seem to provide more grounding but do not actually.
  2. Calibration training is useful because your actual map is also predictably systematically bad at updating by default, and calibration training makes it better at doing this. Teaching the low-description-length principles of probability to your actual map-updating system is much more feasible (or at least more cost-effective) than emitting your actual map into a computationally realizable statistical model.
  3. Techniques that give the illusion of objectivity are usually not useless. But to use them effectively, you have to see through the illusion of objectivity, and treat their outputs as observations of what those techniques output, rather than as glimpses at the light of objective reasonableness.
    • In the particular example of forecasting AGI with biological anchors, Eliezer does this when he predicts (correctly, at least in the fictional dialogue) that the technique (perhaps especially when operated by people who are trying to be careful and objective) will output a median 30 years from the present.
      • It's only because Eliezer can predict the outcome, and finds it almost uncorrelated in his map from AGI's actual arrival, that he dismisses the estimate as useless.
      • This particular example as a vehicle for the Core Thing (if I'm right about what that is) has the advantage that the illusion of objectivity is especially illusory (at least from Eliezer's perspective), but the disadvantage that one can almost read Eliezer as condemning ever using Drake-equation-style approaches, or reference-class forecasting, or the principle of maximum entropy. But I think the general point is about how to undistortedly view the role of these kinds of things in one's epistemic journey, which in most cases doesn't actually exclude using them.
Replies from: RobbBB, adamShimi
comment by Rob Bensinger (RobbBB) · 2021-12-10T11:41:09.046Z · LW(p) · GW(p)

Making a map of your map is another one of those techniques that seem to provide more grounding but do not actually.

Sounds to me like one of the things Eliezer is pointing at in Hero Licensing [LW · GW]:

Look, thinking things like that is just not how the inside of my head is organized. There’s just the book I have in my head and the question of whether I can translate that image into reality. My mental world is about the book, not about me.

You do want to train your brain, and you want to understand your strengths and weaknesses. But dwelling on your biases at the expense of the object level usually isn't the best way to give your brain training data and tweak its performance.

I think there's a lesson here that, e.g., Scott Alexander hadn't fully internalized as of his 2017 Inadequate Equilibria review. There's a temptation to "go meta" and find some cleaner, more principled, more objective-sounding algorithm to follow than just "learn lots and lots of object-level facts so you can keep refining your model, learn some facts about your brain too so you can know how much to trust it in different domains, and just keep doing that".

But in fact there's no a priori reason to expect there to be a shortcut that lets you skip the messy unprincipled your-own-perspective-privileging Bayesian Updating thing. Going meta is just a tool in the toolbox, and it's risky to privilege it on 'sounds more objective/principled' grounds when there's neither a theoretical argument nor an empirical-track-record argument for expecting that approach to actually work.

Teaching the low-description-length principles of probability to your actual map-updating system is much more feasible (or at least more cost-effective) than emitting your actual map into a computationally realizable statistical model.

I think this is a good distillation of Eliezer's view (though I know you're just espousing your own view here). And of mine, for that matter. Quoting Hero Licensing [LW · GW] again:

STRANGER:  I believe the technical term for the methodology is “pulling numbers out of your ass.” It’s important to practice calibrating your ass numbers on cases where you’ll learn the correct answer shortly afterward. It’s also important that you learn the limits of ass numbers, and don’t make unrealistic demands on them by assigning multiple ass numbers to complicated conditional events.

ELIEZER:  I’d say I reached the estimate… by thinking about the object-level problem? By using my domain knowledge? By having already thought a lot about the problem so as to load many relevant aspects into my mind, then consulting my mind’s native-format probability judgment—with some prior practice at betting having already taught me a little about how to translate those native representations of uncertainty into 9:1 betting odds.

One framing I use is that there are two basic perspectives on rationality:

  • Prosthesis: Human brains are naturally bad at rationality, so we can identify external tools (and cognitive tech that's too simple and straightforward for us to misuse) and try to offload as much of our reasoning as possible onto those tools, so as to not have to put weight down (beyond the bare minimum necessary) on our own fallible judgment.
  • Strength training: There's a sense in which every human has a small AGI (or a bunch of AGIs) inside their brain. If we didn't have access to such capabilities, we wouldn't be able to do complicated 'planning and steering of the world into future states' at all.

    It's true that humans often behave 'irrationally', in the sense that we output actions based on simpler algorithms (e.g., reinforced habits and reflex behavior) that aren't doing the world-modeling or future-steering thing. But if we want to do better, we mostly shouldn't be leaning on weak reasoning tools like pocket calculators; we should be focusing our efforts on more reliably using (and providing better training data to) the AGI inside our brains. Nearly all of the action (especially in hard foresight-demanding domains like AI alignment) is in improving your inner AGI's judgment, intuitions, etc., not in outsourcing to things that are way less smart than an AGI.

In practice, of course, you should do some combination of the two. But I think a lot of the disagreements MIRI folks have with other people in the existential risk ecosystem are related to us falling on different parts of the prosthesis-to-strength-training spectrum.

Techniques that give the illusion of objectivity are usually not useless. But to use them effectively, you have to see through the illusion of objectivity, and treat their outputs as observations of what those techniques output, rather than as glimpses at the light of objective reasonableness.

Strong agreement. I think this is very well-put.

Replies from: elityre
comment by Eli Tyre (elityre) · 2022-05-12T02:24:02.018Z · LW(p) · GW(p)

This is good enough that it should be a top level post. 

The prosthesis vs. strength training dichotomy in particular seems extremely important.

comment by adamShimi · 2021-12-14T13:58:05.745Z · LW(p) · GW(p)

(My comment is quite critical, but I want to make it clear that I think doing this exercise is great and important, despite my disagreement with the result of the exercise ;) )

Having done the same exercise myself, I feel that you go far too meta here, and that by doing so you're losing most of the actually valuable meta insights of the post. I'm not necessarily saying that your interpretation doesn't fit what Yudkowsky says, but if the goal is to distill where Yudkowsky is coming from in this specific post, I feel like this comment fails.

The "trick that never works", in general form, is to go looking in epistemology-space for some grounding in objective reality, which will systematically tend to lead you into these illusory traps.

AFAIU, Yudkowsky is not at all arguing against searching for grounding in reality; he's arguing for a very specific grounding in reality that I've been calling deep knowledge in my post interpreting him on the topic. [AF · GW] He's arguing that there are ways to go beyond the agnosticism of Science (which is very similar to the agnosticism of the outside view and reference class forecasting) between hypotheses that haven't been falsified yet - ways that let you move towards the true answer despite the search space being far too large to tractably explore. (See that section [AF · GW] in particular of my post, where I go into a lot of detail about Yudkowsky's writing on that in the Sequences.)

I also feel like your interpretation conflates the errors that Humbali makes and the ones Simulated-OpenPhil makes, but they're different in my understanding:

  • Humbali keeps on criticising Yudkowsky's confidence, and is the representative of the bad uses of the outside view and reference class forecasting. Which is why a lot of the answers to Humbali focus on deep knowledge (which Yudkowsky refers to here with the extended metaphor of the rails), where it comes from, and why it lets you discard some hypotheses (which is the whole point).
  • Simulated-OpenPhil mostly defends its own approach and the claim that you can use biological anchors to reason about timelines if you do it carefully. The answer Yudkowsky gives is, IMO, that they don't have or give a way of linking the path of evolution through search space to the path of human research through search space, and as such more work and more uncertainty handling on evolution and the other biological anchors don't give you more information about AGI timelines. The only thing you get out of evolution and biological anchors is the few bits that Yudkowsky already integrates into his model (like the fact that humans will need less optimization power because they're smarter than evolution), which are not enough to predict timelines.

If I had to state it (and I will probably go into more detail in the post I'm currently writing), my interpretation is that the trick that never works is "using a biological analogy that isn't closely connected to how human research is optimizing for AGI". So the way of making a "perpetual motion machine" would be to explain why the specific path of evolution (or other anchors) is related to the path of human optimization, and derive stuff from this.

comment by Richard_Ngo (ricraz) · 2021-12-02T20:29:06.265Z · LW(p) · GW(p)

The two extracts from this post that I found most interesting/helpful:

The problem is that the resource gets consumed differently, so base-rate arguments from resource consumption end up utterly unhelpful in real life.  The human brain consumes around 20 watts of power.  Can we thereby conclude that an AGI should consume around 20 watts of power, and that, when technology advances to the point of being able to supply around 20 watts of power to computers, we'll get AGI?

I'm saying that Moravec's "argument from comparable resource consumption" must be in general invalid [LW · GW], because it Proves Too Much [LW · GW].  If it's in general valid to reason about comparable resource consumption, then it should be equally valid to reason from energy consumed as from computation consumed, and pick energy consumption instead to call the basis of your median estimate.

You say that AIs consume energy in a very different way from brains?  Well, they'll also consume computations in a very different way from brains!  The only difference between these two cases is that you know something about how humans eat food and break it down in their stomachs and convert it into ATP that gets consumed by neurons to pump ions back out of dendrites and axons, while computer chips consume electricity whose flow gets interrupted by transistors to transmit information.  Since you know anything whatsoever about how AGIs and humans consume energy, you can see that the consumption is so vastly different as to obviate all comparisons entirely.

You are ignorant of how the brain consumes computation, you are ignorant of how the first AGIs built would consume computation, but "an unknown key does not open an unknown lock" and these two ignorant distributions should not assert much internal correlation between them.

Even without knowing the specifics of how brains and future AGIs consume computing operations, you ought to be able to reason abstractly about a directional update that you would make, if you knew any specifics instead of none.  If you did know how both kinds of entity consumed computations, if you knew about specific machinery for human brains, and specific machinery for AGIs, you'd then be able to see the enormous vast specific differences between them, and go, "Wow, what a futile resource-consumption comparison to try to use for forecasting."

and

You can think of there as being two biological estimates to anchor on, not just one.  You can imagine there being a balance that shifts over time from "the computational cost for evolutionary biology to invent brains" to "the computational cost to run one biological brain".

In 1960, maybe, they knew so little about how brains worked that, if you gave them a hypercomputer, the cheapest way they could quickly get AGI out of the hypercomputer using just their current knowledge, would be to run a massive evolutionary tournament over computer programs until they found smart ones, using 10^43 operations.

Today, you know about gradient descent, which finds programs more efficiently than genetic hill-climbing does; so the balance of how much hypercomputation you'd need to use to get general intelligence using just your own personal knowledge, has shifted ten orders of magnitude away from the computational cost of evolutionary history and towards the lower bound of the computation used by one brain.  In the future, this balance will predictably swing even further towards Moravec's biological anchor, further away from Somebody on the Internet's biological anchor.


comment by davidad · 2021-12-09T18:25:55.350Z · LW(p) · GW(p)

I wrote a lengthy exegesis of Humbali's confusion around “maximum entropy” [AF(p) · GW(p)], which I decided ended up somewhere between a comment and a post in terms of quality, so I put it here in "Shortform" [AF(p) · GW(p)]. I'm new to contributing content on AF, so meta-level feedback about how best to use the different channels (commenting, shortform, posting) is welcome.

comment by paulfchristiano · 2021-12-07T05:47:00.397Z · LW(p) · GW(p)

This is a long post that I have only skimmed.

It seems like the ante, in support of any claim like "X forecasting method doesn't work [historically]," is to compare it to some other forecasting method on offer---whatever is being claimed as the default to be used in X's absence. (edited to add "historically")

It looks to me like historical forecasts that look like biological anchors have fared relatively well compared to the alternatives, but I could easily be moved by someone giving some evidence about what kind of methodology would have worked well or poorly, or what kinds of forecasts were actually being made at the time.

The methodological points in this post may be sound even if it's totally ungrounded in track record, but at that point the title is mostly misleading clickbait.

(My main complaint in this comment is "what's the alternative you are comparing to?" That said, the particular claims about Moravec's views also look fairly uncharitable to me, as Carl has pointed out and I've raised to Eliezer. It does not seem to me from reading Moravec's writing in 1988 like he expects TAI in 2010 rather than 2030. Maybe Eliezer is better at understanding what people are saying. But he often describes the views of people in 2020, whom I can actually talk to and check with, and he does not in fact do a good job.)

Replies from: habryka4
comment by habryka (habryka4) · 2021-12-07T06:27:09.725Z · LW(p) · GW(p)

The post feels like it's trying pretty hard to point towards an alternative forecasting method, though I also agree it's not fully succeeding at getting there. 

I feel like, de facto, the forecasting methodology of people who are actually good at forecasting doesn't usually strike me as low-inferential-distance, such that it would be obvious how to communicate the full methodology. My sense from talking to a number of superforecasters over the years is that they do pretty complicated things, and I don't feel like "a critique is only really valid if it provides a whole countermethodology" is a very productive standard for engaging with their takes. I feel like I've had lots of conversations of the type "X methodology doesn't work", without the other person being able to explain all the details of what they do instead, that were still valuable, helped me model the world, and said meaningful things. Usually the best they can do is something like "well, instead pay attention and build a model of these different factors", which feels like a bar Eliezer is definitely clearing in this post.

Replies from: paulfchristiano
comment by paulfchristiano · 2021-12-07T06:54:12.475Z · LW(p) · GW(p)

I think it's fine to say that you think something else is better without being able to precisely say what it is. I just think "the trick that never works" is an overstatement if you aren't providing evidence about whether it has worked, and that it's hard to provide such evidence without saying something about what you are comparing to.

(Like I said though, I just skimmed the post and it's possible it contains evidence or argument that I didn't catch.)

It's possible the action is in disagreements about Moravec's view rather than the lack of an alternative, but it's hard to say because it's unclear how good Eliezer thinks the alternative is. I think that looking at Moravec's view in hindsight it doesn't seem crazy at all, and e.g. I think it's more reasonable than Eliezer's views from 10 years later.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-12-02T18:52:58.054Z · LW(p) · GW(p)

I think this does not do justice to Ajeya's bio anchors model. Carl already said the important bits, but here are some more points:

But, if you insist on the error of anchoring on biology, you could perhaps do better by seeing a spectrum between two bad anchors.  This lets you notice a changing reality, at all, which is why I regard it as a helpful thing to say to you and not a pure persuasive superweapon of unsound argument.  Instead of just fixating on one bad anchor, the hybrid of biological anchoring with whatever knowledge you currently have about optimization, you can notice how reality seems to be shifting between two biological bad anchors over time, and so have an eye on the changing reality at all.  Your new estimate in terms of gradient descent is stepping away from evolutionary computation and toward the individual-brain estimate by ten orders of magnitude, using the fact that you now know a little more about optimization than natural selection knew; and now that you can see the change in reality over time, in terms of the two anchors, you can wonder if there are more shifts ahead.

This is exactly what the bio anchors framework is already doing? It has the lifetime anchor on one end, and the evolution anchor on the other end, and almost all probability mass is in between, and then it has a parameter for how that mass shifts leftwards over time as new ideas come along. I do agree that the halving-of-compute-costs-every-2.5-years estimate seems too slow to me; it seems like that's the rate of "normal incremental progress" but that when you account for the sort of really important ideas (or accumulations of ideas, or shifts in research direction towards more fruitful paths) that happen about once a decade, the rate should be faster than that. I think this because when I imagine what the field of AI looks like in 2040, I have a hard time believing it looks anything like the sort of paradigm the medium-horizon or long-horizon anchors are built around, with big neural nets trained by gradient descent etc. I think that instead something significantly better/more capable/more efficient will have been found by then. (And I think, unfortunately, that we don't really have much room for further improvement before we get to AGI! If, say, current methods have a 50% chance of working, then significantly-better-than-current-methods should bring our credence up to well over 50%.)

Realistically, though, I would not recommend eyeballing how much more knowledge you'd think you'd need to get even larger shifts, as some function of time, before that line crosses the hardware line.  Some researchers may already know Thielian secrets you do not, that take those researchers further toward the individual-brain computational cost (if you insist on seeing it that way).  That's the direction that economics rewards innovators for moving in, and you don't know everything the innovators know in their labs.
When big inventions finally hit the world as newspaper headlines, the people two years before that happens are often declaring it to be fifty years away; and others, of course, are declaring it to be two years away, fifty years before headlines.  Timing things is quite hard even when you think you are being clever; and cleverly having two biological anchors and eyeballing Reality's movement between them, is not the sort of cleverness that gives you good timing information in real life.
In real life, Reality goes off and does something else instead, and the Future does not look in that much detail like the futurists predicted.  In real life, we come back again to the same wiser-but-sadder conclusion given at the start, that in fact the Future is quite hard to foresee - especially when you are not on literally the world's leading edge of technical knowledge about it, but really even then.  If you don't think you know any Thielian secrets about timing, you should just figure that you need a general policy which doesn't get more than two years of warning, or not even that much if you aren't closely non-dismissively analyzing warning signs.

This seems true but changing the subject. Insofar as the subject is "what should our probability distribution over date-of-AGI-creation look like" then Ajeya's framework (broadly construed) is the right way to think about it IMO. Separately, we should worry that this will never let us predict with confidence that it is happening in X years, and thus we should be trying to have a general policy that lets us react quickly to e.g. two years of warning.

OpenPhil:  I don't understand how some of your reasoning could be internally consistent even on its own terms.  ... You can either say that our forecasted pathway to AGI or something very much like it would probably work in principle without requiring very much more computation than our uncertain model components take into account, meaning that the probability distribution provides a soft upper bound on reasonably-estimable arrival times, but that paradigm shifts will predictably provide an even faster way to do it before then.  That is, you could say that our estimate is both a soft upper bound and also a directional overestimate.  Or, you could say that our ignorance of how to create AI will consume more than one order-of-magnitude of increased computation cost above biology -
Eliezer:  Indeed, much as your whole proposal would supposedly cost ten trillion times the equivalent computation of the single human brain that earlier biologically-inspired estimates anchored on.
OpenPhil:  - in which case our 2050-centered distribution is not a good soft upper bound, but also not predictably a directional overestimate.  Don't you have to pick one or the other as a critique, there?

I think OpenPhil is totally right here. My own stance is that the 2050-centered distribution is a directional overestimate because e.g. the long-horizon anchor is a soft upper bound (in fact I think the medium-horizon anchor is a soft upper bound too, see Fun with +12 OOMs [LW · GW].)

Replies from: jacob_cannell, adamShimi
comment by jacob_cannell · 2021-12-02T22:55:23.393Z · LW(p) · GW(p)

It has the lifetime anchor on one end, and the evolution anchor on the other end, and almost all probability mass is in between, and then it has a parameter for how that mass shifts leftwards over time as new ideas come along.


This reminds me - what I would really like to see is a distillation of Ajeya's model into a post (with some pretty pictures) that explores this 'space in between' more deeply, grounded in detailed knowledge of both DL and neuroscience, and that tracks the evolutionary history of both, pairing and comparing milestones - starting with, say, perceptrons and jellyfish (or insert early equivalent) and moving all the way up to current SOTA agents vs (insert animals here).

That would allow us to establish a rough technological-evolution / biological-evolution speedup curve and project it forward - and most likely substantiate the case for short timelines (I predict).

comment by adamShimi · 2021-12-14T14:22:22.650Z · LW(p) · GW(p)

I do agree that the halving-of-compute-costs-every-2.5-years estimate seems too slow to me; it seems like that's the rate of "normal incremental progress" but that when you account for the sort of really important ideas (or accumulations of ideas, or shifts in research direction towards more fruitful paths) that happen about once a decade, the rate should be faster than that.

I don't think this is what Yudkowsky is saying at all in the post. Actually, I think he is saying the exact opposite: that the 2.5-year estimate is too fast as an estimate that is supposed to always hold. If I understand correctly, his point is that progress is significantly slower than that most of the time, except during the initial growth after paradigm shifts, when you're pushing as much compute as you can onto your new paradigm. (That being said, Yudkowsky seems to agree with you that this should make us directionally update towards AGI arriving in less time.)

My interpretation seems backed by this quote (and the fact that he's presenting these points as if they're clearly wrong):

Eliezer:  Backtesting this viewpoint on the previous history of computer science, it seems to me to assert that it should be possible to:

  • Train a pre-Transformer RNN/CNN-based model, not using any other techniques invented after 2017, to GPT-2 levels of performance, using only around 2x as much compute as GPT-2;
  • Play pro-level Go using 8-16 times as much computing power as AlphaGo, but only 2006 levels of technology.

[...]

Your model apparently suggests that we have gotten around 50 times more efficient at turning computation into intelligence since that time; so, we should be able to replicate any modern feat of deep learning performed in 2021, using techniques from before deep learning and around fifty times as much computing power.


This seems true but changing the subject. Insofar as the subject is "what should our probability distribution over date-of-AGI-creation look like" then Ajeya's framework (broadly construed) is the right way to think about it IMO. Separately, we should worry that this will never let us predict with confidence that it is happening in X years, and thus we should be trying to have a general policy that lets us react quickly to e.g. two years of warning.

I don't understand how Yudkowsky can be changing the subject when his subject has never been "probability distribution over date-of-AGI-creation" in the first place. His point, IMO, is that this is a bad question to ask, not because you wouldn't want the true answer if you could magically get it, but because we don't have and won't have even close to the amount of evidence needed to answer it non-trivially until 2 years before AGI (and maybe not even then, because you need to know the Thielian secrets). As such, to reach an answer of that type, you must contort the evidence and extract more bits of information than the analogies actually contain, which means that this is a recipe for saying nonsense.

(Note that I'm not arguing Yudkowsky is right, just that I think this is his point, and that your comment is missing it — might be wrong about all of those ^^)

I think OpenPhil is totally right here. My own stance is that the 2050-centered distribution is a directional overestimate because e.g. the long-horizon anchor is a soft upper bound (in fact I think the medium-horizon anchor is a soft upper bound too, see Fun with +12 OOMs [LW · GW].)

Here too this sounds like missing Yudkowsky's point, which is made in the paragraph just after your original quote:

Eliezer:  Mmm... there's some justice to that, now that I've come to write out this part of the dialogue.  Okay, let me revise my earlier stated opinion:  I think that your biological estimate is a trick that never works and, on its own terms, would tell us very little about AGI arrival times at all.  Separately, I think from my own model that your timeline distributions happen to be too long.

My interpretation is that he's saying that:

  • The model, and the whole approach, is a fundamentally bad and misguided way of thinking about these questions, which fails in the many ways he argues for earlier in the dialogue.
  • If he stops talking about whether the model is bad, and just looks at its output, then he thinks that's an overestimate compared to the output of his own model.
Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-12-14T16:29:58.354Z · LW(p) · GW(p)

Thanks for this comment (and the other comment below also).

I think we don't really disagree that much here. I may have just poorly communicated, slash maybe I'm objecting to the way Yudkowsky said things because I read it as implying things I disagree with.

I don't think this is what Yudkowsky is saying at all in the post. Actually, I think he is saying the exact opposite: that the 2.5-year estimate is too fast as an estimate that is supposed to always hold. If I understand correctly, his point is that progress is significantly slower than that most of the time, except during the initial growth after paradigm shifts, when you're pushing as much compute as you can onto your new paradigm. (That being said, Yudkowsky seems to agree with you that this should make us directionally update towards AGI arriving in less time.)

That's what I think too--normal incremental progress is probably slower than 2.5-year doubling, but there's also occasional breakthrough progress which is much faster, and it all balances out to a faster-than-2.5-year-doubling, but in such a way that makes it really hard to predict, because so much hangs on whether and when breakthroughs happen. I think I just miscommunicated.
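
As a toy illustration of the "balances out" claim (illustrative numbers only, not anyone's actual estimates): slow incremental progress plus one large jump per decade can still average out to something much faster than a 2.5-year doubling.

```python
import math

# Toy arithmetic: assumed slow incremental progress plus an assumed once-a-decade jump.
incremental_doubling_years = 4     # assumed "normal" progress: 2x every 4 years
breakthrough_factor = 10           # assumed gain from one big shift per decade
years = 10

total_gain = 2 ** (years / incremental_doubling_years) * breakthrough_factor
effective_doubling_years = years / math.log2(total_gain)
print(round(total_gain, 1), "x per decade ->",
      round(effective_doubling_years, 2), "year effective doubling time")
```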

Eliezer:  Mmm... there's some justice to that, now that I've come to write out this part of the dialogue.  Okay, let me revise my earlier stated opinion:  I think that your biological estimate is a trick that never works and, on its own terms, would tell us very little about AGI arrival times at all.  Separately, I think from my own model that your timeline distributions happen to be too long.
  • The model, and the whole approach, is a fundamentally bad and misguided way of thinking about these questions, which fails in the many ways he argues for earlier in the dialogue.
  • If he stops talking about whether the model is bad, and just looks at its output, then he thinks that's an overestimate compared to the output of his own model.

Here I think I share your interpretation of Yudkowsky; I just disagree with Yudkowsky. I agree on the second part; the model overestimates median TAI arrival time. But I disagree on the first part -- I think that having a probability distribution over when to expect TAI / AGI / AI-PONR etc. is pretty important/decision-relevant, e.g. for advising people on whether to go to grad school, or for deciding what sort of research project to undertake. (Perhaps Yudkowsky agrees with this much.) And I think that Ajeya's framework is the best framework I know of for getting that distribution. I think any reasonable distribution should be formed by Ajeya's framework, or some more complicated model that builds off of it (adding more bells and whistles such as e.g. a data-availability constraint or a probability-of-paradigm-shift mechanic.). Insofar as Yudkowsky was arguing against this, and saying that we need to throw out the whole model and start from scratch with a different model, I was not convinced. (Though maybe I need to reread the post and/or your steelman summary)

Replies from: adamShimi
comment by adamShimi · 2021-12-14T17:17:50.679Z · LW(p) · GW(p)

Here I think I share your interpretation of Yudkowsky; I just disagree with Yudkowsky. I agree on the second part; the model overestimates median TAI arrival time. But I disagree on the first part -- I think that having a probability distribution over when to expect TAI / AGI / AI-PONR etc. is pretty important/decision-relevant, e.g. for advising people on whether to go to grad school, or for deciding what sort of research project to undertake. (Perhaps Yudkowsky agrees with this much.) 

Hum, I would say Yudkowsky seems to agree with the value of a probability distribution for timelines.

(Quoting The Weak Inside View (2008) from the AI FOOM Debate)

So to me it seems “obvious” that my view of optimization is only strong enough to produce loose, qualitative conclusions, and that it can only be matched to its retrodiction of history, or wielded to produce future predictions, on the level of qualitative physics.

“Things should speed up here,” I could maybe say. But not “The doubling time of this exponential should be cut in half.”

I aspire to a deeper understanding of intelligence than this, mind you. But I’m not sure that even perfect Bayesian enlightenment would let me predict quantitatively how long it will take an AI to solve various problems in advance of it solving them. That might just rest on features of an unexplored solution space which I can’t guess in advance, even though I understand the process that searches.

On the other hand, my interpretation of Yudkowsky strongly disagrees with the second part of your paragraph:

And I think that Ajeya's framework is the best framework I know of for getting that distribution. I think any reasonable distribution should be formed by Ajeya's framework, or some more complicated model that builds off of it (adding more bells and whistles such as e.g. a data-availability constraint or a probability-of-paradigm-shift mechanic.). Insofar as Yudkowsky was arguing against this, and saying that we need to throw out the whole model and start from scratch with a different model, I was not convinced. (Though maybe I need to reread the post and/or your steelman summary)

So my interpretation of the text is that Yudkowsky says that you need to know how compute will be transformed into AGI in order to estimate timelines (then you can plug in your estimates for the compute), and that the default result of any approach which relies on biological analogies for that part will be spouting nonsense, because evolution and biology optimize in fundamentally different ways than human researchers do.

For each of the three examples, he goes into more detail about the way this is instantiated. My understanding of his criticism of Ajeya's model is that he disagrees that current deep learning algorithms alone are actually a recipe for turning compute into AGI, and so saying "we keep to current deep learning and estimate the required compute" doesn't make sense and doesn't solve the question of how to turn compute into AGI. (Note that this might be the place where you or someone defending Ajeya's model would want to disagree with Yudkowsky. I'm just pointing out that this is a more productive place to debate him, because that might actually make him change his mind — or change your mind if he convinces you.)

The more general argument (the reason why "the trick" doesn't work) is that if you actually have a way of transforming compute into AGI, that means you know how to build AGI. And if you do, you're very, very close to the end of the timeline.

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-12-14T18:04:02.579Z · LW(p) · GW(p)

I guess I would say: Ajeya's framework/model can incorporate this objection; this isn't a "get rid of the whole framework" objection but rather a "tweak the model in the following way" objection.

Like, I agree that it would be bad if everyone who used Ajeya's model had to put 100% of their probability mass into the six bio anchors she chose. That would be super misleading and biasing, and would ignore loads of other possible ways AGI might happen. But I don't think of this as a necessary part of Ajeya's model; when I use it, I throw out the six bio anchors and just directly input my probability distribution over OOMs of compute. My distribution is informed by the bio anchors, of course, but that's not the only thing that informs it.
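
For concreteness, here is a minimal sketch of what "directly input my probability distribution over OOMs of compute" can look like; the distribution and the compute-growth function below are illustrative placeholders, not my or Ajeya's actual numbers:

```python
# Minimal sketch: distribution over required effective training compute (in OOMs of FLOP),
# plus an assumed projection of available effective compute, gives a timeline CDF.

# P(required effective training compute = 10^k FLOP); must sum to 1 (made-up numbers).
required_oom_dist = {28: 0.10, 30: 0.25, 32: 0.30, 34: 0.20, 36: 0.10, 38: 0.05}

def available_effective_oom(year):
    """Assumed effective compute of the largest training run, in OOMs of FLOP:
    ~10^24 FLOP around 2020, gaining ~0.5 OOM/year from hardware, spending,
    and algorithmic progress combined (an assumption, not a measured trend)."""
    return 24 + 0.5 * (year - 2020)

print("year  P(enough effective compute by then)")
for year in range(2025, 2066, 5):
    p = sum(prob for oom, prob in required_oom_dist.items()
            if oom <= available_effective_oom(year))
    print(year, " ", round(p, 2))
```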

Replies from: adamShimi
comment by adamShimi · 2021-12-15T13:52:12.499Z · LW(p) · GW(p)

First, I want to clarify that I feel we're getting into a more interesting place, where there's a better chance that you might find a point that invalidates Yudkowsky's argument and thus convince him of the value of the model.

But it's also important to realize that IMO, Yudkowsky is not just saying that biological anchors are bad. The more general problem (which is also developed in this post) is that predicting the Future is really hard. In his own model of AGI timelines, the factor that is basically impossible to predict until you can make AGI is the "how much resources are needed to build AGI".

So saying "let's just throw away the biological anchors" doesn't evade the general counterargument that to predict timelines at all, you need information on "how much resources are needed to build AGI", and getting that information is incredibly hard. If you or Ajeya can point to actual evidence on that last question, then yeah, I expect Yudkowsky might well update on the validity of the timeline estimates.

But at the moment, in this thread, I see no argument like that.

comment by So8res · 2021-12-02T18:28:20.181Z · LW(p) · GW(p)

My take on the exercise:

Is Humbali right that generic uncertainty about maybe being wrong, without other extra premises, should increase the entropy of one's probability distribution over AGI, thereby moving out its median further away in time?

Short version: Nah. For example, if you were wrong by dint of failing to consider the right hypothesis, you can correct for it by considering predictable properties of the hypotheses you missed (even if you don't think you can correctly imagine the true research pathway or w/e in advance). And if you were wrong in your calculations of the quantities you did consider, correction will regress you towards your priors, which are simplicity-based rather than maxent.

Long version: Let's set aside for the moment the question of what the "correct" maxent distribution on AGI timelines is (which, as others have noted, depends a bit on how you dice up the space of possible years). I don't think this is where the action is, anyway.

Let's suppose that we're an aspiring Bayesian considering that we may have made some mistakes in our calculations. Where might those mistakes have been? Perhaps:

  1. We were mistaken about what we saw (and erroneously updated on observations that we did not make)?
  2. We were wrong in our calculations of quantities of the form P(e|H) (the likelihoods) or P(H) (the priors), or the multiplications thereof?
  3. We failed to consider a sufficiently wide space of hypotheses, in our efforts to complete our updating before the stars burn out?

Set aside for now that the correct answer is "it's #3, like we might stumble over #1 and #2 every so often but bounded reasoners are making mistake #3 day in and day out, it's obviously mostly #3", and take these one at a time:

Insofar as we were mistaken about what we saw, correcting our mistake should involve reverting an update (and then probably making a different update, because we saw something that we mistook, but set that aside). Reverting an update pushes us back towards our prior. This will often increase entropy, but not necessarily! (For example, if we thought we saw a counter-example to gravitation, that update might dramatically increase our posterior entropy, and reverting the update might revert us back to confident narrow predictions about phones falling.) Our prior is not a maxent prior but a simplicity prior (which is important if we ever want to learn anything at all).

Insofar as we were wrong in our calculations of various quantities, correcting our mistake depends on which direction we were wrong, and for which hypotheses. In practice, a reflectively stable reasoner shouldn't be able to predict the (magnitude-weighted) direction of their error in calculating P(e|H): if we know that we tend to overestimate that value when e is floobish, we can just bump down our estimate whenever e is floobish, until we stop believing such a thing (or, more intelligently, trace down the source of the systematic error and correct it, but I digress). I suppose we could imagine humbly acknowledging that we're imperfect at estimating quantities of the form P(e|H), and then driving all such estimates towards 1/n, where n is the number of possible observations? This doesn't seem like a very healthy way to think, but its effect is to again regress us towards our prior. Which, again, is a simplicity prior and not a maxent prior. (If instead we start what-iffing about whether we're wrong in our intuitive calculations that vaguely correspond to the P(H) quantities, and decide to try to make all our P(H) estimates more similar to each other regardless of H as a symbol of our virtuous self-doubt, then we start regressing towards maximum entropy. We correspondingly lose our ability to learn. And of course, if you're actually worried that you're wrong in your estimates of the prior probabilities, I recommend checking whether you think your P(H)-style estimates are too high or too low in specific instances, rather than driving all such estimates to uniformity. But also ¯\_(ツ)_/¯, I can't argue good priors into a rock.)

Insofar as we were wrong because we were failing to consider a sufficiently wide array of hypotheses, correcting our mistake depends on which hypotheses we're missing. Indeed, much of Eliezer's dialog seems to me like Eliezer trying to say "it's mistake #3 guys, it's always #3", plus "just as the hypothesis that we'll get AGI at 20 watts doesn't seem relevant because we know that the ways computers consume watts and the ways brains consume watts and they're radically different, so too can we predict that whatever the correct specific hypothesis for how the first human-attained AGIs consume compute, it will make the amount of compute that humans consume seem basically irrelevant." Like, if we don't get AGI till 2050 then we probably can't consider the correct specific research path, a la #3, but we can predict various properties of all plausible unvisualized paths, and adjust our current probabilities accordingly, in acknowledgement of our current #3-style errors.

In sum: accounting for wrongness should look less like saying "I'd better inject more entropy into my distributions", and more like asking "are my estimates of P(e|H) off in a predictable direction when e looks like this and H looks like that?". The former is more like sacrificing some of your hard-won information on the altar of the gods of modesty; the latter is more like considering the actual calculations you did and where the errors might reside in them. And even if you insist on sacrificing some of your information because maybe you did the calculations wrong, you should regress towards a simplicity prior rather than towards maximum entropy (which in practice looks like reaching for fewer and simpler-seeming deep regularities in the world, rather than pushing median AGI timelines out to the year 52,021), which is also how things will look if you think you're missing most of the relevant information. Though of course, your real mistake was #3, you're ~always committing mistake #3. And accounting for #3 in practice does tend to involve increasing your error bars until they are wide enough to include the sorts of curveballs that reality tends to throw at you. But the reason for widening your error bars there is to include more curveballs, not just to add entropy for modesty's sake. And you're allowed to think about all the predictable-in-advance properties of likely ballcurves even if you know you can't visualize-in-advance the specific curve that the ball will take.

In fact, Eliezer's argument reads to me like it's basically "look at these few and simple-seeming deep regularities in the world" plus a side-order of "the way reality will actually go is hard to visualize in advance, but we can still predict some likely properties of all the concrete hypotheses we're failing to visualize (which in this case invalidate biological anchors, and pull my timelines closer than 2051)", both of which seem to me like hallmarks of accounting for wrongness.

comment by davidad · 2021-12-09T20:59:09.349Z · LW(p) · GW(p)

In my view, the biological anchors and the Very Serious estimates derived therefrom are really useful for the following very narrow yet plausibly impactful purpose: persuading people, whose intuitive sense of biological anchors as lower bounds leads their AGI timeline to stretch out to 2100 or beyond, that they should really have shorter timelines. This is almost touched on in the dialogue where the OpenPhil character says “isn’t this at least a soft upper bound?” but Eliezer dismisses it as neither an upper nor a lower bound. I don’t disagree with Eliezer’s actual claim there, but I think he may be missing the value of putting well-researched soft upper bounds on a bunch of variables that generally well-read people often perceive as lower bounds on AGI’s arrival time—and guess to be much further away than they are. That holds even if those variables provide almost no relevant evidence to someone who already has the appropriate kind of uncertainty about AGI’s compute requirements as a function of humanity’s knowledge about AI.

From this perspective, aggregating all the biological anchors with mixture weights tuned to produce a distribution whose median matches the psychological plausibility of Platt’s Law is a way to make the report as a whole Overton-window-compatible, while enabling readers to widen their distribution enough to put substantial probability mass on sooner years than the Overton window would have permitted, or even to decide for themselves to down-weight the larger biological anchors and shift their entire distribution sooner.

Replies from: ESRogs
comment by ESRogs · 2021-12-09T21:46:18.243Z · LW(p) · GW(p)

In my view, the biological anchors and the Very Serious estimates derived therefrom are really useful for the following very narrow yet plausibly impactful purpose

I don't understand why it's not just useful directly. Saying that the numbers are not true upper or lower bounds seems like it's expecting way too much!

They're not even labeled as bounds (at least in the headline). They're supposed to be "anchors".

Suppose you'd never done the analysis to know how much compute a human brain uses, or how much compute all of evolution had used. Wouldn't this report be super useful to you?

Sure, it doesn't directly tell you when TAI is going to come, because there's a separate thing you don't know, which is how compute-efficient our systems are going to be compared to the human brain. And also that translation factor is changing with time. But surely that's another quantity we can have a distribution over.

If there's some quantity that we don't know the value of, but we have at least one way to estimate it using some other uncertain quantities, why is it not useful to reduce our uncertainty about some of those other quantities?
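As a toy illustration of that claim (every number below is a made-up placeholder, not a value from the report): if the overall estimate decomposes into an anchor plus a translation factor in log space, then reducing uncertainty about the anchor does shrink the total uncertainty, even while the translation factor stays very uncertain.

```python
# Monte Carlo sketch: total log10(compute requirement) = anchor + translation factor.
# Shrinking the anchor's uncertainty narrows the total even if the factor stays wide.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

sd_factor = 4.0                      # OOMs of uncertainty in the brain-vs-ML translation factor
for sd_anchor in (6.0, 2.0):         # before vs. after doing the biological-anchor homework
    anchor = rng.normal(30, sd_anchor, n)    # log10 of compute at brain-equivalent efficiency (hypothetical)
    factor = rng.normal(3, sd_factor, n)     # log10 of the efficiency penalty/bonus (hypothetical)
    total = anchor + factor
    lo, hi = np.percentile(total, [10, 90])
    print(f"anchor sd {sd_anchor} OOM -> 80% interval width {hi - lo:.1f} OOM")
```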

This seems like exactly the kind of thing superforecasters are supposed to do. Or that an Eliezer-informed Bayesian rationalist is supposed to do. Quantify your uncertainty. Don't be afraid to use a probability distribution. Don't throw away relevant information, but instead use it to reduce your uncertainty and update your probabilities.

If Eliezer's point is just that the report shouldn't be taken as the gospel truth of when AI is going to come, then fine. Or if he just wants to highlight that there's still uncertainty over the translation factor between the brain's compute-efficiency and our ML systems' compute-efficiency, then that seems like a good point too.

But I don't really understand the point of the rest of the article. If I wanted to have any idea at all when TAI might come, then Moravec's 1988 calculations regarding the human brain seem super interesting. And also Somebody on the Internet's 2006 calculation of how much compute evolution had used.

Either of them would be wrong to think that their number precisely pins down the date. But if you started out not knowing whether to expect AGI in one year or in 10,000 years, then it seems like learning the human brain number and the all-of-evolution number should radically reduce your uncertainty.

It still doesn't reduce your uncertainty all the way, because we still don't know the compute-efficiency translation factor. But who said it reduced uncertainty all the way? Not OpenPhil.

Replies from: davidad, ESRogs
comment by davidad · 2021-12-10T06:05:10.146Z · LW(p) · GW(p)

The main point of Eliezer’s ~20k words isn’t really what I want to defend, but I will say a few words about how I would constructively interpret it. I think Eliezer’s main claim is that he has an intuitive system-1 model of the inhomogeneous Poisson process that emits working AGI systems, and that this model isn’t informed by the compute equivalents of biological anchors, and that his lack of being informed by that isn’t a mistake. I’m not sure if he’s actually making the stronger claim that anyone whose model is informed by the biological anchors is making a mistake, but if so, I don’t agree. My own model is somewhat informed by biological anchors; it’s more informed by what the TAI report calls “subjective impressiveness extrapolation”, extrapolations on benchmark performance, and some vague sense of other point processes that emit AI winters and major breakthroughs. Someone who has total Knightian uncertainty and no intuitive models of how AGI comes about would surely do well to adopt OpenPhil’s distribution as a prior.
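For concreteness, here is roughly what such a model looks like formally, with a completely made-up hazard rate (nothing here is Eliezer's or OpenPhil's actual curve): an inhomogeneous Poisson process is just a time-varying rate λ(t), and "when does AGI arrive" is the first arrival time.

```python
# Sample the first arrival time of an inhomogeneous Poisson process with a
# hypothetical rate function, via a simple discretized thinning scheme.
import numpy as np

rng = np.random.default_rng(1)

def lam(year):
    # hypothetical hazard: negligible before 2025, then rising
    return 0.0 if year < 2025 else 0.02 * (year - 2025) ** 0.5

def sample_first_arrival(t0=2022, t_max=2150, dt=0.1):
    t = t0
    while t < t_max:
        if rng.random() < lam(t) * dt:   # probability of an arrival in this small slice
            return t
        t += dt
    return None  # no arrival before t_max

samples = [sample_first_arrival() for _ in range(10_000)]
arrived = [s for s in samples if s is not None]
print("median first-arrival year (toy rate):", np.median(arrived))
```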

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-12-10T06:25:27.616Z · LW(p) · GW(p)

(I'm not sure whether your summary captures Eliezer's view, but strong-upvoted for what strikes me as a reasonable attempt.)

comment by ESRogs · 2021-12-09T22:01:57.668Z · LW(p) · GW(p)

I suppose this kind of report is less useful to you the more you think the uncertainty lies in the compute-efficiency translation factor variable. If you think most of the orders of magnitude are in that value, you don't care so much about the biological anchors.

And maybe you're in that state if you think building AGI is just a matter of coming up with clever algorithms. But if you think there's just some as-yet undiscovered general reasoning [LW(p) · GW(p)] algorithm, and that's really the only thing that matters for AGI, then why are you at all impressed by increasingly capable AI systems that use more and more compute, like AlphaGo or GPT-3? It's (supposedly) not general reasoning, so why does it matter?

It seems to me like the compute-efficiency translation factor is just a perfectly reasonable non-mysterious quantity that we can also get information about and estimate. It's not going to be the same factor across all tasks, but it seems like we could at least get some idea of what quantities it plausibly could take on by looking at how much compute current (and old) systems are using to match human performance for various tasks, and looking at how that number varies across tasks and changes over time.
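A sketch of the kind of estimate being gestured at here (the data rows are invented purely for illustration; the real exercise would use actual benchmark results): fit a trend to the log ratio of ML compute to a brain-compute estimate for matched performance, across tasks and over time.

```python
# Fit a crude trend to how the ML-vs-brain compute ratio changes over time.
# Every row is a hypothetical stand-in, not a real measurement.
import numpy as np

# (year, log10 of compute-ratio for some task at human-matching performance)
observations = np.array([
    [2012, 5.0], [2015, 4.2], [2016, 3.8], [2018, 3.1], [2020, 2.7], [2021, 2.5],
])
years, log_ratio = observations[:, 0], observations[:, 1]

slope, intercept = np.polyfit(years, log_ratio, 1)
print(f"trend: {slope:.2f} OOM per year")                       # about -0.3 OOM/yr on this invented data
print(f"extrapolated log-ratio in 2030: {slope * 2030 + intercept:.1f}  (illustrative only)")
```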

I wouldn't expect such analysis to leave the translation factor so uncertain that our total uncertainty is concentrated so overwhelmingly in that parameter that the biological anchors become useless.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-12-02T18:56:32.921Z · LW(p) · GW(p)

I feel like a big crux is whether Platt's Law is true:

Eliezer:  I mean, in fact, part of my actual sense of indignation at this whole affair, is the way that Platt's law of strong AI forecasts - which was in the 1980s generalizing "thirty years" as the time that ends up sounding "reasonable" to would-be forecasters - is still exactly in effect for what ends up sounding "reasonable" to would-be futurists, in fricking 2020 while the air is filling up with AI smoke in the silence of nonexistent fire alarms.

Didn't AI Impacts look into this a while back? See e.g. this dataset. Below is one of the graphs:


[graph from the AI Impacts dataset of AI timeline predictions]

Replies from: matthew-barnett, Charlie Steiner, adamShimi
comment by Matthew Barnett (matthew-barnett) · 2021-12-03T20:03:23.248Z · LW(p) · GW(p)

It may help to visualize this graph with the line for Platt's Law drawn in.

Overall I find the law to be pretty much empirically validated, at least by the standards I'd expect from a half-in-jest Law of Prediction.
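For anyone who wants to check this themselves, the computation is tiny; the rows below are invented stand-ins, and the real check would use the linked AI Impacts dataset.

```python
# Platt's Law says the typical "strong AI" forecast lands ~30 years out from
# whenever it was made. Compute the forecast horizons and take the median.
import numpy as np

# (year prediction was made, predicted year of strong AI) -- hypothetical examples
predictions = np.array([
    [1995, 2030], [2000, 2025], [2005, 2040], [2010, 2045], [2016, 2045], [2020, 2052],
])
horizons = predictions[:, 1] - predictions[:, 0]
print("median horizon (years):", np.median(horizons))   # Platt's Law predicts ~30
```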

comment by Charlie Steiner · 2021-12-03T04:50:09.066Z · LW(p) · GW(p)

Wow, I'd forgotten about that prediction dataset! It seems like there's only even semi-decent data since 1994, but since then there does seem to be a plausible ~35-year median in the recorded points (even though, or perhaps because, the sampled distribution has been changing over time).

comment by adamShimi · 2021-12-14T14:28:05.841Z · LW(p) · GW(p)

Strongly disagree with this, to the extent that I think this is probably the least cruxy topic discussed in this post, and thus the comment is as wrong as is physically possible.

Remove Platt's law, and none of the actual arguments and meta-discussions changes. It's clearly a case of Yudkowsky going for the snappy "hey, see, even your new-and-smarter report makes exactly the same estimate predicted by a random psychological law" + his own frustration with the law still applying despite expected progress.

But once again, if Platt's law was so wrong that there was never in the history of the universe a single instance of people predicting strong AI and/or AGI in 30 years, this would have no influence whatsoever on the arguments in this post IMO.

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-12-14T16:13:02.757Z · LW(p) · GW(p)

Strongly disagree with this, to the extent that I think this is probably the least cruxy topic discussed in this post, and thus the comment is as wrong as is physically possible.

Hahaha ok, interesting! If you are right I'll take some pride in having achieved that distinction. ;)

I interpreted Yudkowsky as claiming that Ajeya's model had enough free parameters that it could be made to predict a wide range of things, and that what was actually driving the 30-year prediction was a bunch of implicit biases rather than reality. Platt's Law is evidence for this claim. If it were false and e.g. the typical timelines forecast was only 10 years out, or 60, then we would have less reason to think that implicit biases were driving Ajeya's choice of parameters. Of course, Yudkowsky also made other arguments besides this one... but this one seemed to be there, and it seemed fairly important to me.

It's entirely possible I am misconstruing Yudkowsky's argument... you did recently do a reconstruction, so you probably understand it better than me. Care to elaborate?

Replies from: adamShimi
comment by adamShimi · 2021-12-14T17:54:46.579Z · LW(p) · GW(p)

I do think you are misconstruing Yudkowsky's argument. I'm going to give evidence (all of which is relatively strong IMO) in order of "ease of checkability". So I'll start with something anyone can check in a couple of minutes, and close with the more general interpretation that requires rereading the post in detail.

Evidence 1: Yudkowsky flags Simulated-Eliezer as talking smack in the part you're mentioning

If I follow you correctly, your interpretation mostly comes from this part:

OpenPhil:  We did already consider that and try to take it into account: our model already includes a parameter for how algorithmic progress reduces hardware requirements.  It's not easy to graph as exactly as Moore's Law, as you say, but our best-guess estimate is that compute costs halve every 2-3 years.

Eliezer:  Oh, nice.  I was wondering what sort of tunable underdetermined parameters enabled your model to nail the psychologically overdetermined final figure of '30 years' so exactly.

OpenPhil:  Eliezer.

Note that this is one of the two times in this dialogue where Simulated-OpenPhil calls out Simulated-Eliezer. But remember that this whole dialogue was written by Yudkowsky! So he is himself flagging that this particular answer is a quip. Simulated-Eliezer doesn't re-explain it the way he does most of his insulting points to Humbali; instead Simulated-Eliezer goes for a completely different explanation in the next answer.

Evidence 2: Platt's law is barely mentioned in the whole dialogue

"Platt" is used 6-times in the 20k words piece. "30 years" is used 8 times (basically at the same place where "Platt" is used").

Evidence 3: Humbali spends far more time discussing and justifying the "30 years" timeline than Simulated-OpenPhil does. And Humbali is the strawman character, whereas Simulated-OpenPhil actually tries to discuss and to understand what Simulated-Eliezer is saying.

Evidence 4: There is an alternative interpretation that takes into account the full text and doesn't use Platt's law at all: see this comment [LW(p) · GW(p)] on your other thread for my current best version of that explanation.

Evidence 5: The idea that Yudkowsky's whole criticism relies on a purely empirical and superficial similarity goes contrary to everything that I extracted from his writing in my recent post [AF · GW], and also to all the time he spends here discussing deep knowledge and the need for an underlying model.

 

So my opinion is that Platt's law is completely superfluous here, and is present here only because it gives a way of pointing to the ridiculousness of some estimates, and because to Yudkowsky it probably means that people are not even making interesting new mistakes but just the same mistakes over and over again. I think discussing it in this post doesn't add much, and weakens the post significantly, as it allows readings like yours, Daniel, which miss the actual point.

comment by Rob Bensinger (RobbBB) · 2021-12-01T22:49:57.933Z · LW(p) · GW(p)

(This post was partly written as a follow-up to Eliezer's conversations with Paul [? · GW] and Ajeya [? · GW], so I've inserted it into the conversations sequence [? · GW].)

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-12-02T00:04:31.311Z · LW(p) · GW(p)

It does fit well there, but I think it was more inspired by the person I met who thought I was being way too arrogant by not updating in the direction of OpenPhil's timeline estimates to the extent I was uncertain.

comment by Ben Pace (Benito) · 2021-12-12T22:56:35.479Z · LW(p) · GW(p)

For reference, here is a 2004 post by Moravec, that’s helpfully short, containing his account of his own predictions: https://www.frc.ri.cmu.edu/~hpm/project.archive/robot.papers/2004/Predictions.html

comment by RomanS · 2021-12-02T10:33:58.265Z · LW(p) · GW(p)

I consider naming particular years to be a cognitively harmful sort of activity; I have refrained from trying to translate my brain's native intuitions about this into probabilities, for fear that my verbalized probabilities will be stupider than my intuitions if I try to put weight on them.  What feelings I do have, I worry may be unwise to voice; AGI timelines, in my own experience, are not great for one's mental health, and I worry that other people seem to have weaker immune systems than even my own.  

The following metaphor helped me to understand Eliezer's point:

Imagine you're forced to play the game of Russian roulette with the following rules:

  • every year on the day of Thanksgiving, you must put a revolver muzzle against your head and pull the trigger
  • the number of rounds in the revolver is a convoluted probabilistic function of various technological and societal factors (like the total knowledge in the field of AI, the number of TPUs owned by Google, etc).

How should you allocate your resources between the following two options? 

  • Option A: try to calculate the year of your death, by estimating the values for the technological and societal factors
  • Option B: try to escape the game.

It is clear that in this game, option A is almost useless.

(but not entirely useless, as your escape plans might depend on the timeline).

comment by jacob_cannell · 2021-12-02T01:42:04.246Z · LW(p) · GW(p)

Which brings me to the second line of very obvious-seeming reasoning that converges upon the same conclusion - that it is in principle possible to build an AGI much more computationally efficient than a human brain - namely that biology is simply not that efficient, and especially when it comes to huge complicated things that it has started doing relatively recently.

 

Biological cells are computers which must copy bits to copy DNA.  So we can ask biology - how much energy do cells use to copy each base pair?  Seems they use just 4 ATP per base pair, or 1 ATP/bit, and thus within an OOM of the 'Landauer bound'.  Which is more impressive if you consider that the typically quoted 'Landauer bound' of kT ln 2 is overly optimistic, as it only applies when the error probability is 50% or the computation takes infinitely long.  Useful computation requires at least somewhat more speed than 'infinitely slow' and reliability higher than none.
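A quick back-of-envelope on the scale of these numbers (textbook-ish constants; the ATP-per-bit accounting above is the parent comment's own):

```python
# Compare the free energy of one ATP hydrolysis (~30 kJ/mol, approximate) to
# the kT ln 2 Landauer bound at T ~= 300 K.
import math

k_B = 1.380649e-23                              # J/K
T = 300.0                                       # K
landauer_J = k_B * T * math.log(2)              # ~2.9e-21 J per bit erased
atp_J = 30e3 / 6.022e23                         # ~5e-20 J per ATP hydrolyzed

print(f"Landauer bound: {landauer_J:.2e} J/bit")
print(f"ATP hydrolysis: {atp_J:.2e} J")
print(f"ratio: {atp_J / landauer_J:.0f}x")      # roughly 17x, i.e. on the order of 10x the bound
```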

Brains have to pump thousands of ions in and out of each stretch of axon and dendrite, in order to restore their ability to fire another fast neural spike.  The result is that the brain's computation is something like half a million times less efficient than the thermodynamic limit for its temperature - so around two millionths as efficient as ATP synthase. 

The fact that cell replication operates near the Landauer bound already suggests a prior that neurons should be efficient.

The Landauer bound at room temp is ~ 0.03 eV.  Given that an electron is something of an obvious minimal unit for an electrical computer, the Landauer bound can be thought of as a 30 mV thermal noise barrier. Digital computers operate roughly 30x that for speed and reliability, but if you look at neuron swing voltages it's clear they are operating only ~3x or so above the noise voltage (optimizing hard for energy efficiency at the expense of speed).

Assuming 1 Hz * 10^14 synapses / 10 watts = 10^13 synaptic ops per joule, or about 10^7 electron charges per op at the Landauer voltage.  A synaptic op is at least doing analog signal multiplication, which requires far more energy/charges than a simple binary op - IIRC you need roughly 2^2K carriers and thus erasures to have precision equivalent to K-bit digital, so an 8-bit synaptic op (which IIRC is near where the energy costs of digital and analog multiplication intersect) would be 10^4 or 10^5 carriers. I had a relevant ref for this, can't find it now (but think you can derive it from the binomial distribution when the std dev/precision is equivalent to 2^-8).
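Spelling out that arithmetic (all inputs are the round numbers above, not independent estimates; kT ln 2 at 300 K comes out nearer ~18 mV, with ~30 mV being the more conservative barrier used above):

```python
# Reconstruct the synaptic-op energy budget from the parent comment's figures.
import math

k_B, T, q = 1.380649e-23, 300.0, 1.602e-19
landauer_V = k_B * T * math.log(2) / q            # ~18 mV per electron at kT ln 2

synapses = 1e14
rate_hz = 1.0
brain_watts = 10.0
joules_per_synop = brain_watts / (synapses * rate_hz)        # ~1e-13 J per synaptic op

electrons_per_synop = joules_per_synop / (landauer_V * q)    # ~1e7, matching the parent's figure
analog_8bit_carriers = 2 ** (2 * 8)                          # ~6.5e4, the 2^(2K) rule of thumb cited

print(f"Landauer voltage: {landauer_V * 1e3:.0f} mV")
print(f"energy per synaptic op: {joules_per_synop:.0e} J")
print(f"electron charges per op at that voltage: {electrons_per_synop:.1e}")
print(f"carriers for 8-bit-equivalent analog multiply: {analog_8bit_carriers:.1e}")
```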

Now most synapses are probably smaller/cheaper than 8-bit equiv, but most of the energy cost involved is in pushing data down irreversible dissipative wires (just as true in the brain as it is in a GPU).  Now add in the additional costs of synaptic adjustment machinery for learning, cell maintenance tax, dendritic computation, etc., and it's suddenly not at all clear that the brain is really far from being energy efficient.

As further and final Bayesian evidence, Moore's Law is running out of steam as we run up against the limits of physics (for irreversible computation using irreversible wires) - and at best is just catching up to brain energy efficiency.

Replies from: RomanS, adele-lopez-1, jacob_cannell, Charlie Steiner
comment by RomanS · 2021-12-02T13:19:31.114Z · LW(p) · GW(p)

In general, efficiency at the level of logic gates doesn't translate into efficiency at the CPU level. 

For example, imagine you're tasked to correctly identify the faces of your classmates from 1 billion photos of random human faces. If you fail to identify a face, you must re-do the job.

Your neurons are perfectly efficient. You have highly optimized face-recognition circuitry.

Yet you'll consume more energy on the task than, say, Apple M1 CPU:

  • you'll waste at least 30% of your time on sleep
  • your highly optimized face-recognition circuitry is still rather inefficient
  • you'll make mistakes, forcing you to re-do the job
  • you can't hold your attention long enough to complete such a task, even if your life depends on it

Even if the human brain is efficient on the level of neural circuits, it is unlikely to be the most efficient vessel for a general intelligence. 

In general, high-level biological designs are a crappy mess, mostly made of kludgy bugfixes to previous dirty hacks, which were made to fix other kludgy bugfixes (an example). 

And the newer the design, the crappier it is. For example, compare:

  • the almost perfect DNA replication (optimized for ~10^9 years)
  • the faulty and biased human brain (optimized for ~10^5 years)

With the exception of a few molecular-level designs, I expect that human engineers can produce much more efficient solutions than natural selection, in some cases - orders of magnitude more efficient. 

Replies from: jacob_cannell
comment by jacob_cannell · 2021-12-02T22:39:44.019Z · LW(p) · GW(p)

Human technology is rarely more efficient than biology along the quantitative dimensions that are important to biology, but human technology is not limited to building out of evolved wetware nanobots and can instead employ high-energy manufacturing to create ultra-durable materials that then enable very high-energy-density solutions. Our flying machines may not compete with birds in energy efficiency, but they harness power densities on a completely different scale from those available to biology.  Basically the same applies to computers vs brains.  AGI will outcompete human brains by brute scale, speed, and power rather than energy efficiency.

The human brain is just a scaled-up primate brain, which is just a tweaked, more scalable mammal brain; and mammal brains all share the same general architecture - which is closer to ~10^8 years old. It is hardly 'faulty and biased' - bias is in the mind [LW · GW].

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-12-03T03:52:18.830Z · LW(p) · GW(p)

A lot of the advantage of human technology is due to human technology figuring out how to use covalent bonds and metallic bonds, where biology sticks to ionic bonds and proteins held together by van der Waals forces (static cling, basically).  This doesn't fit into your paradigm; it's just biology mucking around in a part of the design space easily accessible to mutation error, while humans work in a much more powerful design space because they can move around using abstract cognition.

Replies from: jacob_cannell
comment by jacob_cannell · 2021-12-03T17:30:22.913Z · LW(p) · GW(p)

Covalent/metallic vs ionic bonds implement the high-energy-density vs wetware-constrained distinction I was referring to, so we are mostly in agreement; that is my paradigm. But the evidence is pretty clear that "ionic bond and protein" tech does approach the Landauer limit - at least for protein computation. As for the brain, end-of-Moore's-Law high-end chip research is very much neuromorphic (memristor crossbars, etc.), and some designs do claim perhaps 10x or so greater synop/J than the brain (roughly), but they aren't built yet. So if you had wider uncertainty in your claim, with most mass in the region of the brain being 1 to 3 OOMs from the limit, I probably wouldn't have commented, but for me that one claim distracted from your larger valid points.

comment by Adele Lopez (adele-lopez-1) · 2021-12-02T02:16:21.442Z · LW(p) · GW(p)

You're missing the point!

Your arguments apply mostly toward arguing that brains are optimized for energy efficiency, but the important quantity in question is computational efficiency! You even admit that neurons are "optimizing hard for energy efficiency at the expense of speed", but don't seem to have noticed that this fact makes almost everything else you said completely irrelevant!

Replies from: jacob_cannell
comment by jacob_cannell · 2021-12-02T02:28:05.977Z · LW(p) · GW(p)

The point of my comment (from my perspective) was to focus very specifically on a few claims about biology/brains that I found questionable - relevant because the OP specifically was using energy as an efficiency metric. 

It's relevant because energy efficiency is one of the standard key measures of low level hardware substrate computational efficiency.

At a higher level, if you are talking about overall efficiency for some complex task, well then software/algorithm efficiency is obviously super important, which is a more complex subject.  And there are other low-level metrics of importance as well, such as feature size, speed, etc.

So what did you mean by computational efficiency?

Replies from: Vaniver
comment by Vaniver · 2021-12-03T17:48:34.070Z · LW(p) · GW(p)

FWIW I agree; that bit also rang hollow to me--my sense was also that neurons are basically as energy-efficient as you can get--but by "computational efficiency" one means something like "amount of energy expended to achieve a computational result."

For example, imagine multiplying two four-digit numbers in your head vs. in a calculator. Each transistor operation in the calculator will be much more expensive than each neuron spike, however the calculator needs many fewer transistor operations than the brain needs neuron spikes, because the calculator is optimized to efficiently compute those sorts of multiplications whereas the brain needs to expensively emulate the calculator. Overall the calculator will spend fewer joules than the brain will. 

comment by jacob_cannell · 2021-12-02T01:43:35.361Z · LW(p) · GW(p)

All that being said - yes there is reversible computation, but it appears to be a much harder, longer tech path (so probably not until after AGI).

comment by Charlie Steiner · 2021-12-03T05:24:08.141Z · LW(p) · GW(p)

This was super interesting. 

I don't think you can directly compare brain voltage to the Landauer limit, because brains operate chemically, so we also care about differences in chemical potential (e.g. of sodium vs potassium, which are importantly segregated across cell membranes even though both have the same charge). To really illustrate this, we might imagine information-processing biology that uses no electrical charges, only signalling via gradients of electrically-neutral chemicals. I think this raises the total potential relative to Landauer and cuts down the number of molecules we should estimate as transported per signal.

Replies from: jacob_cannell
comment by jacob_cannell · 2021-12-03T17:38:53.186Z · LW(p) · GW(p)

Neuron computation is electro-chemical, through voltage-gated ion channels. If the voltage is at or below the Landauer voltage, then ion motion through the gate is pure noise. As the voltage climbs above the Landauer limit, you start to get meaningful probabilistic state transitions (error rate below 50%) in reasonable time; you can then implement analog computation using many such unreliable carriers, reducing error/noise via the central limit theorem (binomial averaging).

'Pure' chemical computation is protein machinery.  Biology evolved voltage-based signaling for high-speed, longer-distance communication/computation.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-12-02T17:55:16.447Z · LW(p) · GW(p)

I'm especially keen to hear responses to this point:

Eliezer:  Backtesting this viewpoint on the previous history of computer science, it seems to me to assert that it should be possible to:
Train a pre-Transformer RNN/CNN-based model, not using any other techniques invented after 2017, to GPT-2 levels of performance, using only around 2x as much compute as GPT-2;
Play pro-level Go using 8-16 times as much computing power as AlphaGo, but only 2006 levels of technology.
...
Your model apparently suggests that we have gotten around 50 times more efficient at turning computation into intelligence since that time; so, we should be able to replicate any modern feat of deep learning performed in 2021, using techniques from before deep learning and around fifty times as much computing power.
OpenPhil:  No, that's totally not what our viewpoint says when you backfit it to past reality.  Our model does a great job of retrodicting past reality.

My guess is that Ajeya / OpenPhil would say "The halving-in-costs every 2.5 years is on average, not for everything. Of course there are going to be plenty of things for which algorithmic progress has been much faster. There are also things for which algorithmic progress has been much slower. And we didn't pull 2.5 out of our ass, we got it from fitting to past data."

This seems to rebut the specific point EY made but also seems to support his more general skepticism about this method. What we care about is algorithmic progress relevant to AGI or APS-AI, and if that could be orders of magnitude faster or slower than halving every 2.5 years...

Replies from: ajeya-cotra
comment by Ajeya Cotra (ajeya-cotra) · 2021-12-04T02:56:33.480Z · LW(p) · GW(p)

The definition of "year Y compute requirements" is complicated in a kind of crucial way here, to attempt to a) account for the fact that you can't take any amount of compute and turn it into a solution for some task literally instantly, while b) capturing that there still seems to be a meaningful notion of "the compute you need to do some task is decreasing over time." I go into it in this section of part 1.

First we start with the "year Y technical difficulty of task T:"

  • In year Y, imagine a largeish team of good researchers (e.g. the size of AlphaGo's team) is embarking on a dedicated project to solve task T.
  • They get an amount of $ D dumped on them, which could be more $ than exists in the whole world, like 10 quadrillion or whatever.
  • With a few years of dedicated effort (e.g. 2-5), plus whatever fungible resources they could buy with D dollars (e.g. compute, data, and low-skilled human labor), can that team of researchers produce a program that solves task T? Here we assume that the fungible resources are infinitely available if you pay, so e.g. if you pay a quadrillion dollars you can get an amount of compute that is (FLOP/$ in year Y) * (1 quadrillion), even though we obviously don't have that many computers.

And the "technical difficulty of task T in year Y" is how big D is for the best plan that the researchers can come up with in that time. What I wrote in the doc was:

The price of the bundle of resources that it would take to implement the cheapest solution to T that researchers could have readily come up with by year Y, given the CS field’s understanding of algorithms and techniques at that time.

And then you have "year Y compute requirements," which is whatever amount of compute they'd buy with whatever portion of D dollars they spend on compute.

This definition is convoluted, which isn't ideal, but after thinking about it for ~10 hours it was the best I could do to balance a) and b) above.

With all that said, I actually do think that the team of good researchers could have gotten GPT-level perf with somewhat more compute a couple years ago, and AlphaGo-level perf with significantly more compute several years ago. I'm not sure exactly what the ratio would be, but I don't think it's many OOMs.

The thing you said about it being an average with a lot of spread is also true. I think a better version of the model would have probability distributions over the algorithmic progress, hardware progress, and spend parameters; I didn't put that in because the focus of the report was estimating the 2020 compute requirements distribution. I did try some different values for those parameters in my aggressive and conservative estimates but in retrospect the spread was not wide enough on those.

Replies from: daniel-kokotajlo
comment by Rafael Harth (sil-ver) · 2021-12-02T18:16:42.770Z · LW(p) · GW(p)

Is Humbali right that generic uncertainty about maybe being wrong, without other extra premises, should increase the entropy of one's probability distribution over AGI, thereby moving out its median further away in time?

My answer to this is twofold:

First, no update whatsoever should take place because a probability distribution already expresses uncertainty, and there's no mechanism by which the uncertainty increased. Adele Lopez independently (and earlier) came up with the same answer [LW(p) · GW(p)].

Second, if there were an update -- say EY learned "one of the steps used in my model was wrong" -- this should indeed change the distribution. However, it should change it toward the prior distribution. It's completely unclear what the prior distribution is, but there is no rule whatsoever that says "more entropy = more prior-y", as shown by the fact that a uniform distribution over the next N years (for any fixed N) has extremely high entropy yet makes a ludicrously confident prediction, namely that AGI arrives within that window.
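A toy numeric version of that last point (the carving of outcomes here is deliberately crude, which is itself part of the point): higher entropy does not track "more modest".

```python
# Outcomes: AGI in year 1..50 from now, or "later than 50 years" as one lump.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -(p * np.log(p)).sum()

confident_window = np.append(np.full(50, 1 / 50), 0.0)  # certain AGI arrives within 50 years
hedged = np.append(np.full(50, 0.5 / 50), 0.5)          # 50% within 50 years, 50% "later"

print("entropy, certain-within-window:", round(entropy(confident_window), 2))  # ~3.91
print("entropy, hedged               :", round(entropy(hedged), 2))            # ~2.65
```

The distribution that is certain AGI arrives within the window has the higher entropy; and the comparison flips if you carve the "later" lump into enough individual years, which is exactly the measure-dependence issue discussed elsewhere in this thread.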

See also Information Charts [LW · GW] (second chapter). Being under-confident/losing confidence does not have to shift your probability toward the 50% mark; it shifts it toward the prior from wherever it was before, and the prior can be literally any probability. If it were universally held that AGI happens in 5 years, then this could be the prior, and updating downward on EY's gears-level model would update the probability toward quicker timelines.

comment by Adele Lopez (adele-lopez-1) · 2021-12-02T01:47:13.837Z · LW(p) · GW(p)

Going to try answering this one:

Humbali: I feel surprised that I should have to explain this to somebody who supposedly knows probability theory. If you put higher probabilities on AGI arriving in the years before 2050, then, on average, you're concentrating more probability into each year that AGI might possibly arrive, than OpenPhil does. Your probability distribution has lower entropy. We can literally just calculate out that part, if you don't believe me. So to the extent that you're wrong, it should shift your probability distributions in the direction of maximum entropy.

[Is Humbali right that generic uncertainty about maybe being wrong, without other extra premises, should increase the entropy of one's probability distribution over AGI, thereby moving out its median further away in time?]

The uncertainty must already be "priced into" your probability distribution. So your distribution and hence your median shouldn't shift at all, unless you actually observe new relevant evidence of course.

Replies from: khafra
comment by khafra · 2021-12-02T13:14:09.923Z · LW(p) · GW(p)

The answer I came up with, before reading, is that the proper maxent distribution obviously isn't uniform over every Planck interval from here until protons decay; it's also obviously not a Gaussian with a midpoint halfway to when protons decay. But the next obvious answer is a truncated normal distribution. And that is not a thought conducive to sleeping well.
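For concreteness, here is what that looks like with some made-up parameters (the standard deviation, as the reply below asks, is exactly the free parameter doing all the work):

```python
# A left-truncated normal over "years from now", with entirely hypothetical parameters.
from scipy.stats import truncnorm

mu, sigma = 30.0, 40.0                   # hypothetical: mean 30 years out, sd 40 years
a = (0 - mu) / sigma                     # truncate at 0: AGI can't arrive in the past
dist = truncnorm(a, float("inf"), loc=mu, scale=sigma)

print("median years to AGI:", round(dist.median(), 1))   # ~41.5 on these made-up numbers
print("P(within 20 years) :", round(dist.cdf(20), 2))    # ~0.23
```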

Replies from: wunan
comment by wunan · 2021-12-02T22:44:25.529Z · LW(p) · GW(p)

If it's a normal distribution, what's the standard deviation?

comment by adamShimi · 2023-01-10T20:47:03.133Z · LW(p) · GW(p)

In many ways, this post is frustrating to read. It isn't straightforward, it needlessly insults people, and it mixes irrelevant details with the key ideas.

And yet, as with many of Eliezer's posts, its key points are right.

What this post does is uncover the main epistemological mistakes made by almost everyone trying their hands at figuring out timelines. Among others, there are:

  • Taking arbitrary guesses within a set of options that you don't have enough evidence to separate
  • Piling arbitrary assumption on arbitrary assumption, leading to completely uninformative outputs
  • Comparing biological processes to human engineering in terms of speed, without noticing that the optimization path is the key variable (and the big uncertainty)
  • Forcing the prediction to fit within a massively limited set of distributions, biasing it towards easy-to-think-about distributions rather than representative ones.

Before reading this post I was already dubious of most timeline work, but this crystallized many of my objections and issues with this line of work.

So I got a lot out of this post. And I expect that many people would if they spent the time I took to analyze it in detail. But I don't expect most people to do so, and so am ambivalent on whether this post should be included in the final selection.

comment by a gently pricked vein (strangepoop) · 2021-12-08T08:28:37.743Z · LW(p) · GW(p)

The "shut up"s and "please stop"s are jarring.

Definitely not, for example, norms to espouse in argumentation (and tbf nowhere does this post claim to be a model for argument, except maybe implicitly under some circumstances).

Yet there's something to it.

There's a game of Chicken arising out of the shared responsibility to generate (counter)arguments. If Eliezer commits to Straight, ie. refuses to instantiate the core argument over and over again (either explicitly, by saying "you need to come up with the generator" or implicitly, by refusing to engage with a "please stop."), then the other will be incentivized to Swerve, ie. put some effort into coming up with their own arguments and thereby stumble upon the generator.

This isn't my preferred way of coordinating on games of Chicken, since it is somewhat violent and not really coordination. My preferred way is to proportionately share the price of anarchy, which can be loosely estimated with some honest explicitness. But that's what (part of) this post is, a very explicit presentation of the consequences!

So I recoil less. It feels inviting instead, about a real human issue in reasoning. And bold, given all the possible ways to mischaracterize it as "Eliezer says 'shut up' to quantitative models because he has a pet theory about AGI doom".

But is this an important caveat to the fifth virtue, at least in simulated dialogue? That remains open for me.

comment by lc · 2021-12-14T13:35:09.834Z · LW(p) · GW(p)

Only semi-related, but I really dislike the pretend-argument rhetorical format. Public arguments are mostly competitive whether you like it or not. Whatever pedagogical benefits there are to explaining ideas as fake dialogue, your "simulations" of the actual words of real people you disagree with are immensely distracting. Throughout this post my attention is constantly interrupted by how socially awkward it is to write and publish an imaginary script in which you dunk on your intellectual opponents.

comment by p.b. · 2021-12-02T22:39:31.238Z · LW(p) · GW(p)

Biological anchors that focus on compute make sense insofar as the arrival of AGI is mostly a function of available compute. Which is the case if the algorithm is relatively simple and compute is the main ingredient of intelligence. 

Was it sensible to make that assumption in 1988? Maybe not. 

Is it sensible to make that assumption today? Well, during the last ten years a class of simple algorithms has made huge strides in all kinds of tasks that were "human-only" just a few years back. 

Furthermore, in many cases the performance of these algorithms scales very nicely with compute. 

To me that is the justification for taking biological anchors for AGI much more seriously this time around. 

comment by Ruby · 2021-12-08T19:53:43.449Z · LW(p) · GW(p)

Curated. Many times over the years I've seen analogies from biology used to produce estimates about AI timelines. This is the most thoroughly-argued case I've seen against them. While I believe some find the format uncomfortable, I'm personally glad to see Eliezer expressing his beliefs as he feels them, and think this is worth reading for anyone interested in predicting how AI will play out in coming years.

For those short on time, I recommend this summary [LW(p) · GW(p)] by Grant Demaree.

comment by Lavrov Andrey (lavrov-andrey) · 2022-09-03T23:25:26.860Z · LW(p) · GW(p)

I'm very new to Less Wrong in general, and to Eliezer's writing in particular, so I have a newbie question.

any more than you've ever argued that "we have to take AGI risk seriously even if there's only a tiny chance of it" or similar crazy things that other people hallucinate you arguing.

just like how people who helpfully try to defend MIRI by saying "Well, but even if there's a tiny chance..." are not thereby making their epistemic sins into mine.

I've read AGI Ruin: A List of Lethalities, and I legitimately have no idea what is wrong with "we have to take AGI risk seriously even if there's only a tiny chance of it". What is wrong with it? If anything, this seems like something I would say if I had to explain the gist of AGI Ruin: A List of Lethalities to someone else very briefly and using very few words.

The fact that I have absolutely no clue what is wrong with it probably means that I'm still very far from understanding anything about AGI and Eliezer's position.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2022-09-07T23:29:38.215Z · LW(p) · GW(p)

List of Lethalities isn't telling you "There's a small chance of this."  It's saying, "This will kill us.  We're all walking dead.  I'm sorry."

Replies from: lavrov-andrey
comment by Lavrov Andrey (lavrov-andrey) · 2022-09-08T18:57:38.232Z · LW(p) · GW(p)

Ok, thank you for the clarification!

comment by Ben Pace (Benito) · 2021-12-07T05:35:02.240Z · LW(p) · GW(p)

For indeed in a case like this, one first backs up and asks oneself "Is Humbali right or not?" and not "How can I prove Humbali wrong?"

Gonna write up some of my thoughts here without reading on, and post them (also without reading on).

I don’t get why Humbali’s objection has not already been ‘priced in’. Eliezer has a bunch of models and info and his gut puts the timeline at before 2050. “What if you’re mistaken about everything” is surely an argument Eliezer has already considered, so I think it’s already priced into the prediction. You’re not allowed to just repeatedly use that argument until such a time as a person is maximally uncertain. (Nor are you allowed to keep using it until the person starts to agree with the position of the person in the room with more prestige.)

I also think this bit is blatantly over-the-top (to the extent of being a bit heavy-handed on Eliezer's part):

“Humbali:  Okay, so you're more confident about your AGI beliefs, and OpenPhil is less confident.  Therefore, to the extent that you might be wrong, the world is going to look more like OpenPhil's forecasts of how the future will probably look, like world GDP doubling over four years before the first time it doubles over one year, and so on.”

“Maximum-entropy” and “OpenPhil’s forecasts” are not the same distribution. It is not clear to me that “OpenPhil’s forecasts” look closer to maximum-entropy than EY’s. I imagine OpenPhil’s forecasts have a lot less on shorter amounts of time (e.g. ~5 years). 

(And potentially put less on much longer amounts of time? Not sure here, but I do think that “smoothly follows the graph” tends to predict fewer strange things that would knock human civilization back 100 years as well as things that would bring AGI to us in 5 years.)

In fact, as I understand it from the essay above, “OpenPhil’s forecasts” involve taking maximum-entropy and then updating heavily on a number that isn’t relevant. I don’t know that I expect this to be more accurate than EY’s gut given a bunch of observations, so updating ‘toward it’ given uncertainty is wrong.

I don’t have a detailed knowledge of probability theory, and if you gave me a bunch of university exam questions using maximum-entropy distributions I’d quite likely fail to answer them correctly. It’s definitely on the table that Humbali knows it better than me, so I have tried to not have my arguments depend on any technical details. Something about my points might be wrong nonetheless because Humbali understands something I don’t.

Replies from: Benito
comment by Ben Pace (Benito) · 2021-12-07T05:42:57.487Z · LW(p) · GW(p)

Hmm, alas, stopped reading too soon.

Is Humbali right that generic uncertainty about maybe being wrong, without other extra premises, should increase the entropy of one's probability distribution over AGI, thereby moving out its median further away in time?

I'll add a quick answer: my gut says technically true, but that mostly I should just look at the arguments because they provide more weight than the prior. Strong evidence is common [LW · GW]. It seems plausible to me that the prior over 'number of years away' should make me predict it's more like 10 trillion years away or something, but that getting to observe the humans and the industrial revolution has already moved me to "likely in the next one thousand years" such that remembering this prior isn't very informative any more.

My answer: technically true but practically irrelevant.

comment by calef · 2021-12-02T05:22:01.479Z · LW(p) · GW(p)

By the standards of “we will have a general intelligence”, Moravec is wrong, but by the standards of “computers will be able to do anything humans can do”, Moravec’s timeline seems somewhat uncontroversially prescient? For essentially any task for which we can define a measurable success metric, we more or less* know how to fashion a function approximator that’s as good as or better than a human.

*I’ll freely admit that this is moving the goalposts, but there’s a slow, boring path to “AGI” where we completely automate the pipeline for “generate a function approximator that is good at [task]”. The tasks that we don’t yet know how to do this for are increasingly occupying the narrow space of [requires simulating social dynamics of other humans], which, just on computational complexity grounds, may be significantly harder than [become superhuman at all narrowly defined tasks].

Relatedly, do you consider [function approximators for basically everything becoming better with time] to also fail to be a good predictor of AGI timelines for the same reasons that compute-based estimates fail?

Replies from: Eliezer_Yudkowsky, Pattern
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-12-05T05:32:01.032Z · LW(p) · GW(p)

Relatedly, do you consider [function approximators for basically everything becoming better with time] to also fail to be a good predictor of AGI timelines for the same reasons that compute-based estimates fail?

Obviously yes, unless you can take the metrics on which your graphs show steady progress and really actually locate AGI on them instead of just tossing out a shot-in-the-dark biological analogy to locate AGI on them.

comment by Pattern · 2021-12-04T17:25:56.567Z · LW(p) · GW(p)

Relatedly, do you consider [function approximators for basically everything becoming better with time] to also fail to be a good predictor of AGI timelines for the same reasons that compute-based estimates fail?

Past commentary by EY seems to consider this to be 'AI alarms' or 'the room is filling up with smoke but there's no fire alarm'.

comment by TekhneMakre · 2021-12-02T03:52:51.483Z · LW(p) · GW(p)

I have calculated the number of computer operations used by evolution to evolve the human brain - searching through organisms with increasing brain size - by adding up all the computations that were done by any brains before modern humans appeared. It comes out to 10^43 computer operations. AGI isn't coming any time soon!

And yet, because your reasoning contains the word "biological", it is just as invalid and unhelpful as Moravec's original prediction.

I agree that the conclusion about AGI not coming soon is invalid, so the following isn't exactly responding to what you say. But: ISTM the evolution thing is somewhat qualitatively different from Moravec or Stack More Layers, in that it softly upper bounds the uncertainty about the algorithmic knowledge needed to create AGI. IDK how easy it would be to implement an evolution that spits out AGI, but that difficulty seems like it should be less conceptually uncertain than the difficulty of understanding enough about AGI to do something more clever with less compute. Like, we could extrapolate out 3 OOMs of compute/$ per decade to get an upper bound: very probably AGI before 2150-ish, if Moore's law continues. Not very certain, or helpful if you already think AGI is very likely soon-ish, but it has nonzero content.
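A back-of-envelope version of that extrapolation, with every input loudly flagged as an assumption except the 10^43 figure quoted above:

```python
# When could a fixed budget buy the ~1e43 ops of the "evolution anchor", if
# compute/$ kept improving by 3 OOMs per decade? All inputs besides the anchor
# are placeholders, not estimates.
import math

evolution_anchor_ops = 1e43
flop_per_dollar_2021 = 1e17          # assumed, order-of-magnitude
budget_dollars = 1e9                 # assumed willingness to spend
oom_per_decade = 3.0

affordable_now = flop_per_dollar_2021 * budget_dollars            # ~1e26 ops today
oom_gap = math.log10(evolution_anchor_ops / affordable_now)       # ~17 OOMs short
years_needed = oom_gap / oom_per_decade * 10
print(f"~{oom_gap:.0f} OOMs short; crossover around {2021 + years_needed:.0f}")
```

On these particular assumptions the crossover lands decades before 2150, which is where the slack in "very probably before 2150-ish" comes from.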

Replies from: Veedrac, Lanrian
comment by Veedrac · 2021-12-02T17:11:11.762Z · LW(p) · GW(p)

Like, we could extrapolate out 3 OOMs of compute/$ per decade to get an upper bound: very probably AGI before 2150-ish, if Moore's law continues.

Projecting Moore's Law to continue for 130 years more is almost surely incorrect. An upper bound that is conditional on that happening seems devoid of any actual predictive power. If we approach that level of computational power prior to AGI, it will almost surely be through some other mechanism than Moore's Law, and so would be arbitrarily detached from that timeline.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-12-02T17:18:34.179Z · LW(p) · GW(p)

Seems right, IDK. But still, that's a different kind of uncertainty than uncertainty about, like, the shape of algorithm-space.

Replies from: Veedrac
comment by Veedrac · 2021-12-02T17:31:07.739Z · LW(p) · GW(p)

Well Eliezer did explicitly state that “it was, predictably, a directional overestimate”. His concern was that it is a useless estimate, not that it didn't roughly bound the amount of computation required.

comment by Lukas Finnveden (Lanrian) · 2021-12-02T11:58:16.485Z · LW(p) · GW(p)

+1. I will also venture a guess that:

OpenPhil: Well, search by evolutionary biology is more costly than training by gradient descent, so in hindsight, it was an overestimate. Are you claiming this was predictable in foresight instead of hindsight?

is a strawman. I expect that the 2006 equivalent of OpenPhil would have recognised the evolutionary anchor as a soft upper bound. And I expect current OpenPhil to understand perfectly well the reasons why this was predictable in foresight.

comment by TekhneMakre · 2021-12-02T03:05:40.295Z · LW(p) · GW(p)

(I'm taking the tack that "you might be wrong" isn't just already accounted for in your distributions, and you're now considering a generic update on "you might be wrong".)

so you're more confident about your AGI beliefs, and OpenPhil is less confident. Therefore, to the extent that you might be wrong, the world is going to look more like OpenPhil's forecasts of how the future will probably look

Informally, this is simply wrong: the specificity in OpenPhil's forecasts is some other specificity added to some hypothetical max-entropy distribution, and it can be a totally different sort of specificity than yours (rather than simply a less confident version of yours).

Formally: It's true that if you have a distribution P, and then update on "I might be wrong about the stuff that generated this distribution" to the distribution P', then P' should be higher entropy than P; so P' will be more similar, in the narrow sense of having higher entropy, to other distributions Q whose entropy is higher than P's. That doesn't mean P' will be more similar than P, in terms of what it says will happen, to some other higher-entropy distribution Q. You could increase the entropy of P by spreading its mass over more outcomes that Q thinks are impossible; this would make P' further from Q than P is from Q, on natural measures of distance, e.g. KL-divergence (quasi-metric) or earth-mover or whatever. (For the other direction of KL divergence, you could have P reallocate mass away from areas Q thinks are likely; this would be natural if P and Q semi-agreed on a likely outcome, so that P' is more agnostic and has higher expected surprise according to Q. We can simultaneously have KL(P,Q) < KL(P',Q) and KL(Q,P) < KL(Q,P').)

(Also I think for basically any random variable X we can have |E_P(X) - E_Q(X)| < |E_P'(X) - E_Q(X)| for all degrees of wrongness giving P' from P.)
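A small numeric check of the KL point above (all three distributions are invented for illustration):

```python
# Raising P's entropy by spreading mass onto outcomes Q considers nearly
# impossible moves P *away* from Q in KL terms.
import numpy as np

def H(p):   # Shannon entropy, nats
    return -(p * np.log(p)).sum()

def KL(p, q):
    return (p * np.log(p / q)).sum()

Q       = np.array([0.70, 0.29, 0.01])   # "OpenPhil's" distribution over 3 coarse outcomes
P       = np.array([0.60, 0.39, 0.01])   # yours
P_prime = np.array([0.45, 0.30, 0.25])   # yours after "injecting entropy" onto outcome 3

print("entropy:", H(P).round(3), "->", H(P_prime).round(3))   # goes up
print("KL(P||Q)  =", KL(P, Q).round(3))                       # ~0.02
print("KL(P'||Q) =", KL(P_prime, Q).round(3))                 # ~0.62: further from Q
```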

If you put higher probabilities on AGI arriving in the years before 2050, then, on average, you're concentrating more probability into each year that AGI might possibly arrive, than OpenPhil does.

This is true for years before 2050, but not necessarily for years after 2050, if your distribution e.g. has a thick tail and OpenPhil's has a thin tail. It's true for all years if both of your distributions are just constant probabilities in each year, and maybe for some other similar kinds of families.

Your probability distribution has lower entropy [than OpenPhil's].

Not true in general, by the above. (It's true that Eliezer's distribution for "AGI before OpenPhil's median, yea/nay?" has lower entropy than OpenPhil's, but that would be true for any two distributions with different medians!)

So to the extent that you're wrong, it should shift your probability distributions in the direction of maximum entropy.

This seems right. (Which might be away from OpenPhil's distribution.) The update from P to P' looks like mixing in some less-specific prior. It's hard to say what it should be; it's supposed to be maximum-entropy given some background information, but IDK what the right way is to put a maximum entropy distribution on the space of years (for one thing, it's non-compact; for another, the indistinguishability of years that could give a uniform distribution or a Poisson distribution seems pretty dubious, and I'm not sure what to do if there's not a clear symmetry to fall back on). So I'm not even sure that the median should go up!

Is Humbali right that generic uncertainty about maybe being wrong, without other extra premises, should increase the entropy of one's probability distribution over AGI,

Yes.

(I mean, if what you think you're maybe wrong about, is specifically some arguments that previously updated you to be less confident than some so-called "maximum"-entropy distribution, then you'd decrease your entropy when you put more weight on being wrong. This isn't generic wrongness, since it's failing to doubt the assumptions that went into the "maximum"-entropy distribution, which apparently you can coherently doubt, since previously some arguments left you to fall back on some other higher-entropy distribution based on weaker assumptions. But I guess it could look like you were supposed to fall back on the lower-entropy distribution, if that felt like the "background".)

thereby moving out its median further away in time?

Not necessarily. It depends what the max-entropy distribution looks like, i.e. what assumptions you're falling back on if you're wrong.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-12-02T03:37:13.042Z · LW(p) · GW(p)

Now having read the rest of the essay... I guess "maximum entropy" is just straight up confusing if you don't insert the "...given assumptions XYZ". Otherwise it sounds like there's such a thing as "the maximum-entropy distribution", which doesn't exist: you have to cut up the possible worlds somehow, and different ways of cutting them up produce different uniform distributions. (Or in the continuous case, you have to choose a measure in order to do integration, and that measure contains just as much information as a probability distribution; the uniform measure says that all years are the same, but you could also say all orders of magnitude of time since the Big Bang are the same, or something else.) So how you cut up possible worlds changes the uniform distribution, i.e. the maximum entropy distribution. So the assumptions that go into how you cut up the worlds are determining your maximum entropy distribution.
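A toy version of the measure-dependence point (the 1-to-1000-year window is arbitrary, purely for illustration):

```python
# "Maximum entropy" over years vs. over orders of magnitude of time gives very
# different "uninformative" answers, even on the same window.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

uniform_in_years = rng.uniform(1, 1000, n)
uniform_in_log_years = 10 ** rng.uniform(0, 3, n)   # uniform in log10(years), same 1..1000 window

print("median, uniform over years       :", round(float(np.median(uniform_in_years))))     # ~500
print("median, uniform over log10(years):", round(float(np.median(uniform_in_log_years)))) # ~32
```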

Replies from: TekhneMakre
comment by TekhneMakre · 2021-12-02T12:23:49.154Z · LW(p) · GW(p)

Hold on, I guess this actually means that for a natural interpretation of "entropy" in "generic uncertainty about maybe being wrong, without other extra premises, should increase the entropy of one's probability distribution over AGI," that statement is actually false. If by "entropy" we mean "entropy according to the uniform measure", it's false. What we should really mean is entropy according to one's maximum entropy distribution (as the background measure), in which case the statement is true.
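
For concreteness, one standard way to cash out "entropy according to a background measure" $\mu$ (written here for the discrete case) is as the negative KL divergence to $\mu$:

$$H_\mu(p) \;=\; -\sum_i p_i \log \frac{p_i}{\mu_i} \;=\; -D_{\mathrm{KL}}(p \,\|\, \mu)$$

This is maximized (at zero) exactly when $p = \mu$, so on this reading "increase the entropy" literally means "move toward whichever fallback distribution $\mu$ you'd retreat to if your specific arguments were wrong", which is the sense in which the statement comes out true.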

comment by TurnTrout · 2021-12-02T00:26:14.233Z · LW(p) · GW(p)

OK, I'll bite on EY's exercise for the reader, on refuting this "what-if":

Humbali:  Then here's one way that the minimum computational requirements for general intelligence could be higher than Moravec's argument for the human brain.  Since, after all, we only have one existence proof that general intelligence is possible at all, namely the human brain.  Perhaps there's no way to get general intelligence in a computer except by simulating the brain neurotransmitter-by-neurotransmitter.  In that case you'd need a lot more computing operations per second than you'd get by calculating the number of potential spikes flowing around the brain!  What if it's true?  How can you know?

Let's step back and consider what kind of artifact the brain is. The human brain was "found" by evolution via a selection process over a rather limited amount of time (between our most recent clearly-dumb ancestor, and anatomically modern humans). We have a local optimization process which optimizes over a relatively short timescale. This process found a brain which implements a generally intelligent algorithm.

In high-dimensional non-convex optimization, we have a way to describe algorithms found by a small amount of local optimization: "not even close to optimal." (Humans aren't even at a local optimum for inclusive-genetic-fitness due to our being mesa-optimizers.) But if the brain's algorithm isn't optimal, it trivially can't be the only algorithm that can produce general intelligence. Indeed, I would expect the fact that evolution found our algorithm at all to indicate that there were many possible such algorithms. 

There are many generally intelligent algorithms, and our brain only implements one, and it's just not going to be true that all of the others—or even the ones most likely to be discovered by AI researchers—are only implementable using (simulated) neurotransmitters.

Replies from: TAG
comment by TAG · 2021-12-04T16:50:50.100Z · LW(p) · GW(p)

There's no strong reason to think the brain does everything with a single algorithm.

Replies from: Pattern
comment by Pattern · 2021-12-04T17:22:41.168Z · LW(p) · GW(p)
In high-dimensional non-convex optimization, we have a way to describe algorithms found by a small amount of local optimization: "not even close to optimal."

Does this extend to 'a bunch of algorithms together'? (I.e. how does 'the brain does not do everything with a single algorithm' affect optimality?)

comment by CronoDAS · 2021-12-13T22:01:24.178Z · LW(p) · GW(p)

My counterargument to Humbali would go like this: "Suppose I tell you I've already taken 'you might be wrong' into account. If you ask me to do it again, then you can just do the same thing to my more uncertain estimate - I'd end up in an infinite regress, and the argument would become a statement that no matter how uncertain you are, you should be more uncertain than that. And that is ridiculous. So I'm going to take 'you might be wrong' into account only once. Which I already have. So shut up."

comment by Greg C (greg-colbourn) · 2021-12-07T11:46:25.405Z · LW(p) · GW(p)

Eliezer has short timelines, yet thinks that the current ML paradigm isn’t likely to be the final paradigm. Does this mean that he has some idea of a potential next paradigm? (Which he is, for obvious reasons, not talking about, but presumably expects other researchers to uncover soon, if they don’t already have an idea). Or is it that somehow the recent surprising progress within the ML paradigm (AlphaGo, AlphaFold, GPT3 etc) makes it more likely that a new paradigm that is even more algorithmically efficient is likely to emerge soon? (If the latter, I don’t see the connection).

Replies from: ann-brown, greg-colbourn
comment by Ann (ann-brown) · 2021-12-07T13:52:29.625Z · LW(p) · GW(p)

My reading was less that 'this is unlikely to be the final paradigm' and more that 'a paradigm shift is likely within the 30 years roughly estimated for this to be the final paradigm', and presumably most paradigm shifts would give the field more progress to catch on to, rather than less. With no prior knowledge of what that paradigm shift would be - maybe we manage to capture the 'soul' of a crow through emulation and infuse it into our starting points, or something similarly odd; maybe it is simple and obvious math in retrospect.

Replies from: greg-colbourn
comment by Greg C (greg-colbourn) · 2021-12-07T16:29:50.163Z · LW(p) · GW(p)

Ok, but Eliezer is saying BOTH that his timelines are short (significantly less than 30 years) AND that he thinks ML isn't likely to be the final paradigm (judging not just from this conversation, but from the other, real, ones in this sequence [? · GW]).

Replies from: ann-brown
comment by Ann (ann-brown) · 2021-12-07T17:52:40.792Z · LW(p) · GW(p)

ML being the final paradigm would mean it would have to get 'to the end' before the next paradigm arrives; the next paradigm will probably happen within 30 years; and whatever the next paradigm is will be more impressive than the ML paradigm in some way - modest or dramatic. The ML paradigm is already pretty impressive, so anything notably more impressive than getting better at it seems likely to feel like a pretty sharp climb in capability.

comment by Greg C (greg-colbourn) · 2021-12-07T11:49:56.414Z · LW(p) · GW(p)

I note that mixture-of-experts is referred to as the kind of thing that in principle could shorten timelines, but in practice isn't likely to. Intuitively, and naively from neuroscience (different areas of the brain used for different things), it seems that mixture-of-experts should have a lot of potential, so I would like to see more detail on exactly why it isn't a threat.

comment by Dach · 2021-12-04T18:50:10.949Z · LW(p) · GW(p)

Is Humbali right that generic uncertainty about maybe being wrong, without other extra premises, should increase the entropy of one's probability distribution over AGI, thereby moving out its median further away in time?

Writing my response in advance of reading the answer, for fun.

One thought is that this argument fails to give accurate updates to other people. Almost 100% of people would give AGI medians much further away than what I think is reasonable, and if this method wants to be a generally useful method for getting better guesses by recognizing your uncertainty then it needs to push them towards shorter timelines, to whatever degree I trust short timelines. 

In fact, this argument seems to only be useful for people whose AGI timelines are shorter than whatever the true timeline ends up being. If this were a real comment I would say this revealed behavior was unsurprising because the argument was generated to argue someone towards longer timelines and thus I couldn't trust it to give reality-aligned answers.

It strikes me that such a system probably doesn't exist? At the very least, I don't know how to turn my "generic uncertainty about maybe being wrong, without other extra premises" into anything. You need to actually exert intelligence, actually study the subject matter, to get better probability distributions. Suppose I have a random number generator that I think gives 0 10% of the time and 1 90% of the time. I can't improve this without exerting my intelligence - I can't just shift towards 50% 0 and 50% 1 with no further evidence. That would rely on the assumption that my uncertainty signals I'm biased away from the prior of 50% 0 and 50% 1, which is completely arbitrary.

Note that if you have reason to think your guess is biased away from the prior, you can just shift in the direction of the prior. In this case, if you think you're too confident relative to a random distribution over all years, which basically means you think your timeline is too short, you can shift in the direction of a random distribution over all years. 

In this context, you can't get better AGI estimates by just flattening over years. You need to actually leverage intelligence to discern reality. There are no magical "be smarter" functions that take the form of flattening your probability distribution at the end.
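
To put a number on the random-number-generator example above (expected log scores, in bits; the candidate forecasts are arbitrary): flattening toward 50/50 only improves your expected score if the truth really is flatter than your current guess, which is exactly the extra assumption being smuggled in.

```python
# Expected log loss (average surprisal, in bits) of a forecast when
# outcomes actually follow `truth`. Lower is better.
import math

def expected_log_loss(truth, forecast):
    return -sum(p * math.log2(q) for p, q in zip(truth, forecast))

guess   = (0.1, 0.9)   # the forecast you actually believe
hedged  = (0.3, 0.7)   # shifted halfway toward uniform "just in case"
uniform = (0.5, 0.5)

for truth in [(0.1, 0.9), (0.5, 0.5)]:
    print(truth, [round(expected_log_loss(truth, q), 3)
                  for q in (guess, hedged, uniform)])
# If the truth is (0.1, 0.9), hedging and flattening both hurt;
# only if the truth is (0.5, 0.5) does flattening help.
```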

comment by Gurkenglas · 2021-12-02T19:16:03.766Z · LW(p) · GW(p)

My answer to the exercise, thought through before reading the remainder of the post but written down after seeing others do the same:

There is more than one direction of higher entropy to take, not necessarily towards OpenPhil's distribution. Also, entropy is relative to a measure and the default formula privileges the Lebesgue measure. Instead of calculating entropy from the probabilities for buckets 2022, 2023, 2024, ..., why not calculate it for buckets 3000-infinity, 2100-3000, 2025-2099, 2023-2024, ...?
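
A tiny illustration of how much the bucket choice matters (the coarse buckets roughly follow the ones above; the two toy distributions are arbitrary): distribution A looks flatter than B under per-year buckets and sharper than B under the coarse buckets, so "move toward higher entropy" points in different directions depending on the carving.

```python
# "Which distribution has more entropy?" depends on the buckets.
import math

def entropy_bits(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A: uniform over the 30 calendar years 2023..2052
# B: 1/4 each on the years 2023, 2025, 2100, 3001
per_year_A = [1 / 30] * 30
per_year_B = [1 / 4] * 4

# Coarse buckets: 2023-2024, 2025-2099, 2100-2999, 3000+
coarse_A = [2 / 30, 28 / 30, 0, 0]
coarse_B = [1 / 4, 1 / 4, 1 / 4, 1 / 4]

print(entropy_bits(per_year_A), entropy_bits(per_year_B))  # ~4.91 vs 2.0: A flatter
print(entropy_bits(coarse_A), entropy_bits(coarse_B))      # ~0.35 vs 2.0: B flatter
```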

Replies from: Charlie Steiner
comment by Charlie Steiner · 2021-12-03T05:28:09.021Z · LW(p) · GW(p)

Lol, was totally expecting "but entropy is ill-defined for continuous distributions except relative to some base measure."

comment by Lukas Finnveden (Lanrian) · 2021-12-02T12:06:03.219Z · LW(p) · GW(p)

It's very easy to construct probability distributions that have earlier timelines, that look more intuitively unconfident, and that have higher entropy than the bio-anchors forecast. You can just take some of the probability mass from the peak around 2050 and redistribute it among earlier years, especially years that are very close to the present, where bioanchors are reasonably confident that AGI is unlikely.
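
A concrete (toy) version, with an invented stand-in for the bio-anchors distribution rather than the real one: take a forecast peaked at 2050 and move some of the peak's mass into 2025-2044; the median comes earlier while the entropy goes up.

```python
# Toy demonstration: redistributing mass from a 2050 peak to earlier years
# gives an earlier median AND higher entropy. The base distribution is an
# invented stand-in for the bio-anchors forecast, not the real one.
import numpy as np

years = np.arange(2022, 2122)

def entropy_bits(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def median_year(p):
    return years[np.searchsorted(np.cumsum(p), 0.5)]

base = np.exp(-0.5 * ((years - 2050) / 6.0) ** 2)   # peak at 2050
base /= base.sum()

moved = 0.15                                         # move 15% of total mass
shifted = base.copy()
shifted[years >= 2045] *= 1 - moved / base[years >= 2045].sum()
shifted[(years >= 2025) & (years < 2045)] += moved / 20   # spread over 2025-2044

print(median_year(base), round(entropy_bits(base), 2))        # 2050, lower entropy
print(median_year(shifted), round(entropy_bits(shifted), 2))  # earlier median, higher entropy
```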

comment by romeostevensit · 2021-12-02T03:17:26.855Z · LW(p) · GW(p)

Spoiler tags are borked the way I'm using them.

anyway, another place to try your hand at calibration:

Humbali: No. You're expressing absolute certainty in your underlying epistemology and your entire probability distribution

no he isn't, why?

Humbali is asking Eliezer to double-count evidence. Consilience is hard if you don't do your homework on the provenance of each heuristic, and instead just naively count up the outputs of people who themselves also didn't do their homework.

Or in other words: "Do not cite the deep evidence to me, I was there when it was written"

And another place to take a whack at:

I'm not sure how to lead you into the place where you can dismiss that thought with confidence.

The particular cited example of statusy aliens seems like extreme hypothesis privileging, which often arises from reference class tennis.

comment by Lotus Cobra · 2021-12-03T04:29:23.631Z · LW(p) · GW(p)

Is Humbali right that generic uncertainty about maybe being wrong, without other extra premises, should increase the entropy of one's probability distribution over AGI, thereby moving out its median further away in time?

I'll give the homework a shot.

Entropy is the amount of uncertainty inherent in your probability distribution, so generic uncertainty implies an increase in the entropy of one's probability distribution (whatever the eventual result is, it provides you more information than it would have if you were more certain beforehand). However, I do not think it follows that the median is therefore further in the future. Increasing one's generic uncertainty about the difficulty of creating AGI means giving up the claim that AGI requires more compute than Google can currently throw at the problem, but it also means giving up the claim that AGI can't be created using affordable 2021 consumer hardware, etc. High-entropy probability distributions cannot rule out researchers having the final stroke of insight in 20 minutes, or the NSA having had an airgapped AGI in their basement since 2017. Generic uncertainty means relying more heavily on your priors; it's not clear to me that this moves the estimate towards longer timelines.

I think.

Replies from: Peter Chatain
comment by Peter Chatain · 2021-12-03T22:02:35.222Z · LW(p) · GW(p)

I was thinking something similar, but I missed the point about the prior. To get intuition, I considered placing something like 99% probability on one day in 2030. Then generic uncertainty spreads out this distribution both ways, leaving the median exactly what it was before. Each bit of probability mass is equally likely to move left or right when you apply generic uncertainty. Although this seems like it should be slightly wrong, since the tiny bit of probability that it is achieved right now can't go back in time, so it will always shift right.

In other words, I think this is right for this particular case, but it's an incorrect argument when significant probability mass is on it happening very soon, or when a very large amount of correcting is done.

Replies from: davidad
comment by davidad · 2021-12-09T21:33:57.420Z · LW(p) · GW(p)

It’s worth noting that gradient descent towards maximum entropy (with respect to the Wasserstein metric and Lebesgue measure, respectively) is equivalent to the heat equation, which justifies your picture of probability mass diffusing outward. It’s also exactly right that if you put a barrier at the left end of the possibility space (i.e. ruling out the date of AGI’s arrival being earlier than the present moment), then this natural direction of increasing entropy will eventually settle into all the probability masses spreading to the right forever, so the median will also move to the right forever.

This isn’t the only way of increasing entropy, though—just a very natural one. Even if I have to keep the median fixed at 2050, by keeping fixed all the 0.5 probability mass to the left of 2050, I can still increase entropy forever by spreading out only the probability masses to the right of 2050 further towards infinity.

comment by joshc (joshua-clymer) · 2022-05-03T00:13:57.106Z · LW(p) · GW(p)

I have an objection to the point about how AI models will be more efficient because they don't need to do massive parallelization:

Massive parallelization is useful for AI models too and for somewhat similar reasons. Parallel computation allows the model to spit out a result more quickly. In the biological setting, this is great because it means you can move out of the way when a tiger jumps toward you. In the ML setting, this is great because it allows the gradient to be computed more quickly. The disadvantage of parallelization is that it means that more hardware is required. In the biological setting, this means bigger brains. Big brains are costly. They use up a lot of energy and make childbearing more difficult as the skulls need to fit through the birth canal.

In the ML setting, however, big brains are not as costly. We don't need to fit our computers in a skull. So, it is not obvious to me that ML models will do fewer computations in parallel than biological brains.

Some relevant information:

  • According to Scaling Laws for Neural Language Models, model performance depends strongly on model size but very weakly on shape (depth vs width).
  • An explanation for the above is that deep residual networks have been observed to behave like Ensembles of shallow networks.
  • GPT-3 uses 96 layers (decoder blocks). That isn't very many serial computations. If a matrix multiplication, softmax, relu, or vector addition count as atomic computations, then there are 11 serial computations per layer, so that's only 1056 serial computations (one way of tallying this is sketched after this list). It is unclear how to compare this to biological neurons, as each neuron may require a number of these serial computations to properly simulate.
  • PaLM has 3 times more parameters than GPT-3 but only has 118 layers.
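
As a sketch of that serial-ops arithmetic (the per-block breakdown below is my own plausible tally, chosen to land on 11; counting conventions vary, e.g. counting both residual adds gives 12):

```python
# One plausible way to count ~11 "atomic" serial ops per decoder block.
ops_per_block = [
    "layernorm",           # pre-attention norm
    "qkv projection",      # matmul
    "q @ k^T",             # matmul
    "softmax",
    "attn @ v",            # matmul
    "output projection",   # matmul
    "residual add",
    "layernorm",           # pre-MLP norm
    "mlp matmul 1",
    "gelu",
    "mlp matmul 2",
]   # the second residual add would make it 12

gpt3_layers, palm_layers = 96, 118
print(len(ops_per_block) * gpt3_layers)   # 1056 serial ops per forward pass for GPT-3
print(len(ops_per_block) * palm_layers)   # 1298 for PaLM
```
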
comment by FireStormOOO · 2021-12-19T06:46:59.174Z · LW(p) · GW(p)

I'm not sure if this is what Eliezer was taking a swing at, but this clicked while reading and I think it's a similar underlying logic error.  Apologies to those not already familiar with the argument I'm referencing:

There’s an stats argument that’s been discussed here before, the short version of which is roughly: "Half of all humans that will ever be born have probably already been born, we think that's about N people. At current birth rates we will make that many people again in X years and humanity will thus on average go extinct Soon™" (insert proper writeup here).  This fails is because it privileges a metric we have no principled reason to think is that special vs any of the equally sensible metrics we could have chosen and used to make the same argument - e.g. years anatomically modern humans have existed; years Earth life has existed.  We could make even more pessimistic estimates by suggesting we focus on e.g., the total energy consumed by human civilization.  Lest we think there’s a finite number of such properties to choose between, we can also combine any set of seemingly relevant metrics with arbitrary weights.

The statistical trick that argument rests on works in the motivating example because [serial numbers in industrially produced parts] is something we do have strong cause to think is tied to the current number of such items that exist and is the single best property we could choose to make that prediction.

These timeline estimates are failing for what feels like very similar reasons.  Why specifically *that* graph/formula for timelines, with those and only those metrics, and factors close to the ones chosen?  If we’re discussing a concrete model rather than a broad family of underconstrained models, we’re very likely privileging the hypothesis and making a wild guess with extra steps.

Replies from: davidad
comment by davidad · 2021-12-19T16:35:34.320Z · LW(p) · GW(p)

Adding some references:

comment by Tapatakt · 2022-04-16T14:59:08.156Z · LW(p) · GW(p)

I don't understand the part that begins with "The last paradigm shifts were from..."

If the last paradigm shifts, including Stack More Layers, were all towards "less knowledge, more compute", how can you draw the conclusion that "the world-ending AGI seems more likely to incorporate more knowledge and less brute force" right in the next paragraph?

I don't necessarily disagree with the conclusion, but it doesn't seem to follow from the preceding reasoning.

comment by Dweomite · 2021-12-12T22:24:19.280Z · LW(p) · GW(p)

Editing error:

wasn't there something at the beginning about how, when you're unsure, you should be careful about criticizing people who are more unsure than you?

"more unsure than you" => "more sure than you"

(Assuming I have correctly understood that this is a reference to the warning that your uncertainty is in your map, not the territory, and that sometimes other people are more sure because they actually know more than you.)

comment by Greg C (greg-colbourn) · 2021-12-07T12:26:33.296Z · LW(p) · GW(p)

2 * 10^16 ops/sec*

(*) Two TPU v4 pods.

Shouldn’t this be 0.02 TPU v4 pods?
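
For what it's worth, a rough check under the assumption (mine, not from the post) that a TPU v4 pod is 4096 chips at roughly 275 teraFLOP/s each in bf16, i.e. about 1.1e18 ops/sec per pod:

```python
pod_ops_per_sec = 4096 * 275e12   # assumed TPU v4 pod throughput, ~1.1e18 ops/sec
quoted = 2e16                     # the figure from the post
print(quoted / pod_ops_per_sec)   # ~0.018, i.e. about 0.02 of one pod
```

So under that assumption the arithmetic does come out to roughly 0.02 pods rather than two.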

comment by Дмитрий Зеленский (dmitrii-zelenskii-1) · 2023-12-03T12:38:23.187Z · LW(p) · GW(p)

In real life, Reality goes off and does something else instead, and the Future does not look in that much detail like the futurists predicted

Half-joking - unless the futurist in question is H. G. Wells. I think there was a quote showing that he effectively predicted the pixelation of early images, along with many similar small-scale details of the early 21st century (although, of course, survivorship bias for details probably influences my memory and the retelling I rely on).