Failures in technology forecasting? A reply to Ord and Yudkowsky

post by MichaelA · 2020-05-08T12:41:39.371Z · LW · GW · 19 comments

Contents

  Case: Rutherford and atomic energy
  Case: Fermi and chain reactions
  Case: Nuclear engineering more broadly
  Case: The Wrights and flight
  Sample size and representativeness
  Conclusion

In The Precipice, Toby Ord writes:

we need to remember how quickly new technologies can be upon us, and to be wary of assertions that they are either impossible or so distant in time that we have no cause for concern. Confident denouncements by eminent scientists should certainly give us reason to be sceptical of a technology, but not to bet our lives against it - their track record just isn’t good enough for that.

I strongly agree with those claims, think they’re very important in relation to estimating existential risk [EA · GW],[1] and appreciate the nuanced way in which they’re stated. (There’s also a lot more nuance around this passage which I haven’t quoted.) I also largely agree with similar claims made in Eliezer Yudkowsky’s earlier essay There's No Fire Alarm for Artificial General Intelligence [LW · GW].

But both Ord and Yudkowsky provide the same set of three specific historical cases as evidence of the poor track record of such “confident denouncements”. And I think those cases provide less clear evidence than those authors seem to suggest. So in this post, I’ll:

  • Highlight ways in which each of those cases may be murkier than Ord and Yudkowsky suggest
  • Discuss issues of sample size and representativeness that limit what we can conclude from these cases

I should note that I don’t think that these historical cases are necessary to support claims like those Ord and Yudkowsky make. And I suspect there might be better evidence for those claims out there. But those cases were the main evidence Ord provided, and among the main evidence Yudkowsky provided. So those cases are being used as key planks supporting beliefs that are important to many EAs and longtermists. Thus, it seems healthy to prod at each suspicious plank on its own terms, and update incrementally [LW · GW].

Case: Rutherford and atomic energy

Ord writes:

One night in 1933, the world’s pre-eminent expert on atomic science, Ernest Rutherford, declared the idea of harnessing atomic energy to be ‘moonshine’. And the very next morning Leo Szilard discovered the idea of the chain reaction.

Yudkowsky [LW · GW] also uses the same case to support similar claims to Ord’s.

However, in a footnote, Ord adds:

[Rutherford’s] prediction was in fact partly self-defeating, as its confident pessimism grated on Szilard, inspiring him to search for a way to achieve what was said to be impossible.

To me, the phrase “the very next morning” in the main text made this sound like an especially clear example of just how astoundingly off the mark an expert’s prediction could be. But it turns out that the technology didn’t just happen to be just about to be discovered in any case. Instead, there was a direct connection between the prediction and its undoing. In my view, that makes the prediction less “surprisingly” incorrect.[2]

Additionally, in the same footnote, Ord suggests that Szilard’s discovery may not even have been “the very next morning”:

There is some debate over the exact timing of Szilard’s discovery and exactly how much of the puzzle he had solved

Finally, the same footnote states:

There is a fascinating possibility that [Rutherford] was not wrong, but deliberately obscuring what he saw as a potential weapon of mass destruction (Jenkins, 2011). But the point would still stand that confident public assertions of the leading authorities were not to be trusted.

This is a very interesting point, and I appreciate Ord acknowledging it. But I don’t quite agree with his last sentence. I’d instead say:

This possibility may weaken the evidence this case provides for the claim that we should often have limited trust in confident public assertions of the leading authorities. But it may not weaken the evidence this case provides for that claim in situations where it’s plausible that those assertions might be based less on genuine beliefs and more on a desire to e.g. mitigate attention hazards [LW · GW].

To be clear, I do think the Rutherford case provides some evidence for Ord and Yudkowsky’s claims. But I think the evidence is weaker than those authors suggested (especially if we focus only on Ord’s main text, but even when also considering his footnote).

Case: Fermi and chain reactions

Ord writes:

In 1939, Enrico Fermi told Szilard the chain reaction was but a ‘remote possibility’, and four years later Fermi was personally overseeing the world’s first nuclear reaction.

Both Yudkowsky [LW · GW] and Stuart Russell also use the same case to support similar claims to Ord’s.

However, in a footnote, Ord writes:

Fermi was asked to clarify the ‘remote possibility’ and ventured ‘ten percent’. Isidor Rabi, who was also present, replied, ‘Ten percent is not a remote possibility if it means that we may die of it. If I have pneumonia and the doctor tells me that there is a remote possibility that I might die, and it’s ten percent, I get excited about it’

I think that that footnote itself contains an excellent lesson (and an excellent quote) regarding failures of communication, and regarding the potential value of quantifying estimates (see also [EA · GW]). Relatedly, this case seems to support the claim that we should be wary of trusting qualitatively stated technology forecasts (even from experts).

But the footnote also suggests to me that this may not have been a failure of forecasting at all, or only a minor one. Hearing that Fermi thought that something that ended up happening was only a “remote possibility” seems to suggest he was wildly off the mark. But if he actually thought the chance was 10%, perhaps he was “right” in some sense - e.g., perhaps he was well-calibrated - and this just happened to be one of the 1 in 10 times that a 10% likely outcome occurs.

To know whether that’s the case, we’d have to see a larger range of Fermi’s predictions, and ensure we’re sampling in an unbiased way, rather than being drawn especially to apparent forecasting failures. I’ll return to these points later.
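
As a rough illustration of why a single data point settles so little here, consider a minimal Bayes-factor sketch in Python. The 50% figure for the "miscalibrated" hypothesis is purely my own illustrative assumption, not anything Ord, Yudkowsky, or Fermi said:

```python
# Two hypothetical readings of Fermi's "10%" forecast (illustrative numbers only):
p_event_if_calibrated = 0.10     # Fermi's 10% was roughly the true chance
p_event_if_miscalibrated = 0.50  # assumed alternative: the true chance was much higher

# Bayes factor from the single observation that chain reactions did prove feasible
bayes_factor = p_event_if_miscalibrated / p_event_if_calibrated  # = 5.0

prior_odds = 1.0  # start from even odds between the two hypotheses, for illustration
posterior_odds = prior_odds * bayes_factor
posterior_p_miscalibrated = posterior_odds / (1 + posterior_odds)

print(f"Bayes factor against calibration: {bayes_factor:.1f}")
print(f"P(miscalibrated | event occurred): {posterior_p_miscalibrated:.2f}")  # ~0.83
```

Under these made-up numbers, one observation moves even prior odds only to around 83%, and the answer depends heavily on the arbitrary 50%; hence the need for a larger, unbiased sample of Fermi's forecasts.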

Case: Nuclear engineering more broadly

As evidence for claims similar to Ord’s, Yudkowsky [LW · GW] also uses the development of nuclear engineering more broadly (i.e., not just the above-mentioned statements by Rutherford and Fermi). For example, Yudkowsky writes:

And of course if you're not the Wright Brothers or Enrico Fermi, you will be even more surprised. Most of the world learned that atomic weapons were now a thing when they woke up to the headlines about Hiroshima.

And:

Fermi wasn't still thinking that net nuclear energy was impossible or decades away by the time he got to 3 months before he built the first pile, because at that point Fermi was looped in on everything and saw how to do it. But anyone not looped in probably still felt like it was fifty years away while the actual pile was fizzing away in a squash court at the University of Chicago

And, in relation to the development of AGI:

I do put a significant chunk of probability mass on "There's not much sign visible outside a Manhattan Project until Hiroshima," because that scenario is simple.

And:

I do predict in a very general sense that there will be no fire alarm [roughly, a clear signal of AGI being soon] that is not an actual running AGI--no unmistakable sign before then that everyone knows and agrees on, that lets people act without feeling nervous about whether they're worrying too early. That's just not how the history of technology has usually played out in much simpler cases like flight and nuclear engineering, let alone a case like this one where all the signs and models are disputed.

I largely agree with, or at least find plausible, Yudkowsky’s claims that there’ll be no “fire alarm” for AGI, and consider that a quite important insight. And I think the case of the development of nuclear weapons probably lends support to some version of those claims. But, for two reasons, I think Yudkowsky may paint an overly simple and confident picture of how this case supports his claims. (Note that I’m not an expert in this period of history, and most of what follows is based on some quick Googling and skimming.)

Firstly, I believe the development of nuclear weapons was highly militarised and secretive from early on, and to a much greater extent than AI development is. My impression is that the general consensus is that non-military labs such as DeepMind and OpenAI truly are leading the field, and that, if anything, AI development is worryingly open, rather than highly secretive (see e.g. here). So it seems there are relevant disanalogies between the case of nuclear weapons development and AI development (or indeed, most technological development), and that we should be substantially uncertain when trying to infer from the former case to the latter.

Secondly, I believe the group of people who did know about nuclear weapons before the bombing of Hiroshima, or who believed such weapons may be developed soon, was (somewhat) larger than one might think from reading Yudkowsky’s essay. In particular, the British, Germans, and Soviets each had their own nuclear weapons programs, and Soviet leaders knew of both the German and US efforts. And I don’t know of any clear evidence either way regarding whether scientists, policymakers, and members of the public who didn’t know of these programs would’ve assumed nuclear weapons were impossible or many decades away.

That said, it is true that:

  • the Manhattan Project was kept highly secret; even Harry Truman was not told about it during his short time as Vice President, and learned of it only after becoming President
  • most of the world (including, presumably, most scientists and policymakers outside the various national programs) did, as Yudkowsky says, learn that atomic weapons “were now a thing” only from the headlines about Hiroshima

So I do think this case provides evidence that technological developments can take a lot of people outside of various “inner circles” by surprise, at least in cases of highly secretive developments during wartime.

Case: The Wrights and flight

Ord writes:

The staggering list of eminent scientists who thought heavier-than-air flight to be impossible or else decades away is so well rehearsed as to be cliché.

I haven’t looked into that claim, and Ord gives no examples or sources. Yudkowsky [LW · GW] references this list of “famous people and scientists proclaiming that heavier-than-air flight was impossible”. That too gives no sources. As a spot check, I googled the first and last of the quotes on that list. The first appears to be substantiated. For the last, the first page of results seemed to consist entirely of other pages using the quote in “inspirational” ways without giving a source. Ultimately, I wouldn’t be surprised if there’s indeed a staggering list of such proclamations, but I also wouldn’t be surprised if a large portion of them are apocryphal (even if “well rehearsed”).

Ord follows the above sentence with:

But fewer know that even Wilbur Wright himself predicted [heavier-than-air flight] was at least fifty years away - just two years before he invented it.

The same claim is also made by Yudkowsky.

But in a footnote, Ord writes:

Wilbur Wright explained to the Aero-club de France in 1908: ‘Scarcely ten years ago, all hope of flying had almost been abandoned; even the most convinced had become doubtful, and I confess that in 1901 I said to my brother Orville that men would not fly for 50 years. Two years later, we ourselves were making flight.’

Thus, it seems our evidence here is a retrospective account, from the inventor himself, of a statement he once made. One possible explanation of Wright’s 1908 comments is that a genuine, failed attempt at forecasting occurred in 1901. Here are three alternative possible explanations:

  1. Wright just made this story up after the fact, because the story makes his achievement sound all the more remarkable and unexpected.
  2. In 1908, Wright did remember making this statement, but this memory resulted from gradual distortions or embellishments of memory over time.
  3. The story is true, but it’s a story of one moment in which Wright said men would not fly for 50 years, as something like an expression of frustration or hyperbole; it’s not a genuine statement of belief or a genuine attempt at prediction.

Furthermore, even if that was a genuine prediction Wright made at the time, it seems it was a prediction made briefly, once, during many years of working on a topic, and which wasn’t communicated publicly. Thus, even if it was a genuine prediction, it may have little bearing on the trustworthiness in general of publicly made forecasts about technological developments.

Sample size and representativeness

Let’s imagine that all of my above points turn out to be unfounded or unimportant, and that the above cases turn out to all be clear-cut examples of failed technology forecasts by relevant experts. What, then, could we conclude from that?

Most importantly, that’d provide very strong evidence that experts saying a technological development is impossible or far away doesn’t guarantee that that’s the case. And it would provide some evidence that such forecasts may often be mistaken. In places, this is all Ord and Yudkowsky are claiming, and it might be sufficient to support some of their broader conclusions (e.g., that it makes sense to work on AI safety now). And in any case, those broader conclusions can also be supported by other arguments.

But it’s worth considering that these are just four cases, out of the entire history of predictions made about technological developments. That’s a very small sample.

That said, as noted in this post [LW · GW] and this comment [LW(p) · GW(p)], we can often learn a lot about what's typical of some population (e.g., all expert technology forecasts) using just a small sample from that population. What's perhaps more important is whether the sample is representative of the population. So it's worth thinking about how one's sample was drawn from the population. I'd guess that the sampling process for these historical cases wasn't random, but instead looked more like one of the following scenarios:

  1. Ord and Yudkowsky had particular points to make, and went looking for past forecasts that supported those points.

  2. When they came to make their points, they already happened to know of those forecasts due to prior searches motivated by similar goals, either by themselves or by others in their communities.

    • E.g., I’d guess that Ord was influenced by Yudkowsky’s piece.
  3. They already happened to know of many past technology forecasts, and mentioned the subset that suited their points.

If so, then this was a biased rather than representative sample.[3] And in that case, we should be very careful in drawing conclusions from it about what is standard, rather than about the plausibility of such failures occurring on occasion.[4]
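
As a rough sketch of the point in footnote 4 (all numbers here are my own illustrative assumptions): if motivated searchers can comb through a large pool of historical expert forecasts, then finding four failures is nearly guaranteed whether experts are usually right or usually wrong, so the likelihood ratio between those hypotheses is close to 1:

```python
from math import comb

def p_at_least_k_failures(n_forecasts: int, k: int, accuracy: float) -> float:
    """P(at least k failed forecasts in a pool of n_forecasts), assuming each
    forecast is independently correct with probability `accuracy`."""
    p_fail = 1 - accuracy
    p_fewer_than_k = sum(
        comb(n_forecasts, i) * p_fail**i * (1 - p_fail) ** (n_forecasts - i)
        for i in range(k)
    )
    return 1 - p_fewer_than_k

# Hypothetical setup: searchers could draw on ~1,000 prominent historical
# technology forecasts, and needed to find 4 clear-cut failures.
for accuracy in (0.99, 0.50, 0.01):
    p = p_at_least_k_failures(1_000, 4, accuracy)
    print(f"P(search finds 4 failures | experts right {accuracy:.0%} of the time) = {p:.3f}")
```

All three probabilities come out near 1, so observing that the search succeeded barely discriminates between the hypotheses; four failures in a genuinely random sample of four forecasts would be a very different matter.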

It’s also interesting to note that Ord, Yudkowsky, and Russell all wished to make similar points, and all drew from the same set of four cases. I would guess that this is purely because those authors were influenced by each other (or some other shared source). But it may also be because those cases are among the cases that most clearly support their points. And, above, I argued that each case provides less clear evidence for their points than one might think. So it seems possible that the repeated reaching for these murky examples is actually weak evidence that it’s hard to find clear-cut evidence of egregious technology forecasting failures by relevant experts. (But it’d be better to swap my speculation for an active search for such evidence, which I haven’t taken the time to do.)

Conclusion

Both Ord’s and Yudkowsky’s discussions of technology forecasting are much more nuanced than saying long-range forecasting is impossible or that we should pay no attention at all to experts’ technology forecasts. They both cover more arguments and evidence than just the handful of cases discussed here. And as I said earlier, I largely agree with their claims, and overall see them as very important.

But both authors do prominently feature this small set of cases, and, in my opinion, imply these cases support their claims more clearly than they do. And that seems worth knowing, even if the same or similar claims could be supported using other evidence. (If you know of other relevant evidence, please mention it in the comments!)

Overall, I find myself mostly just very uncertain of the trustworthiness of experts’ forecasts about technological developments, as well as about how trustworthy forecasts could be given better conditions (e.g., better incentives, calibration training). And I don’t think we should update much based on the cases described by Ord and Yudkowsky, unless our starting position was “Experts are almost certainly right” (which, to be fair, may indeed be many people's implicit starting position, and is at times the key notion Ord and Yudkowsky are very valuably countering).

Note that this post is far from a comprehensive discussion on the efficacy, pros, cons, and best practices for long-range or technology-focused forecasting. For something closer to that, see Muehlhauser,[5] who writes, relevantly:

Most arguments I’ve seen about the feasibility of long-range forecasting are purely anecdotal. If arguing that long-range forecasting is feasible, the author lists a few example historical forecasts that look prescient in hindsight. But if arguing that long-range forecasting is difficult or impossible, the author lists a few examples of historical forecasts that failed badly. How can we do better?

I also discuss similar topics, and link to other sources, in my post introducing a database of existential risk estimates [EA · GW].

This is one of a series of posts I plan to write that summarise, comment on, or take inspiration from parts of The Precipice. You can find a list of all such posts here [EA(p) · GW(p)].

This post is related to my work with Convergence Analysis, but the views I expressed in it are my own. My thanks to David Kristoffersson [EA · GW] and Justin Shovelain [EA · GW] for useful comments on an earlier draft.


  1. A related topic for which these claims are relevant is the likely timing and discontinuity of AI developments. This post will not directly focus on that topic. Some sources relevant to that topic are listed here [LW(p) · GW(p)]. ↩︎

  2. This may not reduce the strength of the evidence this case provides for certain claims. One such claim would be that we should put little trust in experts’ forecasts of AGI being definitely a long way off, and this is specifically because such forecasts may themselves annoy other researchers and spur them to develop AGI faster. But Ord and Yudkowsky didn’t seem to be explicitly making claims like that. ↩︎

  3. I don’t mean “biased” as a loaded term, and I’m not applying that term to Ord or Yudkowsky, just to their samples of historical cases. ↩︎

  4. Basically, I’d guess that the evidence we have is “people who were looking for examples of experts making technology forecasting mistakes were able to find 4 cases as clear-cut as the cases Yudkowsky gives”. This evidence seems almost as likely conditional on “experts’ technology forecasts are right 99% of the time” as conditional on “experts’ technology forecasts are right 1% of the time” (to take two examples of possible hypotheses we might hold). Thus, I don’t see it as providing much Bayesian evidence about which of those hypotheses is more likely. I wouldn’t say the same if our evidence was instead “all four of the cases we randomly sampled were as clear-cut as the cases Yudkowsky gives”. ↩︎

  5. Stuart Armstrong’s recent post Assessing Kurzweil predictions about 2019: the results [LW · GW] is also somewhat relevant. ↩︎

19 comments


comment by gwern · 2020-05-08T18:56:28.814Z · LW(p) · GW(p)

But it turns out that the technology didn’t just happen to be just about to be discovered in any case. Instead, there was a direct connection between the prediction and its undoing. In my view, that makes the prediction less “surprisingly” incorrect.[2]

? If you are trying to make the point that technology is unpredictable, an example of a 'direct connection' and backfiring is a great example because it shows how fundamentally unpredictable things are: he could hardly have expected that his dismissal would spur an epochal discovery and that seems extremely surprising; this supports Ord & Yudkowsky, it doesn't contradict them. And if you're trying to make a claim that forecasts systematically backfire, that's even more alarming than O/Y's claims, because it means that expert forecasts will not just make a nontrivial number of errors (enough to be an x-risk concern) but will be systematically inversely correlated with risks and the biggest risks will come from the ones experts most certainly forecast to not be risks...

But the footnote also suggests to me that this may not have been a failure of forecasting at all, or only a minor one. Hearing that Fermi thought that something that ended up happening was only a “remote possibility” seems to suggest he was wildly off the mark. But if he actually thought the chance was 10%, perhaps he was “right” in some sense - e.g., perhaps he was well-calibrated - and this just happened to be one of the 1 in 10 times that a 10% likely outcome occurs.

So to summarize that case study criticism: everything you factchecked was accurate and you have no evidence of any kind that the Fermi story does not mean what O/Y interpret it as.

Furthermore, even if that was a genuine prediction Wright made at the time, it seems it was a prediction made briefly, once, during many years of working on a topic, and which wasn’t communicated publicly. Thus, even if it was a genuine prediction, it may have little bearing on the trustworthiness in general of publicly made forecasts about technological developments.

So to summarize that case study criticism: everything you factchecked was accurate and you have no evidence of any kind that the Wright story does not mean what O/Y interpret it as.

Let’s imagine that all of my above points turn out to be unfounded or unimportant

Of the 4 case studies you criticize, your claim actually supports them in the first one, you agree the second one is accurate, and you provide only speculations and no actual criticisms in the third and fourth.

comment by ESRogs · 2020-05-08T20:55:05.245Z · LW(p) · GW(p)
So to summarize that case study criticism: everything you factchecked was accurate and you have no evidence of any kind that the Fermi story does not mean what O/Y interpret it as.

Re: the Fermi quote, this does not seem to be an accurate summary to me. Learning that Fermi meant 10% when he said "remote possibility" does in fact change how I view that incident.

comment by dxu · 2020-05-09T00:09:51.563Z · LW(p) · GW(p)

If it were common knowledge that any hyperbolic language experts use when speaking about the unlikelihood of AGI (e.g. Andrew Ng's statement "worrying about AI safety is like worrying about overpopulation on Mars") actually corresponded to a 10% subjective probability of AGI, things would look very different than they currently do.

More generally, on a strategic level there is very little difference between a genuinely incorrect forecast and one that is "correct", but communicated so poorly as to create a wrong impression in the mind of the listener. If the state of affairs is such that anyone who privately believes there is a 10% chance of AGI is incentivized to instead report their assessment as "remote", the conclusion of Ord/Yudkowsky holds, and it remains impossible to discern whether AGI is imminent by listening to expert forecasts.

(I also don't believe that said experts, if asked to translate their forecasts to numerical probabilities, would give a median estimate anywhere near as high as 10%, but that's largely tangential to the discussion at hand.)

Furthermore, and more importantly, however: I deny that Fermi's 10% somehow detracts from the point that forecasting the future of novel technologies is hard.

Four years prior to overseeing the world's first nuclear reaction, Fermi believed that it was more likely than not that a nuclear chain reaction was impossible. Setting aside for a moment the question of whether Fermi's specific probability assignment was negligible, or merely small, what this indicates is that the majority of the information necessary to determine the possibility of a nuclear chain reaction was in fact unavailable to Fermi at the time he made his forecast. This does not support the idea that making predictions about technology is easy, any more than it would have if Fermi had assigned 0.001% instead of 10%!

More generally, the specific probability estimate Fermi gave is nothing more than a red herring, one that is given undue attention by the OP. The relevant factor to Ord/Yudkowsky's thesis is how much uncertainty there is in the probability distribution of a given technology--not whether the mean of said distribution, when treated as a point estimate, happens to be negligible or non-negligible. Focusing too much on the latter not only obfuscates the correct lesson to be learned, but also sometimes leads to nonsensical results.

comment by ESRogs · 2020-05-09T07:35:15.637Z · LW(p) · GW(p)
If it were common knowledge that any hyperbolic language experts use when speaking about the unlikelihood of AGI (e.g. Andrew Ng's statement "worrying about AI safety is like worrying about overpopulation on Mars") actually corresponded to a 10% subjective probability of AGI, things would look very different than they currently do.

Did you have anything specific in mind about how things would look different? I have the impression that you're trying to imply something in particular, but I'm not sure what it is.

EDIT: Also, I'm a little confused about whether you mean to be agreeing with me or disagreeing. The tone of your comment sounds like disagreeing, but content-wise it seems like we're both agreeing that if someone is using language like "remote possibility" to mean 10%, that is a noteworthy and not-generally-obvious fact.

Maybe you're saying that experts do frequently obfuscate with hyperbolic language, s.t. it's not surprising to you that Fermi would mean 10% when he said "remote possibility", but that this fact is not generally recognized. (And things would look very different if it was.) Is that it?

comment by MichaelA · 2020-05-09T09:37:41.586Z · LW(p) · GW(p)

Minor thing: did you mean to refer to Fermi rather than to Rutherford in that last paragraph?

comment by ESRogs · 2020-05-09T21:37:00.153Z · LW(p) · GW(p)

Oops, yes. Fixed.

comment by MichaelA · 2020-05-09T01:12:59.561Z · LW(p) · GW(p)

I think this comment raises some valid and interesting points. But I'd push back a bit on some points.

(Note that this comment was written quickly, so I may say things a bit unclearly or be saying opinions I haven't mulled over for a long time.)

More generally, on a strategic level there is very little difference between a genuinely incorrect forecast and one that is "correct", but communicated so poorly as to create a wrong impression in the mind of the listener.

There's at least some truth to this. But it's also possible to ask experts to give a number, as Fermi was asked. If the problem is poor communication, then asking experts to give a number will resolve at least part of the problem (though substantial damage may have been done by planting the verbal estimate in people's minds). If the problem is poor estimation, then asking for an explicit estimate might make things worse, as it could give a more precise incorrect answer for people to anchor on. (I don't know of specific evidence that people anchor more on numerical than verbal probability statements, but it seems likely to me. Also, to be clear, despite this, I think I'm generally in favour of explicit probability estimates in many cases.)

If the state of affairs is such that anyone who privately believes there is a 10% chance of AGI is incentivized to instead report their assessment as "remote", the conclusion of Ord/Yudkowsky holds, and it remains impossible to discern whether AGI is imminent by listening to expert forecasts.

I think this is true if no one asks the experts for explicit numerical estimate, or if the incentives to avoid giving such estimates are strong enough that experts will refuse when asked. I think both of those conditions hold to a substantial extent in the real world and in relation to AGI, and that that is a reason why the Fermi case has substantial relevance to the AGI case. But it still seems useful to me to be aware of the distinction between failures of communication vs of estimation, as it seems we could sometimes get evidence that discriminates between which of those is occurring/common, and that which is occurring/common could sometimes be relevant.

Furthermore, and more importantly, however: I deny that Fermi's 10% somehow detracts from the point that forecasting the future of novel technologies is hard.

I definitely wasn't claiming that forecasting the future of novel technologies is easy, and I didn't interpret ESRogs as doing so either. What I was exploring was merely whether this case is a clear case of an expert's technology forecast being "wrong" (and, if so, "how wrong"), and what this reflects about the typical accuracy of expert technology forecasts. They could conceivably be typically accurate even if very very hard to make, if experts are really good at it and put in lots of effort. But I think more likely they're often wrong. The important question is essentially "how often", and this post bites off the smaller question "what does the Fermi case tell us about that".

As for the rest of the comment, I think both the point estimates and the uncertainty are relevant, at least when judging estimates (rather than making decisions based on them). This is in line with my understanding from e.g. Tetlock's work. I don't think I'd read much into an expert saying 1% rather than 10% for something as hard to forecast as an unprecedented tech development, unless I had reason to believe the expert was decently calibrated. But if they have given one of those numbers, and then we see what happens, then which number they gave makes a difference to how calibrated vs uncalibrated I should see them as (which I might then generalise in a weak way to experts more widely).

That said, I do generally think uncertainty of estimates is very important, and think the paper you linked to makes that point very well. And I do think one could easily focus too much on point estimates; e.g., I wouldn't plug Ord's existential risk estimates [EA · GW] into a model as point estimates without explicitly representing a lot of uncertainty too.

comment by MichaelA · 2020-05-09T00:41:32.825Z · LW(p) · GW(p)

So to summarize that case study criticism: everything you factchecked was accurate and you have no evidence of any kind that the Fermi story does not mean what O/Y interpret it as.

I find this a slightly odd sentence. My "fact-check" was literally just quoting and thinking about Ord's own footnote. So it would be very odd if that resulted in discovering that Ord was inaccurate. This connects back to the point I make in my first comment response: this post was not a takedown.

My point here was essentially that:

  • I think the main text of Ord's book (without the footnote) would make a reader think Fermi's forecast was very very wrong.
  • But in reality it is probably better interpreted as very very poorly communicated (which is itself relevant and interesting), and either somewhat wrong or well-calibrated but unlucky.

I do think the vast majority of people would think "remote possibility" means far less than 10%.

comment by MichaelA · 2020-05-09T00:36:12.923Z · LW(p) · GW(p)

Firstly, I think I should say that this post was very much not intended as anything like a scathing takedown of Ord and Yudkowsky's claims or evidence. Nor did I mean to imply I'm giving definitive arguments that these cases provide no evidence for the claims made. I mean this to have more of a collaborative than combative spirit in relation to Ord and Yudkowsky's projects.

My aim was simply to "prod at each suspicious plank on its own terms, and update incrementally." And my key conclusion is that the authors, "in my opinion, imply these cases support their claims more clearly than they do" - not that the cases provide no evidence. It seems to me healthy to question evidence we have - even for conclusions we do still think are right, and even when our questions don't definitively cut down the evidence, but rather raise reasons for some doubt.

It's possible I could've communicated that better, and I'm open to suggestions on that front. But from re-reading the post again, especially the intro and conclusion, it does seem I repeatedly made explicit statements to this effect. (Although I did realise after going to bed last night that the "And I don’t think we should update much..." sentence was off, so I've now made that a tad clearer.)

I've split my response about the Rutherford and Fermi cases into different comments.

Of the 4 case studies you criticize, your claim actually supports them in the first one, you agree the second one is accurate, and you provide only speculations and no actual criticisms in the third and fourth.

Again, I think this sentence may reflect interpreting this post as much more strident and critical than it was really meant to be. I may be wrong about the "direct connection" thing (discussed in a separate comment), but I do think I raise plausible reasons for at least some doubt about (rather than outright dismissal of) the evidence each case provides, compared to how a reader might initially interpret them.

I'm also not sure what "only speculations and no actual criticisms" would mean. If you mean e.g. that I don't have evidence that a lot of Americans would've believed nuclear weapons would exist someday, then yes, that's true. I don't claim otherwise. But I point out a potentially relevant disanalogy between nuclear weapons development and AI development. And I point out that "the group of people who did know about nuclear weapons before the bombing of Hiroshima, or who believed such weapons may be developed soon, was (somewhat) larger than one might think from reading Yudkowsky’s essay." And I do give some evidence for that, as well as pointing out that I'm not aware of evidence either way for one relevant point.

Also, I don't really claim any of this post to be "criticism", at least in the usual fairly negative sense, just "prod[ding] at each suspicious plank on its own terms". I'm explicitly intending to make only relatively weak claims, really.

And then the "Sample size and representativeness" section provides largely separate reasons why it might not make much sense to update much on these cases (at least from a relatively moderate starting point) even ignoring those reasons for doubt. (Though see the interesting point 3 in Daniel Kokotajlo's comment [LW(p) · GW(p)].)

comment by MichaelA · 2020-05-09T00:40:26.450Z · LW(p) · GW(p)

? If you are trying to make the point that technology is unpredictable, an example of a 'direct connection' and backfiring is a great example because it shows how fundamentally unpredictable things are: he could hardly have expected that his dismissal would spur an epochal discovery and that seems extremely surprising; this supports Ord & Yudkowsky, it doesn't contradict them. And if you're trying to make a claim that forecasts systematically backfire, that's even more alarming than O/Y's claims, because it means that expert forecasts will not just make a nontrivial number of errors (enough to be an x-risk concern) but will be systematically inversely correlated with risks and the biggest risks will come from the ones experts most certainly forecast to not be risks...

I think this paragraph makes valid points, and have updated in response (as well as in response to ESRogs indication of agreement). Here are my updated thoughts on the relevance of the "direct connection":

  • I may be wrong about the "direct connection" slightly weakening the evidence this case provides for Ord and Yudkowsky's claims. I still feel like there's something to that, but I find it hard to explain it precisely, and I'll take that, plus the responses from you and ESRogs, as evidence that there's less going on here than I think.
  • I guess I'd at least stand by my literal phrasings in that section, which were just about my perceptions. But perhaps those perceptions were erroneous or idiosyncratic, and perhaps to the point where they weren't worth raising.
  • That said, it also seems possible to me that, even if there's no "real" reason why a lack of direct connection should make this more "surprising", many people would (like me) erroneously feel it does. This could perhaps be why Ord writes "the very next morning" rather than just "the next morning".
  • Perhaps what I should've emphasised more is the point I make in footnote 2 (which is also in line with some of what you say):

This may not reduce the strength of the evidence this case provides for certain claims. One such claim would be that we should put little trust in experts’ forecasts of AGI being definitely a long way off, and this is specifically because such forecasts may themselves annoy other researchers and spur them to develop AGI faster. But Ord and Yudkowsky didn’t seem to be explicitly making claims like that.

  • Interestingly, Yudkowsky makes a similar point in the essay this post partially responds to: "(Also, Demis Hassabis was present, so [people at a conference who were asked to make a particular forecast] all knew that if they named something insufficiently impossible, Demis would have DeepMind go and do it [and thereby make their forecast inaccurate].)" (Also, again, as I noted in this post, I do like that essay.)

  • I think that that phenomenon would cause some negative correlation between forecasts and truth, in some cases. I expect that, for the most part, that'd get largely overwhelmed by a mixture of random inaccuracies and a weak tendency towards accuracy. I wouldn't claim that, overall, "forecasts systematically backfire".

comment by ESRogs · 2020-05-08T20:59:24.033Z · LW(p) · GW(p)
Of the 4 case studies you criticize, your claim actually supports them in the first one, you agree the second one is accurate

I agree with you about which way the direct connection points. But I think the point about Rutherford's potential deliberate obfuscation is significant.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-05-08T15:30:42.829Z · LW(p) · GW(p)

Well said!

1. I wonder if the AI Impacts discontinuities project might serve as a sort of random sample. We selected technological advances on the basis of how trend-busting they were, and while that may still be correlated with how surprising they were to experts at the time, the sample is surely less biased than the Ord/Yudkowsky/Russell set of examples. It would be interesting to see how many of the discontinuities were surprising, and how much, to the people at the time.

2. AI research is open today... kinda. I mean, OpenAI has recently started to close. But I don't think this is super relevant to the overall discussion, because the issue is about whether AI research will be open or closed at the time crazy AGI advances start happening. I think it's quite likely that it will be closed, and thus (absent leaks and spies) the only people who have advance warning will be project insiders. (Unless it is a slow takeoff, such that people outside can use trend extrapolation to not be surprised by the stuff coming out of the secret projects)

3. Even cherry-picked anecdotes can be useful evidence, if they are picked from a set that is sufficiently small. E.g. if there are only 100 'important' technological advancements, and nukes and planes are 2 of them, then that means there's at least a 2% chance that another important technological advancement will catch almost the whole world by complete surprise. I don't have a principled way of judging importance, but it seems plausible to me that if you asked me to list the 100 most important advancements I'd have nukes and flight in there. Heck, they might even make the top 20.

comment by MichaelA · 2020-05-08T23:49:23.039Z · LW(p) · GW(p)

Thanks!

  1. Yes, I think that'd be very interesting. If this post could play a tiny role in prompting something like that, I'd be very happy. And that's the case whether or not it supports some of Ord and Yudkowsky's stronger claims/implications (i.e., beyond just that experts are sometimes wrong about these things) - it just seems it'd be good to have some clearer data, either way. ETA: But I take this post by Muehlhauser as indirect evidence that it'd be hard to do at least certain versions of this.

  2. Interesting point. I think that, if we expect AGI research to be closed during, or shortly before, really major/crazy AGI advances, then the nuclear engineering analogy would indeed have more direct relevance, from that point on. But it might not make the analogy stronger until those advances start happening. So perhaps we wouldn't necessarily strongly expect major surprises about when AGI development starts having major/crazy advances, but then expect a closing up and major surprises from that point on. (But this is all just about what that one analogy might suggest, and we obviously have other lines of argument and evidence too.)

  3. That's a good point; I hadn't really thought about that explicitly, and if I had I think I would've noted it in the post. But that's about how well the cases provide evidence about the likely inaccuracy of expert forecasts (or surprisingness) of the most important technology developments, or something like that. This is what Ord and Yudkowsky (and I) primarily care about in this context, as their focus when they make these claims is AGI. But they do sometimes (at least in my reading) make the claims as if they apply to technology forecasts more generally.

comment by Ben Pace (Benito) · 2020-05-09T01:45:39.202Z · LW(p) · GW(p)

I really liked reading this post, and that you documented looking into key claims and doing quick epistemic spot checks.

But it’s worth considering that these are just four cases, out of the entire history of predictions made about technological developments. That’s a very small sample.

I was expecting you might say something like this. I do want to point out how small sample sizes are incredibly useful. In How To Measure Anything, Hubbard gives the example of estimating the weight in grams of the average jelly baby. Now, if you're like me, by that point you've managed to get through life not really knowing how much a gram is. What's the right order of magnitude? 10 grams? 1000 grams? 0.1 grams? What Hubbard points out is that if I tell you that a random one out of the packet weighs 190g, suddenly you have a massive amount of information about even what order of magnitude is sensible. The first data point is really valuable for orienting in a very wide open space.

I haven't the time to respond in detail, so I'll just mention that in this situation, I think that looking into predictions that a technology cannot be made and is far out, and finding some things confidently saying it's very far out while it's in fact days/months away, is very surprising and has zoomed me in quite substantially on what sorts of theories make sense here, even given many of the qualifiers above.

comment by MichaelA · 2020-05-09T09:54:27.739Z · LW(p) · GW(p)

I do want to point out how small sample sizes are incredibly useful.

Yeah, I think that point is true, valuable, and relevant. (I also found How To Measure Anything very interesting and would recommend it, or at least this summary by Muehlhauser [LW · GW], to any readers of this comment who haven't read those yet.)

In this case, I think the issue of representativeness is more important/relevant than sample size. On reflection, I probably should've been clearer about that. I've now edited that section to make that clearer, and linked to this comment and Muehlhauser's summary post. So thanks for pointing that out!

comment by Thomas Kwa (thomas-kwa) · 2020-05-09T03:33:10.018Z · LW(p) · GW(p)

The thesis of this post is too close to "EY/TO use case studies as evidence that we should be skeptical of expert forecasts in general, but they are actually only evidence against expert forecasts that..."

  • could cause someone to make a discovery they wouldn't otherwise (Rutherford)
  • have skewed public perception due to communication difficulties (Fermi)
  • are in a field where the most advanced research is done in secret by militaries
  • aren't public/have dubious sourcing (Wright)

I doubt the flaws in these examples are more severe than the flaws with the average 20th century expert prediction. Too many degrees of freedom to find some reason we shouldn't count them as "serious" predictions. Are current experts better at stating predictions so we can't give them a pass if they turn out very wrong? How much should we expect this to improve current expert predictions? I think this is the true crux, which this post doesn't touch on.

comment by MichaelA · 2020-05-09T23:34:23.995Z · LW(p) · GW(p)

I actually quite like your four dot points, as summaries of some distinguishing features of these cases. (Although with Rutherford, I'd also highlight the point about whether or not the forecast is likely to reflect genuine beliefs, and perhaps more specifically whether or not a desire to mitigate attention hazards may be playing a role.)

And I think "Too many degrees of freedom to find some reason we shouldn't count them as "serious" predictions" gets at a good point. And I think it's improved my thinking on this a bit.

Overall, I think that your comment would be a good critique of this post if this post was saying or implying that these case studies provide no evidence for the sorts of claims Ord and Yudkowsky want to make. But my thesis was genuinely just that "I think those cases provide less clear evidence [not no evidence] than those authors seem to suggest". And I genuinely just aimed to "Highlight ways in which those cases may be murkier than Ord and Yudkowsky suggest" (and also separately note the sample size and representativeness points).

It wasn't the case that I was using terms like "less clear" and "may be murkier" to be polite or harder-to-criticise (in a motte-and-bailey sort of way), while in reality I harboured or wished to imply some stronger thesis; instead, I genuinely just meant what I said. I just wanted to "prod at each suspicious plank on its own terms", not utterly smash each suspicious plank, let alone bring the claims resting atop them crashing down.

That may also be why I didn't touch on what you see as the true crux (though I'm not certain, as I'm not certain I know precisely what you mean by that crux). This post had a very specific, limited scope. As I noted, "this post is far from a comprehensive discussion on the efficacy, pros, cons, and best practices for long-range or technology-focused forecasting."

To sort-of restate some things and sort-of address your points: I do think each of the cases provides some evidence in relation to the question (let's call it Q1) "How overly 'conservative' (or poorly-calibrated) do experts' quantitative forecasts of the likelihood or timelines of technology tend to be, under "normal" conditions?" I think the cases provide clearer evidence in relation to questions like how overly 'conservative' (or poorly-calibrated) do experts' forecasts of the likelihood or timelines of technology tend to be, when...

  • it seems likelier than normal that the forecasts themselves could change likelihoods or timelines
    • I'm not actually sure what we'd base that on. Perhaps unusually substantial prominence or publicity of the forecaster? Perhaps a domain in which there's a wide variety of goals that could be pursued, and which one is pursued has sometimes been decided partly to prove forecasts wrong? AI might indeed be an example; I don't really know.
  • it seems likelier than normal that the forecaster isn't actually giving their genuine forecast (and perhaps more specifically, that they're partly aiming to mitigate attention hazards)
  • cutting-edge development on the relevant tech is occurring in highly secretive or militarised ways

...as well as questions about poor communication of forecasts by experts.

I think each of those questions other than Q1 are also important. And I'd agree that, in reality, we often won't know much about how far conditions differ from "normal conditions", or what "normal conditions" are really like (e.g., maybe forecasts are usually not genuine beliefs). These are both reasons why the "murkiness" I highlight about these cases might not be that big a deal in practice, or might do something more like drawing our attention to specific factors that should make us wary of expert predictions, rather than just making us wary in general.

In any case, I think the representativeness issue may actually be more important. As I note in footnote 4, I'd update more on these same cases (holding "murkiness" constant) if they were the first four cases drawn randomly, rather than through what I'd guess was a somewhat "biased" sampling process (which I don't mean as a loaded/pejorative term).

comment by River (frank-bellamy) · 2020-05-09T02:32:28.341Z · LW(p) · GW(p)

Minor factual quibble: Truman didn't spend "years" as Vice President, he was sworn in as Vice President less than 3 months before being sworn in as President. Which possibly makes the fact that he wasn't read in on the Manhattan Project a little less surprising.

comment by MichaelA · 2020-05-09T04:14:50.866Z · LW(p) · GW(p)

Oh, good point, thanks! I had assumed Truman was VP for the whole time FDR was in office. I've now (a) edited the post to swap "during his years as Vice President" with "during his short time as Vice President", and (b) learned a fact I'm a tad embarrassed I didn't already know!