[EA xpost] The Rationale-Shaped Hole At The Heart Of Forecasting

post by dschwarz · 2024-04-02T17:40:44.278Z · LW · GW · 2 comments

This is a link post for https://forum.effectivealtruism.org/posts/qMP7LcCBFBEtuA3kL/the-rationale-shaped-hole-at-the-heart-of-forecasting

An excerpt from the above that will be relevant to this crowd:

Ben Landau-Taylor of Bismarck Analysis wrote a piece on March 6 called “Probability Is Not A Substitute For Reasoning”, citing a piece of his in which he writes:

There has been a great deal of research on what criteria must be met for forecasting aggregations to be useful, and as Karger, Atanasov, and Tetlock argue, predictions of events such as the arrival of AGI are a very long way from fulfilling them.

Last summer, Tyler Cowen wrote on AGI ruin forecasts:

Publish, publish, not on blogs, not long stacked arguments or six hour podcasts or tweet storms, no, rather peer review, peer review, peer review, and yes with models too... if you wish to convince your audience of one of the most radical conclusions of all time…well, more is needed than just a lot of vertically stacked arguments.

Widely divergent views and forecasts on AGI persist, leading to FRI’s excellent adversarial collaboration on forecasting AI risk [EA · GW] this month. Reading it, I saw… a lot of vertically stacked arguments.

<...>

Tyler Cowen again:

If the chance of existential risk from AGI is 99 percent, or 80 percent, or even 30 percent, surely some kind of modeled demonstration of the basic mechanics and interlocking pieces is possible.

It is possible! It’s much harder than modeling geopolitics, where the future more resembles the past. I’m partial to Nuño’s base rates of technological disruption [LW · GW], which led him to posit “30% that AI will undergo a ‘large and robust’ discontinuity, at the rate of maybe 2% per year if it does so.” The beauty of his analysis is that you can inspect it. I think Nuño and I would converge, or get close to it, if we hashed it out.
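To make the “you can inspect it” point concrete, here is a minimal sketch of one way to combine the two quoted figures. The assumptions are mine, not Nuño’s actual model: the 30% is read as the probability that a “large and robust” discontinuity ever happens, and the 2% as a constant per-year hazard rate conditional on that. Change either number and the conclusion changes with it.

```python
# A minimal sketch (illustrative assumptions, not Nuño's actual analysis):
# 30% = probability a "large and robust" discontinuity ever happens,
# 2%  = constant per-year hazard rate conditional on it happening eventually.

def p_discontinuity_by(years, p_ever=0.30, hazard_per_year=0.02):
    """Unconditional probability of a discontinuity within `years` years."""
    p_by_then_given_ever = 1 - (1 - hazard_per_year) ** years
    return p_ever * p_by_then_given_ever

for horizon in (5, 10, 20, 50):
    print(f"within {horizon:>2} years: {p_discontinuity_by(horizon):.1%}")
# within  5 years: 2.9%
# within 10 years: 5.5%
# within 20 years: 10.0%
# within 50 years: 19.1%
```

The point is not that this toy calculation is right, but that it exposes adjustable numbers a skeptic can argue with directly.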

Other great examples include Tom Davidson’s compute-centric model [LW · GW], Roodman's “materialist” model, and Joe Carlsmith’s six ingredients model [LW · GW]. These models are full of prose, yet unlike pure reasoning, they have facts you can substitute and numbers you can adjust that directly change the conclusion.

I bet that if the FRI adversarial collaborators had drawn from Sempere’s, Davidson’s, Roodman’s, or Carlsmith’s models, they would have converged more. A quick ctrl+F of the 150-page FRI report shows only two such references, both to Davidson’s... appearance on a podcast! The 2022 GJ report used the Carlsmith model to generate the questions, but it appears that none of the superforecasters appealed to models of any kind, not even Epoch data, in their forecasts.

This goes a long way towards explaining the vast gulf between superforecasters and AI researchers on AGI forecasts. The FRI effort was a true adversarial collaboration, yet as Scott wrote, “After 80 hours, the skeptical superforecasters increased their probability of existential risk from AI! All the way from 0.1% to . . . 0.12%.”

<...>

If other orgs and platforms join us and FRI in putting more emphasis on rationales, we’ll see more mainstream adoption of the conclusions we draw.

2 comments

comment by Seth Herd · 2024-04-03T16:04:22.051Z · LW(p) · GW(p)

I think a major issue is that the people who would be best at predicting AGI usually don't want to share their rationale.

Gears-level models of the phenomenon in question are highly useful in making accurate predictions. Those with the best models are either worriers who don't want to advance timelines, or enthusiasts who want to build it first. Neither has an incentive to convince the world it's coming soon by sharing exactly how that might happen.

The exceptions are people who have really thought about how to get from AI to AGI, but are not in the leading orgs and are either uninterested in racing or want to attract funding and attention for their approach. Yann LeCun comes to mind.

Imagine trying to predict the advent of heavier-than-air flight without studying either birds or mechanical engineering. You'd get predictions like the ones we saw historically: so wild as to be worthless, except for those from the people actually trying to achieve that goal.

Replies from: dschwarz
comment by dschwarz · 2024-04-04T14:01:02.361Z · LW(p) · GW(p)

(Responded to the version of this on the EA Forum post.)