Posts

The longest training run 2022-08-17T17:18:40.387Z
Announcing Epoch: A research organization investigating the road to Transformative AI 2022-06-27T13:55:51.451Z
Grokking “Semi-informative priors over AI timelines” 2022-06-12T22:17:06.207Z
Grokking “Forecasting TAI with biological anchors” 2022-06-06T18:58:31.604Z
Compute Trends — Comparison to OpenAI’s AI and Compute 2022-03-12T18:09:55.039Z
Compute Trends Across Three Eras of Machine Learning 2022-02-16T14:18:30.406Z
Estimating training compute of Deep Learning models 2022-01-20T16:12:43.497Z
What role should evolutionary analogies play in understanding AI takeoff speeds? 2021-12-11T01:19:09.686Z

Comments

Comment by anson.ho on Grokking “Semi-informative priors over AI timelines” · 2022-06-13T13:03:16.783Z

To make sure I'm understanding you correctly, do you think the largest problem comes from (1) thinking of AGI development as a sequence of Bernoulli trials, or (2) each Bernoulli trial having constant probability, or (3) both?

It's not obvious to me that (1) is hugely problematic - isn't Laplace's rule of succession commonly applied to forecasting previously unseen events? Are you perhaps arguing that there's something particular to AGI development such that thinking of it as a series of Bernoulli trials is completely invalid?
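To make the Bernoulli-trial framing concrete: Laplace's rule of succession says that after s successes in n trials, the probability of success on the next trial is (s + 1) / (n + 2). A minimal sketch, assuming purely for illustration that each calendar year of AI research since 1956 counts as one trial with no successes so far:

```python
def laplace_next_trial_probability(successes: int, trials: int) -> float:
    """Laplace's rule of succession: P(success on next trial) = (s + 1) / (n + 2)."""
    return (successes + 1) / (trials + 2)

# Illustrative only: treat each calendar year since 1956 as one failed trial.
years_of_research = 2022 - 1956  # 66 "trials", 0 "successes"
print(laplace_next_trial_probability(0, years_of_research))  # ≈ 0.0147 per year
```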

I'm more sympathetic to your criticism of (2), but I'll note that Davidson relaxes this assumption in his model extensions, and further argues (in Appendix 12) that its effect is actually pretty small - most of the model's weight is carried by the first-trial probability and the reference classes used to generate it.
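For intuition on that last point, here's a rough sketch - not Davidson's actual implementation - assuming the generalized rule of succession takes the Beta-prior form where, after n failed trials, P(success on the next trial) = 1 / (n + 1/ftp), with ftp the first-trial probability (the year-per-trial framing and the hypothetical p_agi_by helper are mine). Varying ftp moves the bottom line far more than the details of the trial structure:

```python
def p_agi_by(year: int, ftp: float, start_year: int = 1956, current_year: int = 2022) -> float:
    """P(at least one success by `year`), conditioned on no success up to `current_year`,
    under a generalized rule of succession with first-trial probability `ftp`."""
    p_no_success = 1.0
    for n in range(current_year - start_year, year - start_year):
        p_next = 1.0 / (n + 1.0 / ftp)  # success probability after n observed failures
        p_no_success *= 1.0 - p_next
    return 1.0 - p_no_success

# Sensitivity to the first-trial probability dominates the forecast:
for denom in (100, 300, 1000):
    print(f"ftp = 1/{denom}: P(AGI by 2100) ≈ {p_agi_by(2100, 1 / denom):.2f}")
# ftp = 1/100  → ≈ 0.32;  ftp = 1/300 → ≈ 0.18;  ftp = 1/1000 → ≈ 0.07
```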