Was Eliezer Yudkowsky right to give himself a 10% chance of succeeding with HPMoR in 2010?
post by momom2 (amaury-lorin) · 2022-06-14T07:00:29.955Z · LW · GW
This is a question post.
In Hero Licensing, Eliezer Yudkowsky states that in 2010 he would have given himself a 10% chance of HPMoR being very successful.
He then goes on to explain why he thought that instead of something lower, but I don't understand why he thought that instead of something higher: given that HPMoR did end up successful, it looks like it actually had a higher than 10% chance of happening. Or maybe, by coincidence, I'm in one of the 1-in-10 worlds where it ended up successful? How can I tell?
If I try to use Bayes' theorem: let's call A "HPMoR is successful in 2022" and B "in 2010, there was at most a 10% chance that HPMoR would be successful".
I want to update B based on A: P(B|A) = P(B)P(A|B)/P(A)
P(A|B) = 10%, P(A) = ~1
So the hypothesis that EY's 2010 prediction was correct now appears 1/10 as likely as it did then.
Is this calculation correct?
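As a sanity check, the same update can be run numerically. This is a minimal sketch under the assumptions above (P(A|B) = 10%, P(A) ≈ 1); the prior P(B) = 0.5 is a made-up placeholder, since only the ratio of posterior to prior matters here:

```python
# Bayes' theorem: P(B|A) = P(B) * P(A|B) / P(A)
# A = "HPMoR is successful in 2022", B = "the 10% estimate was correct"

p_B = 0.5            # hypothetical prior for B (placeholder value)
p_A_given_B = 0.10   # if B holds, success had at most a 10% chance
p_A = 1.0            # success was observed, so P(A) is taken as ~1

p_B_given_A = p_B * p_A_given_B / p_A
print(p_B_given_A)        # 0.05
print(p_B_given_A / p_B)  # 0.1: the posterior is 1/10 of the prior
```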
Obviously, I'm not that interested in this particular result. In general, how do I improve my own prediction-making based on evidence?
Answers
The issue is that probabilities for something that will either happen or not don't really make sense in a literal frequentist way (any single macro-scale event has a ~0% or ~100% chance of happening).
I think when EY says he had a 10% chance of HPMoR being successful, the claim should be taken in the context of calibration, not as a claim that he could actually attempt it 10 times and then see how often he succeeds:
https://www.lesswrong.com/tag/calibration
To see if it's accurate, you'd need to take some other predictions in his 10% probability bucket, find out what fraction of them came true, and then see how far that fraction is from 10%. I'm not sure if EY does this, but you can see an example from Scott here: https://slatestarcodex.com/2020/04/08/2019-predictions-calibration-results/
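For a rough idea of what such a check looks like, here's a minimal sketch with made-up prediction data: group predictions by their stated probability and compare each bucket's empirical frequency against that stated value (a well-calibrated 10% bucket should come out near 10%):

```python
from collections import defaultdict

# Hypothetical (stated probability, did it happen?) pairs
predictions = [
    (0.1, False), (0.1, False), (0.1, True), (0.1, False), (0.1, False),
    (0.5, True), (0.5, False), (0.5, True), (0.5, False),
    (0.9, True), (0.9, True), (0.9, False), (0.9, True),
]

# Group outcomes by stated probability
buckets = defaultdict(list)
for p, happened in predictions:
    buckets[p].append(happened)

# Compare each bucket's empirical frequency to its stated probability
for p in sorted(buckets):
    outcomes = buckets[p]
    freq = sum(outcomes) / len(outcomes)
    print(f"stated {p:.0%}: happened {freq:.0%} of the time ({len(outcomes)} predictions)")
```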