Posts

How much does the risk of dying from nuclear war differ within and between countries? 2022-10-11T11:55:34.789Z
In forecasting, how do accuracy, calibration and reliability relate to each other? 2022-09-11T12:04:18.531Z
A bayesian updating on expert opinions 2022-08-30T11:56:19.925Z
How could the universe be infinitely large? 2022-07-13T13:45:23.715Z
Wisdom of the crowds? 2021-09-29T19:14:37.644Z

Comments

Comment by amarai (Maiwaaro23) on In forecasting, how do accuracy, calibration and reliability relate to each other? · 2022-09-21T21:50:46.737Z · LW · GW

Hey, thanks for the answer and sorry for my very late response. In particular thanks for the link to the OpenPhil report, very interesting! To your question - I now changed my mind again and tentatively think that you are right. Here's how I think about it now, but I still feel unsure whether I made a reasoning error somewhere:

There's some distribution over your probabilistic judgments that shows how frequently you report a given probability for propositions that turned out to be true. It might show, e.g., that for true propositions you report 90% probability in 10% of all your probability judgments. This is consistent with perfect calibration as long as, for false propositions, you report 90% in (10/9)% of all your probability judgments (assuming, for simplicity, that half of all the propositions you judge are true). Then it would still be the case that 90% of your 90% probability judgments turn out to be true, and hence you are perfectly calibrated at 90%.

So, given these assumptions, what would the Bayes factor for your 90% judgement in "rain today" be? 
P(you give rain 90% | rain) should be 10%, since I'm effectively sampling your judgment at random from the distribution of probabilities you report for true propositions, in which 90% occurs 10% of the time. For the same reason, P(you give rain 90% | no rain) = (10/9)%. Therefore, the Bayes factor is 10% / (10/9)% = 9.
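The arithmetic above can be checked directly. This is my own sketch, not part of the original comment, and it assumes the 50/50 base rate of true vs. false propositions used in the derivation:

```python
# Forecaster's reporting behavior, as assumed in the argument above:
p_report_given_true = 0.10        # says "90%" on 10% of true propositions
p_report_given_false = 0.10 / 9   # says "90%" on (10/9)% of false propositions

# Bayes factor for a single "90%" report on "rain today".
bayes_factor = p_report_given_true / p_report_given_false  # ~9

# With prior odds of 1 (the 50% base rate), posterior odds = 1 * bayes_factor,
# so P(rain | you say "90%") = BF / (BF + 1) ~ 0.9, i.e. perfectly
# calibrated at the 90% level, exactly as the derivation claims.
posterior = bayes_factor / (bayes_factor + 1)

print(bayes_factor, posterior)
```

So a Bayes factor of 9, combined with even prior odds, reproduces the 90% calibration figure; the two quantities are consistent rather than redundant.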

I suspect that my explanation is overly complicated; feel free to point out more elegant ones :)

Comment by amarai (Maiwaaro23) on Cognitive Science/Psychology As a Neglected Approach to AI Safety · 2022-09-20T16:17:02.692Z · LW · GW

Sort of late to the party, but I'd like to note for any aspiring cognitive science student browsing the archives that I doubt this comment is accurate. I'm studying cognitive science and, in practice, because of the flexibility we have and because cogsci has maths/CS as constituent disciplines, this largely means taking maths, AI, or computer science courses (largely the same courses that people from those fields take). These disciplines make up >60% of my studies. Of course, I'm behind people who focus on maths or CS exclusively in terms of maths and CS, but I don't see a good reason to think that we lack the ability to think with enough precision and rigor to contribute to AI safety. Prove me wrong :)

Comment by amarai (Maiwaaro23) on How could the universe be infinitely large? · 2022-07-13T14:36:12.060Z · LW · GW

Do you know how common a position this is among cosmologists?

Comment by amarai (Maiwaaro23) on How could the universe be infinitely large? · 2022-07-13T14:32:44.041Z · LW · GW

Thanks! The StackExchange discussion is actually very good!

Comment by amarai (Maiwaaro23) on The Long Long Covid Post · 2022-02-12T22:40:58.404Z · LW · GW

Thanks a lot for the post! I'd be curious to hear if people here significantly disagree with your conclusion that "One can act as if serious Long Covid will occur in ~0.2% of boosted Covid cases."? If so, on what grounds?