Comments

Comment by Mirza Herdic (mirza-herdic) on Pascal's Mugging: Tiny Probabilities of Vast Utilities · 2023-04-02T20:15:07.841Z · LW · GW

I think we should expect it to be extremely unlikely for an agent to have both the power and the willingness to kill 3^^^^3 people. The short explanation: any agent that gathers "power" increases its ability to influence the state of the universe, and as its power grows, the number of possible states it can successfully attain scales like the factorial of that power. If we denote power by x, the number of attainable states is proportional to x!. The expected utility of the threat (the disutility of the threat coming true times the probability of the threat being carried out) then becomes proportional to -x * (1 / x!) = -1 / (x - 1)!. From this we can see that the greater the threat, the smaller the absolute magnitude of its expected disutility, and we can conclude that the expected utility of the threat tends to 0 for sufficiently large x.
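
To make the limit concrete, here is a minimal numerical sketch of that claim; the linear scaling of the threatened disutility with x and the 1/x! probability are the assumptions of my argument above, not derived facts:

```python
# Sketch of the claim above: if the threatened disutility grows like x while
# the probability of the threat being carried out falls like 1/x!, then the
# expected disutility magnitude x * (1/x!) = 1/(x-1)! shrinks rapidly.
from math import factorial

for x in range(1, 11):
    expected_disutility = x / factorial(x)   # magnitude of -x * (1 / x!)
    print(f"power x = {x:2d}: expected disutility magnitude = {expected_disutility:.2e}")
```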

As for why the number of attainable states is factorial: as an agent's power over space, time, matter, and energy grows, it can arrange them into more and more distinct configurations, and there are roughly x! such arrangements, where x is the number of parameters it is working with. This is not mathematically rigorous, since this is a blog comment and I am not a mathematician, but I think the intuition should be clear to anyone thinking about this problem.
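
As a toy illustration of that counting (the "parameters" here are just abstract labels I made up for the example):

```python
# Toy illustration: with x distinguishable "parameters", the number of ways to
# arrange them in a sequence is x!, which grows extremely fast with x.
from itertools import permutations
from math import factorial

parameters = ["a", "b", "c", "d"]                      # x = 4 abstract parameters
arrangements = list(permutations(parameters))
print(len(arrangements), factorial(len(parameters)))   # 24 24
```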

Comment by Mirza Herdic (mirza-herdic) on We Change Our Minds Less Often Than We Think · 2023-02-01T06:48:59.650Z · LW · GW

I would say that the study by Griffin and Tversky is incomplete. The way I see it, we have an inner "scale" for the strength of evidence and decide based on that. As was pointed out in one of the previous posts, we should bet on an event 100% of the time if that event is more likely than the alternatives. Something similar is happening here: if we are more than 50% sure that job A is better than job B, we should pick job A. Given that the participants were 66% sure, there is a low a priori probability of them changing their minds. If we assume a normal distribution for this evidence "scale" in our brains, a reversal becomes roughly a 2-sigma event, and therefore very unlikely.
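
For what the tails of a normal distribution actually look like, here is a small sketch using Python's standard library; identifying a change of mind with a 2-sigma deviation is my modelling assumption, not something from the study:

```python
# The "2 sigma event" intuition: under a normal model of the evidence "scale",
# the probability of landing beyond 2 standard deviations is a few percent,
# the same order of magnitude as the 4% of participants who changed their minds.
from statistics import NormalDist

z = NormalDist()                      # standard normal distribution
one_sided = 1 - z.cdf(2)              # P(Z > 2)   ~ 0.023
two_sided = 2 * (1 - z.cdf(2))        # P(|Z| > 2) ~ 0.046
print(f"one-sided 2-sigma tail: {one_sided:.3f}")
print(f"two-sided 2-sigma tail: {two_sided:.3f}")
```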

If my hypothesis is correct, then in a new study in which the participants are a priori only 50% sure about which job to choose, they should change their minds far more often than the 4% in this example. We do seem to have a mechanism in our minds that makes us stick to our decisions, especially when we are only around 50% sure, which stops us from constantly changing our minds and behaving erratically; so I would hypothesize that such participants would not change their minds a full 50% of the time, but probably somewhere in the region of 40% to 50%. I would also expect that when participants are 80% or 90% sure a priori, they would still change their minds in maybe 1% or 2% of cases, because we are usually more sure of our answers than we should be.

All in all, I think it is perfectly rational, if you are 66% sure about something, to go with that decision in 90% to 99% of cases. Being 80% sure about something does not mean you should choose the alternative 20% of the time.
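
As a toy check of that decision rule (the 66%/34% split is just an illustrative assumption matching the example above):

```python
# Compare "always pick the option you believe is more likely to be better"
# against probability matching (picking each option in proportion to your
# confidence), assuming a 66% chance that job A really is the better job.
p_a_better = 0.66

p_correct_always_a = p_a_better                            # 0.66
p_correct_matching = p_a_better**2 + (1 - p_a_better)**2   # ~0.55

print(f"P(correct) always picking A:     {p_correct_always_a:.2f}")
print(f"P(correct) probability matching: {p_correct_matching:.2f}")
```

Under this simple model, deterministically picking the favoured option beats randomizing in proportion to your confidence, which is the sense in which being 80% sure should not translate into choosing the alternative 20% of the time.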