But the main (if not only) argument you make for many-worlds in that post and the others is the ridiculousness of collapse postulates. Now I'm not disagreeing with you; collapses would defy a great deal of convention (causality, relativity, CPT symmetry, etc.), but even with 100% confidence in this (as a hypothetical), you still wouldn't be justified in assigning 99+% confidence to many-worlds. There exist single-world interpretations without a collapse, against which you haven't presented any arguments. Bohmian mechanics would seem to be the most plausible of these (given the LW census). Do you still assign <1% likelihood to this interpretation, and if so, why?
My alternate self very much does exist
Given that many-worlds is true, yes. Invoking it rather defeats the purpose of the decision theory problem, though, as it is meant as a test of reflective consistency (i.e., you are supposed to assume you prefer $100 > $0 in this world, regardless of any other worlds).
In the meantime, there are strong metaphysical reasons (Occam's razor) to trust MWI over Copenhagen.
Indeed there are, but this is not the same as strong metaphysical reasons to trust MWI over all alternative explanations. In particular, EY argued quite forcefully (and rightly so) that collapse postulates are absurd, as they would be the only "nonlinear, non-CPT-symmetric, acausal, FTL, discontinuous..." part of all physics. He then argued that since all single-world QM interpretations are absurd (a non sequitur on his part, as not all single-world interpretations involve a collapse), many-worlds wins as the only multi-world interpretation (which is also slightly inaccurate, not that many-minds is taken that seriously around here). Ultimately, I feel that LW assigns too high a prior to MWI (and too low a prior to Bohmian mechanics).
Would you take the other side of my bet; having limitless resources, or a FAI, or something, would you be willing to bet losing it in exchange for a value roughly equal to that of a penny right now? In fact, you ought to be willing to risk losing it for no gain - you'd be indifferent on the bet, and you get free signaling from it.
Indeed, I would bet the world (or many worlds) that (A→A) to win a penny, or even to win nothing but reinforced signaling. In fact, refusal to use 1 and 0 as probabilities can lead to being money-pumped (or at least exploited; I may be misusing the term "money-pump"). Let's say you assign a 1/10^100 probability that your mind has a critical logic error of some sort, causing you to bound probabilities to the range [1/10^100, 1 - 1/10^100]. You can now be Pascal's mugged if the payoff offered is greater than the amount asked for by a factor of at least 10^100. If you claim the probability is less than 1/10^100 due to a leverage penalty or any other reason, you are admitting that your brain is capable of being more certain than the aforementioned number (and such a scenario can be set up for any such number).
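The exploit can be sketched numerically. This is just an illustration of the arithmetic; the specific cost and payoff figures are made up, and exact rationals are used to avoid floating-point underflow at these magnitudes:

```python
# Sketch of the bounded-probability exploit: an agent that refuses to
# assign any probability below a floor p_min computes a positive expected
# value for any mugging whose promised payoff exceeds cost / p_min.
from fractions import Fraction

p_min = Fraction(1, 10**100)      # smallest probability the agent allows
cost = Fraction(5)                # what the mugger demands
payoff = Fraction(6) * 10**100    # promised payoff, more than 10**100 times the cost

# The bounded agent cannot assign the mugger's claim a probability
# below p_min, so the bet looks profitable in expectation.
expected_value = p_min * payoff - cost
print(expected_value)  # 1 — positive, so the bounded agent "should" pay
```

Any leverage penalty that drives the effective probability below p_min contradicts the bound, which is the point made above.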
So by that logic I should assign a nonzero probability to ¬(A→A). And if something has nonzero probability, you should bet on it if the payout is sufficiently high. Would you bet any amount of money or utilons at any odds on this proposition? If not, then I don't believe you truly believe 100% certainty is impossible. Also, 100% certainty can't be impossible, because impossibility implies that it is 0% likely, which would be a self-defeating argument. You may find it highly improbable that I can truly be 100% certain. What probability do you assign to me being able to assign 100% probability?
Your brain is capable of making mistakes while asserting that it cannot possibly be making a mistake, and there is no domain on which this does not hold.
I must raise an objection to that last point: there is at least one domain on which it does not hold. For instance, my belief that A→A is easily 100%, and there is no way for this to be a mistake. If you don't believe me, substitute A = "2+2=4". Similarly, I can never be mistaken in saying "something exists", because for me to be mistaken about it, I'd have to exist.
If the goal here is to make a statement to which one can assign probability 1, how about this: something exists. That would be quite difficult to contradict (although it has been done by non-realists).
You seem to be ascribing magical properties to one source of randomness.
Free will is not the same as randomness.
What special 'diversity' is being caused by 'free will' that one couldn't get by, say, cutting back a little bit on DNA repair and error-checking mechanisms? Or by amplifying thermal noise? Or by epileptic fits?
Diversity that each individual agent is free to optimize.
If we assume that being reactive to one's environment is purely advantageous (with no negative effects when taken to the extreme), then yes, it would have died out (theoretically). However, freedom to deviate creates diversity (among possibly other advantageous traits), and over-adaptation to one's environment can cause a species to "put all its eggs in one basket" and eventually go extinct.
Ultimately, I think what this question boils down to is whether to expect "a sample" or "a sample within which we live" (i.e. whether or not the anthropic argument applies). Under MWI, anthropics would be quite likely to hold. On the other hand, if there is only a single world, it would be quite unlikely to hold (as you not living is a possible outcome, whether you could observe it or not). In the former case, we've received no evidence that MAD works. In the latter, however, we have received such evidence.
I propose a variation of fairbot; let's call it two-tiered fairbot (TTF).
If the opponent cooperates iff I cooperate, cooperate.
Else, if the opponent cooperates iff (I cooperate iff the opponent cooperates), check whether the opponent cooperates, and cooperate iff he/she does.*
Else, defect.
It seems to cooperate against any "reasonable" agents, as well as itself (unless there's something I'm missing) while defecting against cooperatebot. Any thoughts?
*As determined by proof check.
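The first tier can be sketched in Python. This is a toy model, not a faithful implementation: real open-source prisoner's-dilemma agents decide via proof search, which is beyond a short example, so here each simple bot is a function from my move to its move, and the proof check is replaced by testing the function on both inputs. The second tier (and TTF playing itself) is not modelled, and all names are my own:

```python
# Toy model of the first tier of two-tiered fairbot (TTF).
C, D = "C", "D"

def cooperate_bot(my_move):
    """Cooperates unconditionally."""
    return C

def defect_bot(my_move):
    """Defects unconditionally."""
    return D

def fair_bot(my_move):
    """Cooperates iff I cooperate."""
    return my_move

def mirrors_me(bot):
    """Stand-in for proving 'the opponent cooperates iff I cooperate':
    the bot returns C on C and D on D."""
    return bot(C) == C and bot(D) == D

def ttf(opponent):
    """TTF's first tier against a known opponent function."""
    if mirrors_me(opponent):
        return C  # cooperate with fairbot-like opponents
    return D      # otherwise defect (in the full TTF, tier two runs first)

print(ttf(fair_bot))       # C
print(ttf(cooperate_bot))  # D — exploits cooperatebot, as intended
print(ttf(defect_bot))     # D
```

Even in this simplified form, the defection against cooperatebot falls out of the mirror test: a bot that cooperates regardless of my move fails "cooperates iff I cooperate".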