Omega and self-fulfilling prophecies

post by Richard_Kennaway · 2011-03-19T17:23:09.303Z · LW · GW · Legacy · 19 comments

Omega appears to you in a puff of logic and presents you with a closed box. "If you open this box you will find either nothing or a million dollars," Omega tells you, "and the contents will be yours to keep." "Great," you say, taking the box, "sounds like I can't lose!" "Not so fast," says Omega. "To get that possible million dollars you have to be in the right frame of mind. If you are at least 99% confident that there's a million dollars in the box, there will be. If you're less confident than that, it will be empty. I'm not predicting the state of your mind in advance this time; I'm reading it directly, and teleporting the money in only if you have enough faith that it will be there. Take as long as you like."

Assume you believe Omega. Can you believe the million dollars will be there, strongly enough that it will be?

 

19 comments


comment by Davorak · 2011-03-20T00:08:47.425Z · LW(p) · GW(p)

Well, I want my million dollars.

So I need to engineer my belief, and to do so I will need to replicate the appearance of the box several times over and borrow a million dollars.

I put the borrowed million dollars in the replicated box and open it several times a day, perhaps for several months or even a few years. After a set amount of time, plus some random variation, a friend will switch out my replicated box and borrowed million for the box Omega gave me. At that point I should have every conscious reason to believe there is a more than 99% chance that there is a million dollars in the box I am opening.

Pay off the interest (if any) on my loan, pay my friend, and then find a productive use for the rest of my money.
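One crude way to see why a few months of openings should be enough, assuming (purely for illustration, this model is not in the original comment) Laplace's rule of succession as the model of how confidence grows with repeated successful openings:

```python
# Illustrative only: model confidence growth with Laplace's rule of
# succession, where after n consecutive openings that revealed the money,
# the estimated probability that the next opening reveals it is
# (n + 1) / (n + 2).

def confidence_after(n_successes: int) -> float:
    """Estimated probability that the next box opening reveals the money."""
    return (n_successes + 1) / (n_successes + 2)

# How many consecutive successful openings until confidence reaches 99%?
n = 0
while confidence_after(n) < 0.99:
    n += 1
print(n, confidence_after(n))  # 98 openings -> 0.99
```

By that (very rough) estimate, fewer than a hundred successful openings already reach the threshold; several openings a day for a few months gives many times that.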

comment by Vladimir_Nesov · 2011-03-19T18:57:29.376Z · LW(p) · GW(p)

There seems to be no reason to consider this special case differently from a thought experiment where Omega tells you that it'll put the money in the box if you believe (strongly enough) that the box will never contain the money. This is a case where the heuristic of correctness is wrong, and the whole agent's decision problem tells the belief-symbol to assume a state that is contrary to the heuristic of correctness that normally controls it, because that state is better, even if it's less correct.

Edit: Explained in more detail below.

Replies from: jimmy
comment by jimmy · 2011-03-19T20:22:09.048Z · LW(p) · GW(p)

It seems different to me. The difference is that your beliefs come out accurate. It's easier for people (and less complicated in general intelligences) to force beliefs they know are accurate than beliefs they know are false. I can make my hand somewhat numb by believing that my belief will make it numb through some neural pathway. I can't make it numb by believing God will make it numb. Parts of me that I can't control say "But that's not true!"

Also, if you are engaging in self-deception you have to work hard to quarantine the problem and correct it after the self-delusion is no longer necessary. On top of that, if your official belief node is set one way, but the rest of you is set up to flip it back once Omega goes away, who's to say Omega won't decide to ignore your Official Belief node in favor of the fact that you're ready to change it?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-03-19T21:39:31.452Z · LW(p) · GW(p)

If you believe that you'll win a lottery, and you win, it doesn't make your belief correct. Correctness comes from lawful reasoning, not isolated events of conclusions coinciding with reality. A correct belief is one that the heuristics of truth-seeking should assign.

Here, the agent doesn't have reasons to expect that the box will likely contain money; it can't know that apart from deciding it to be so, and the deciding is performed by other heuristics. Of course, after the other heuristics have decided, the truth-seeking heuristic would agree that there are reasons to expect the box to contain the money, but that's not what causes the money to be brought into existence; it's a separate event of observing that this fact has been decided by the other heuristics.

First, there is a decision to assign a belief (that money will be in the box) for reasons other than it being true, which is contrary to heuristics of correctness. Second, with that decision in place, there are now reasons to believe that money will be in the box, so the heuristic of correctness can back the belief-assignment, but would have no effect on the fact it's based upon.

So the coincidence here is not significant to the process of controlling content of the box. Rather, it's an incorrect post-hoc explanation of what has happened. The variant where you are required to instead believe that the box will always be empty removes this equivocation and clarifies the situation.

Replies from: jimmy
comment by jimmy · 2011-03-20T18:57:04.731Z · LW(p) · GW(p)

> If you believe that you'll win a lottery, and you win, it doesn't make your belief correct. Correctness comes from lawful reasoning, not isolated events of conclusions coinciding with reality. A correct belief is one that the heuristics of truth-seeking should assign.

It's semantics, but in common usage the "correct belief to have given evidence X" is different from the belief "turning out to be correct", and I think it's important to have a good word for the latter.

Either way, I said "accurate" and was referring to it matching the territory, not to how it was generated.

> First, there is a decision to assign a belief (that money will be in the box) for reasons other than it being true, which is contrary to heuristics of correctness. Second, with that decision in place, there are now reasons to believe that money will be in the box, so the heuristic of correctness can back the belief-assignment, but would have no effect on the fact it's based upon.

That's one way to do it, in which case it is pretty similar to forging beliefs that don't end up matching the territory. The only difference is that you're 'coincidentally' right.

It's also possible to use your normal heuristics to determine what would happen conditional on holding different beliefs (which you have not yet formed). After finding the set of beliefs that are 'correct', choose the most favorable correct belief. This way there is never any step where you choose something that is predictably incorrect. And you don't have to be ready to deal with polluted belief networks.

Am I missing something?
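A minimal sketch of that procedure, assuming the deterministic 99% rule from the post and a hypothetical handful of candidate confidence levels: compute the outcome conditional on each belief, keep the beliefs whose content matches that outcome, and take the best of those.

```python
# A sketch (not from the original comment) of "choose the most favorable
# correct belief" under the post's deterministic 99% rule. Candidate
# confidence levels are illustrative.

def box_is_full(confidence: float) -> bool:
    """Omega's rule: the money appears iff confidence is at least 99%."""
    return confidence >= 0.99

def is_correct(confidence: float) -> bool:
    # A belief counts as correct if it matches what actually happens
    # conditional on holding it. The rule is deterministic, so only
    # confidences of exactly 0 or 1 match the outcome exactly.
    outcome = 1.0 if box_is_full(confidence) else 0.0
    return confidence == outcome

def payoff(confidence: float) -> int:
    return 1_000_000 if box_is_full(confidence) else 0

candidates = [0.0, 0.5, 0.99, 1.0]
correct_beliefs = [c for c in candidates if is_correct(c)]  # [0.0, 1.0]
best = max(correct_beliefs, key=payoff)
print(best, payoff(best))  # 1.0 1000000
```

Both self-fulfilling beliefs (certainty the box is empty, certainty it is full) pass the check; the agent simply adopts the better-paying one, and at no step endorses a belief that is predictably wrong.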

comment by jimmy · 2011-03-19T20:10:24.071Z · LW(p) · GW(p)

How exactly is your "belief" probability determined? This seems like one of the cases where lumping everything together into one "belief" does not work.

If you ask for verbal declarations or betting odds, the belief will come out over 99% even if it doesn't feel right.

The subjective anticipation is a different part of your brain, and it doesn't talk in probabilities. You'd need some sort of rough correspondence between strength of anticipation and observed frequencies, or something, and I'm not sure how well even that would work.

Like James Miller said, this is a lot like the placebo effect when you're told that it's only the placebo effect. It seems like people can make that work, but there are complications (say it as if they won't believe in it strongly enough for it to work, or as if they'd be fooling themselves, and it likely won't work; say it as if it will work and it's virtuous for it to work, and it probably will).

Replies from: PlaidX
comment by PlaidX · 2011-03-21T00:29:48.405Z · LW(p) · GW(p)

I came to the comments section to make this exact post.

comment by ata · 2011-03-19T18:14:51.525Z · LW(p) · GW(p)

I came up with a version of this a while ago (can't remember if I posted it) where Omega is going to (possibly) put a diamond in a box: it has predicted the probability with which you expect there to be one, and it then uses that probability to decide pseudorandomly whether to put the diamond in the box.

I would say that a self-modifying timeless agent would immediately modify itself to actually anticipate the diamond being there with near-100% probability, open the box, take the diamond, and revert the modification. (Before applying the modification, it would prove that it would revert itself afterwards, of course.) And I would say that a human can't win as easily on this, since for the most part we can't deliberately make ourselves believe things, though we often believe we can. Although this specific type of situation would not literally be self-deception, it would overall be a dangerous ability for a human to permit themselves to use; I don't think I'd want to acquire it, all other things being equal.
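A quick simulation of this variant (illustrative numbers only, not from the original comment): if Omega places the diamond pseudorandomly with probability equal to the agent's anticipated probability p, the rate at which the diamond shows up simply tracks p, so the self-modifying agent does best by pushing its genuine anticipation toward 1.

```python
import random

# Illustrative simulation of the variant above: Omega reads the agent's
# anticipated probability p and places the diamond with probability p.

def omega_places_diamond(p: float, rng: random.Random) -> bool:
    return rng.random() < p

def diamond_rate(p: float, trials: int = 100_000) -> float:
    rng = random.Random(0)
    hits = sum(omega_places_diamond(p, rng) for _ in range(trials))
    return hits / trials

for p in (0.3, 0.9, 0.999):
    print(p, round(diamond_rate(p), 3))  # rates track p: ~0.3, ~0.9, ~0.999
```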


comment by paulfchristiano · 2011-03-19T18:17:46.610Z · LW(p) · GW(p)

If I believe that my belief in something causes it to be true, then I believe it. At least, I tell myself that Löb's theorem can be applied in this way when it's convenient for me.
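For reference, Löb's theorem says that if a system (strong enough to reason about its own provability) proves "if P is provable then P", then it proves P itself; in provability-logic notation, with the box read loosely here as "I believe":

$$\Box(\Box P \to P) \to \Box P$$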

Replies from: Vladimir_Nesov, AlexMennen
comment by Vladimir_Nesov · 2011-03-20T00:15:36.968Z · LW(p) · GW(p)

The problem statement seems to also indicate that if you believe the box to be empty, it will be empty. By the same argument, you'll believe the box to be empty. But you can't have it both ways.

comment by AlexMennen · 2011-03-20T05:39:55.631Z · LW(p) · GW(p)

Your brain != Peano arithmetic.

comment by TheOtherDave · 2011-03-19T22:14:48.369Z · LW(p) · GW(p)

Mm.

So, let me reframe that: could I be willing to spend $989,000 in order to open the box? In any plausible scenario, my answer is "of course not." But assuming that I believe Omega... hm.

No, I don't think so.

comment by jimrandomh · 2011-03-19T20:01:09.740Z · LW(p) · GW(p)

> If you are at least 99% confident that there's a million dollars in the box, there will be. If you're less confident than that, it will be empty.

"Your confidence that there's a million dollars in the box" does not uniquely identify a value, even at a single point in time. The mind has more than one method for representing and calculating likelihoods, some of which are under conscious control and may be overridden for game-theoretic reasons, and some of which aren't.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-03-20T00:22:04.661Z · LW(p) · GW(p)

These are technical issues with the hypothetical, irrelevant to its intended sense.

comment by James_Miller · 2011-03-19T18:03:13.901Z · LW(p) · GW(p)

Similar to the placebo effect.

Replies from: Bobertron
comment by Bobertron · 2011-03-19T18:07:19.486Z · LW(p) · GW(p)

How so?

Replies from: James_Miller
comment by James_Miller · 2011-03-19T18:17:22.323Z · LW(p) · GW(p)

You know that you will receive the benefit only if you believe in the effect.

Replies from: Manfred
comment by Manfred · 2011-03-20T02:03:37.488Z · LW(p) · GW(p)

Probably not true. The placebo effect still works on people told they got a placebo, which is reasonably good evidence that it works whether you consciously believe or not.