The Presumptuous Philosopher's Presumptuous Friend

post by PlaidX · 2009-10-05T05:26:23.736Z · LW · GW · Legacy · 82 comments

One day, you and the presumptuous philosopher are walking along, arguing about the size of the universe, when suddenly Omega jumps out from behind a bush and knocks you both out with a crowbar. While you're unconscious, she builds two hotels, one with a million rooms, and one with just one room. Then she makes a million copies of both of you, sticks them all in rooms, and destroys the originals.

You wake up in a hotel room, in bed with the presumptuous philosopher, with a note on the table from Omega, explaining what she's done.

"Which hotel are we in, I wonder?" you ask.

"The big one, obviously" says the presumptuous philosopher. "Because of anthropic reasoning and all that. Million to one odds."

"Rubbish!" you scream. "Rubbish and poppycock! We're just as likely to be in any hotel omega builds, regardless of the number of observers in that hotel."

"Unless there are no observers, I assume you mean" says the presumptuous philosopher.

"Right, that's a special case where the number of observers in the hotel matters. But except for that it's totally irrelevant!"

"In that case," says the presumptuous philosopher, "I'll make a deal with you. We'll go outside and check, and if we're at the small hotel I'll give you ten bucks. If we're at the big hotel, I'll just smile smugly."

"Hah!" you say. "You just lost an expected five bucks, sucker!"

You run out of the room to find yourself in a huge, ten-thousand-story atrium, filled with throngs of yourselves and smug-looking presumptuous philosophers.
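
To put numbers on the bet, here is a minimal simulation sketch, assuming one copy per room (so 1,000,000 copies wake up in the big hotel and 1 in the small one), comparing the friend's hotel-counting rule with the philosopher's copy-counting rule:

```python
import random

N_BIG, N_SMALL = 1_000_000, 1   # copies per hotel, assuming one copy per room
TRIALS = 200_000

def average_winnings(p_small):
    """Average payout of the ten-bucks-if-small-hotel bet, given the
    probability assigned to waking up in the small hotel."""
    total = 0
    for _ in range(TRIALS):
        if random.random() < p_small:
            total += 10
    return total / TRIALS

# The friend's rule: each hotel is equally likely, regardless of occupants.
print(average_winnings(0.5))                           # ~5 dollars
# The philosopher's rule: each copy is equally likely.
print(average_winnings(N_SMALL / (N_BIG + N_SMALL)))   # ~0.00001 dollars
```

Which sampling rule is the right one is exactly what the rest of the thread argues about.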

82 comments

Comments sorted by top scores.

comment by CannibalSmith · 2009-10-05T07:46:39.612Z · LW(p) · GW(p)

Replies from: CronoDAS, Aurini, Tyrrell_McAllister
comment by CronoDAS · 2009-10-05T09:13:47.153Z · LW(p) · GW(p)

/me is confused by this picture

comment by Aurini · 2009-10-06T00:44:46.609Z · LW(p) · GW(p)

"Well played, clerks... well played." slow clap ~Leonardo Leonardo

comment by Tyrrell_McAllister · 2009-10-05T18:23:24.657Z · LW(p) · GW(p)

Whose face is the smug one?

Replies from: CannibalSmith
comment by CannibalSmith · 2009-10-06T11:23:47.602Z · LW(p) · GW(p)

http://images.google.com/images?q=smug
Second result.

Replies from: None
comment by [deleted] · 2009-10-21T17:36:36.795Z · LW(p) · GW(p)

Or, in case it ever stops being the second result (which, actually, it has): http://rndm.files.wordpress.com/2006/11/smug404.jpg

comment by ata · 2009-10-05T19:37:53.947Z · LW(p) · GW(p)

I don't think this requires anthropic reasoning.

Here is a variation on the story:

One day, you and the presumptuous philosopher are walking along, arguing about the size of the universe, when suddenly Omega jumps out from behind a bush and knocks you both out with a crowbar. While you're unconscious, she builds a hotel with 1,000,001 rooms. Then she makes a million copies of both of you, sticks them all in rooms, and destroys the originals.

You wake up in a hotel room, in bed with the presumptuous philosopher, with a note on the table from Omega, explaining what she's done.

"Which room are we in, I wonder?" you ask.

"Any of them is equally likely," says the presumptuous philosopher. "Because it's bloody obvious and all that. Million to one odds for any given room."

"Rubbish!" you scream. "Rubbish and poppycock! We have a 50% chance of being in room 870,199, and a 50% chance of being in one of the other rooms."

After the presumptuous philosopher stands in baffled silence for a moment, he says, "In that case, I'll make a deal with you. We'll go outside and check, and if we're in room 870,199 I'll give you ten bucks. If we're in one of the other rooms, I'll just smile smugly."

"Hah!" you say. "You just lost an expected five bucks, sucker!"

You run out of the room to find yourself surrounded by throngs of yourselves and smug-looking presumptuous philosophers; you turn around and look at your door, labeled 129,070.

If I'm not mistaken (am I?), this version of the story is exactly isomorphic to PlaidX's original version; the only difference is that it's easier to see why the friend is wrong before you get to the end.

To anyone who agrees with the friend in the original story -- that the most reasonable estimate is that there is an even chance of being in either hotel -- would you disagree that this version is isomorphic to the original?

Replies from: PlaidX
comment by PlaidX · 2009-10-05T19:53:13.608Z · LW(p) · GW(p)

I thought of this, but then, in the other direction, is the problem non-isomorphic to the original presumptuous philosopher problem? If so, why?

Is it because I used hotels instead of universes? Is it because the existence of both hotels has probability 100% instead of probability 50%? Is it some other thing?

Replies from: Nubulous
comment by Nubulous · 2009-10-06T07:15:49.378Z · LW(p) · GW(p)

The most obvious difference is that the original problem involved the smaller or the larger set of people whereas this one uses the smaller and the larger.

Replies from: PlaidX
comment by PlaidX · 2009-10-06T08:52:10.463Z · LW(p) · GW(p)

Ah, so the difference isn't that I used hotels instead of universes, it's that I used hotels instead of POSSIBLE hotels. In other words, your likelihood of being in a hotel depends on the number of "you"s in the hotel, but your likelihood of being in a possible hotel does not, is that what you're saying?

Unless the number of "you"s is zero. Then it clearly does depend on the number. Isn't this just packing and unpacking?

Replies from: Nubulous
comment by Nubulous · 2009-10-06T13:08:52.541Z · LW(p) · GW(p)

You're reading a little more into what I said than was actually there. I was just remarking on the change of dependence between the parts of the problem, without having thought through what the consequences would be.

Now that I have thought it through, I agree with the presumptuous philosopher in this case. However, I don't agree with him about the size of the universe. The difference is that in the hotel case we want a subjective probability, whereas in the universe case we want an objective one. Subjectively, there's a very high probability of finding yourself in a big universe/hotel. But subjective probabilities are over subjective universes, and there are very, very many subjective large universes for the one objective large universe, so a very high subjective probability of finding yourself in a large universe doesn't imply a large objective probability of being found in one.

Replies from: PlaidX
comment by PlaidX · 2009-10-06T22:35:54.708Z · LW(p) · GW(p)

I don't understand what you mean by subjective and objective probabilities. Would you still agree with the philosopher in my problem if Omega flipped a coin (or looked at binary digit 5000 of pi) and then built the small hotel OR the big hotel?

Replies from: Nubulous
comment by Nubulous · 2009-10-08T00:43:05.574Z · LW(p) · GW(p)

I don't know what I meant either. I remember it making perfect sense at the time, but that was after 35 hours without sleep, so.....

The answer to the second part is no, I would expect a 50:50 chance in that case.
In case you were thinking of this as a counterexample, I also expect a 50:50 chance in all the cases there from B onwards. The claim that the probabilities are unchanged by the coin toss is wrong, since the coin toss changes the number of participants, and we already accepted that the number of participants was a factor in the probability when we assigned the 99% probability in the first place.

Replies from: PlaidX
comment by PlaidX · 2009-10-08T03:08:37.112Z · LW(p) · GW(p)

So, if Omega picks a number from 1 to 3, and depending on the result makes:

A. a hotel with a million rooms

B. a hotel with one room

C. a pile of flaming tires

you'd say that a person has a 50% chance each of finding themselves in situation A or B, but a 0% chance of being in C?

Why does the number of people only matter when the number of people is zero? Doesn't that strike you as suspicious?

Replies from: Nubulous
comment by Nubulous · 2009-10-10T23:39:22.587Z · LW(p) · GW(p)

When we speak of a subjective probability in a person-multiplying experiment such as this, we (or at least, I) mean "The outcome ratio experienced by a person who was randomly chosen from the resulting population of the experiment, then was used as the seed for an identical experiment, then was randomly chosen from the resulting population, then was used as the seed.... and so forth, ad infinitum".

I'm not confident that we can speak of having probabilities in problems which can't in theory be cast in this form.

In other words, the probability is along a path. When you look at the problem this way, it throws some light on why there are two different arguable values for the probability. If you look back along the path, ("what ratio will our person have experienced") the answer in your experiment is 1000000:1. If you look forward along the path, ("what ratio will our person experience") the answer is 1:1 (in the flaming-tires case there's no path, so there's no probability).
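
On one reading of this backward/forward distinction (a rough sketch only, using the coin-flip variant of the experiment), the two ratios can be pulled out of a simulation: count how often each hotel is built per path, versus what fraction of the created observers end up sitting in each hotel.

```python
import random

N_BIG, N_SMALL = 1_000_000, 1   # observers created in the big / small hotel
RUNS = 10_000                   # independent coin-flip experiments

big_runs = 0      # forward view: how often the path goes through the big hotel
big_people = 0    # backward view: how many created observers are in the big hotel
all_people = 0

for _ in range(RUNS):
    if random.random() < 0.5:   # coin toss: build the big hotel this time
        big_runs += 1
        big_people += N_BIG
        all_people += N_BIG
    else:                       # coin toss: build the small hotel this time
        all_people += N_SMALL

print(big_runs / RUNS)          # ~0.5        -> the 1:1 "forward" ratio
print(big_people / all_people)  # ~0.999999   -> the 1000000:1 "backward" ratio
```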

Replies from: PlaidX
comment by PlaidX · 2009-10-11T04:27:29.972Z · LW(p) · GW(p)

But again I must ask, on the going-forward basis, why is the number of people in each world irrelevant? I grant you that the WORLD splits into even thirds, but the people in it don't, they split 1000000 / 1 / 0. Where are you getting 1 / 1 / 0?

Replies from: Nubulous
comment by Nubulous · 2009-10-11T05:13:58.991Z · LW(p) · GW(p)

Because if you agree that the correct way to measure the probability is as the occurrence ratio along the path, the degree of splitting is only significant to the extent that it affects the occurrence ratio, which in this case it doesn't. The coin toss chooses equiprobably which hotel comes next, then it's on to the next coin toss to equiprobably choose which hotel comes next, and so forth. So each path has on average equal numbers of each hotel, going forwards.

Replies from: PlaidX
comment by PlaidX · 2009-10-13T02:51:28.015Z · LW(p) · GW(p)

But you're not a hotel, you're an observer. Why does the number of hotels matter but not the number of observers? If the tire fire is replaced with an empty hotel, you still can't end up in it.

It seems like your function for ending up in a future, based on the number of observers in that future, goes as follows:

If there's zero, the prior likelihood gets multiplied by zero.

If there's one, the prior likelihood gets multiplied by one.

If there's more than one, the prior likelihood still only gets multiplied by one.

This function seems more complicated than just multiplying the prior probability by the number of observers, which is what I do. My reasoning is, even on a going forward basis, if there's a line connecting me to a world with one future self, and no line connecting me to a world without a future self, there must be 14 lines connecting me to a future with 14 future selves.

Is there some reason to prefer your going-forward interpretation over mine, despite the fact that mine is simpler and agrees with the going-backwards perspective?
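
The two weighting rules under dispute can be written out directly for the three-outcome example above (a sketch only; the labels and the one-third prior are just the A/B/C setup from earlier in the thread):

```python
observers = {"A: big hotel": 1_000_000, "B: small hotel": 1, "C: tire fire": 0}
prior = 1 / 3   # Omega picks each of the three outcomes equally often

def normalize(weights):
    total = sum(weights.values())
    return {k: round(w / total, 7) for k, w in weights.items()}

# The step-function rule described above: prior x 0 if empty, prior x 1 otherwise.
step_rule = normalize({k: prior * (1 if n > 0 else 0) for k, n in observers.items()})

# PlaidX's rule: prior x number of observers in that outcome.
count_rule = normalize({k: prior * n for k, n in observers.items()})

print(step_rule)    # A: 0.5,        B: 0.5,     C: 0.0
print(count_rule)   # A: ~0.999999,  B: ~1e-06,  C: 0.0
```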

comment by Chris_Leong · 2018-06-26T13:36:23.528Z · LW(p) · GW(p)

One difference between this and universes is that you can't be in two hotels, but you might be able to exist in two different models of the universe.

comment by wedrifid · 2009-10-05T12:34:14.163Z · LW(p) · GW(p)

You run out of the room to find yourself in a huge, ten-thousand-story atrium, filled with throngs of yourselves and smug-looking presumptuous philosophers.

One of the other copies just got ten bucks; you lost nothing. Nice work bluffing your presumptuous friend and pumping his ego for (a chance at) cash. I just hope you think things through a bit more thoroughly if you have to lay cash on the line. Or that you have good reason to value the outcome of the one copy equally with that of the million in the other hotel.

This is a trivial problem that need not be confusing unless you want to be confused.

ETA: No offence to PlaidX. On similar topics Eliezer has appeared to me to want to be confused!

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2009-10-06T18:06:06.185Z · LW(p) · GW(p)

I wouldn't want to endure a million smug "told you so" smiles for $10. Think dust specks.

Replies from: wedrifid
comment by wedrifid · 2009-10-06T19:28:46.116Z · LW(p) · GW(p)

And miss watching 1,000,000 presumptuous philosophers flummoxed when the only response they get is a look of condescending superiority? I don't think so!

comment by taw · 2009-10-05T05:53:54.094Z · LW(p) · GW(p)

I wonder... could we please use Omega less often unless absolutely required? (and if absolutely required it strongly suggests something is wrong with the story anyway)

Replies from: CannibalSmith, PlaidX, wedrifid
comment by CannibalSmith · 2009-10-05T07:03:58.402Z · LW(p) · GW(p)

Not a chance.

comment by PlaidX · 2009-10-05T06:08:47.621Z · LW(p) · GW(p)

I used Omega because it makes things tidier. I think it's important for a thought experiment to be tidy, but not very important for it to be realistic.

Also it's funny.

Replies from: taw
comment by taw · 2009-10-05T08:18:09.283Z · LW(p) · GW(p)

My problem is that experiments like Newcomb's, in which Omega is used to break causality and which make absolutely no sense, look too similar to experiments like this one, which is really in every way equivalent to "being moved to a random room".

Replies from: Vladimir_Nesov, wedrifid, Jack, PlaidX
comment by Vladimir_Nesov · 2009-10-05T15:46:16.483Z · LW(p) · GW(p)

It doesn't break causality. Newcomb's problem (especially if you move the victim to a deterministic substrate) can very well be set up in the real world. It just can't be currently done because of limitations of technology.

Replies from: SilasBarta
comment by SilasBarta · 2009-10-05T18:12:01.290Z · LW(p) · GW(p)

Well, what do you mean by "setting it up in the real world"? There are certainly versions that can be done on computer (and I'm not sure if you were counting these, so don't take this as a criticism).

-Write an algorithm A1 for picking whether to one-box or two-box on the problem.

-Write an algorithm A2 for predicting whether a given algorithm will one-box or two-box, and then fill the box as per Omega.

-Run a program in which A2 acts on A1, and then A1 runs, and find A1's payoff.

Eliezer_Yudkowsky even claimed that this implementation of Newcomb's problem makes it even clearer why you should use Timeless Decision Theory.
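
A minimal sketch of that setup (only the names A1 and A2 come from the comment; the $1,000 / $1,000,000 payoffs are the usual ones, and having the predictor simply simulate the chooser is just one possible implementation):

```python
def a1():
    """A1, the chooser: returns "one" or "two". Swap in any strategy."""
    return "one"

def a2(chooser):
    """A2, the predictor: simulates the chooser and fills the opaque box
    iff it predicts one-boxing."""
    return chooser() == "one"

def payoff(chooser):
    opaque = 1_000_000 if a2(chooser) else 0   # box contents fixed before the choice
    choice = chooser()                         # now the chooser actually chooses
    return opaque if choice == "one" else opaque + 1_000

print(payoff(a1))              # 1000000: the one-boxer's payoff
print(payoff(lambda: "two"))   # 1000: the two-boxer's payoff
```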

comment by wedrifid · 2009-10-05T12:28:18.315Z · LW(p) · GW(p)

Omega doesn't break causality in Newcomb. It is merely a chain of causality which is entirely predictable.

Replies from: taw
comment by taw · 2009-10-05T12:53:07.923Z · LW(p) · GW(p)

Yes it does. It makes a decision in the past that depends on your decision in the future, and your decision in the future can assume Omega has already decided in the past. That's a causality loop.

Newcomb is a completely bogus problem.

Replies from: Jonathan_Graehl, Vladimir_Nesov, wedrifid
comment by Jonathan_Graehl · 2009-10-05T19:09:09.157Z · LW(p) · GW(p)

Is the taw-on-Newcomb downvoting happening because he's speaking against what's considered settled fact?

comment by Vladimir_Nesov · 2009-10-05T16:12:34.525Z · LW(p) · GW(p)

It's only a loop in imaginary Platonia. In the real world, laws of physics don't notice that there's a "loop". One way to see the problem is as a situation that demonstrates failure to adequately account for the real world with the semantics usually employed to think about it.

Replies from: Jonathan_Graehl, Tyrrell_McAllister
comment by Jonathan_Graehl · 2009-10-05T19:09:43.159Z · LW(p) · GW(p)

Too opaque.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-10-05T19:16:35.487Z · LW(p) · GW(p)

Alas, yes. I'm working on that.

comment by Tyrrell_McAllister · 2009-10-05T18:28:37.407Z · LW(p) · GW(p)

If it's a loop in Platonia, then all causation happens in Platonia. If any causation can be said to happen in the real world, then real causation is happening backwards in time in the Newcomb scenario.

But I, for one, have no problem with that. All causal processes observed so far have run in the same temporal direction. But there's no reason to rule out a priori the possibility of exceptions.

ETA: Nor to rule out loops.

Replies from: brianm
comment by brianm · 2009-10-06T12:01:28.350Z · LW(p) · GW(p)

I don't see why Newcomb's paradox breaks causality - it seems more accurate to say that both events are caused by an earlier cause: your predisposition to choose a particular way. Both Omega's prediction and your action are caused by this predisposition, meaning Omega's prediction is merely correlated with, not a cause of, your choice.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2009-10-06T15:11:16.131Z · LW(p) · GW(p)

It's commonplace for an event A to cause an event B, with both sharing a third antecedent cause C. (The bullet's firing causes the prisoner to die, but the finger's pulling of the trigger causes both.) Newcomb's scenario has the added wrinkle that event B also causes event A. Nonetheless, both still have the antecedent cause C that you describe.

All of this only makes sense under the right analysis of causation. In this case, the right analysis is a manipulationist one, such as that given by Judea Pearl.

Replies from: brianm
comment by brianm · 2009-10-07T11:44:21.684Z · LW(p) · GW(p)

Newcomb's scenario has the added wrinkle that event B also causes event A

I don't see how. Omega doesn't make the prediction because you made the action - he makes it because he can predict that a person of a particular mental configuration at time T will make decision A at time T+1. If I were to play the part of Omega, I couldn't achieve perfect prediction, but I might be able to achieve, say, 90% by studying what people say they will do on blogs about Newcomb's paradox, and observing what such people actually do (so long as my decision criteria weren't known to the person I was testing).

Am I violating causality by doing this? Clearly not - my prediction is caused by the blog post and my observations, not by the action. The same thing that causes you to say you'd decide one way is also what causes you to act one way. As I get better and better, nothing changes, nor do I see why something would if I am able to simulate you perfectly, achieving 100% accuracy (some degree of determinism is assumed there, but then it's already in the original thought experiment if we assume literally 100% accuracy).

Assuming I'm understanding it correctly, the same would be true for a manipulationist definition. If we can manipulate your mental state, we'd change both the prediction (assuming Omega factors in this manipulation) and the decision, thus your mental state is a cause of both. However if we could manipulate your action without changing the state that causes it in a way that would affect Omega's prediction, our actions would not change the prediction. In practice, this may be impossible (it requires Omega not to factor in our manipulation, which is contradicted by assuming he is a perfect predictor), but in principle it seems valid.

comment by wedrifid · 2009-10-05T13:00:53.426Z · LW(p) · GW(p)

He makes a prediction based on the nearby state of the universe that you model with an accuracy that approaches 1. If your mathematician can't handle that then find a better mathematician.

I shall continue to find Omega useful.

ETA: The part of the Newcomb problem that is actually hard to explain is that I am somehow confident that Omega is being truthful.

Replies from: taw
comment by taw · 2009-10-05T16:01:19.095Z · LW(p) · GW(p)

Any accuracy better than a random coin toss breaks causality. Prove otherwise if you can, but many have tried before you and all have failed.

Replies from: wedrifid, Jonathan_Graehl
comment by wedrifid · 2009-10-05T18:03:33.397Z · LW(p) · GW(p)

Billions of social encounters involving better than even predictions of similar choices every day make a mockery of this claim.

That is without even invoking a Jupiter brain executing Bayes rule.

comment by Jonathan_Graehl · 2009-10-05T19:21:55.501Z · LW(p) · GW(p)

At first I was going to vote this up to correct the seemingly unfair downvoting against you in this thread, but this particular comment seems both wrong and ill-explained. I'd prefer to have your reasons than your assurances.

Recall that Newcomb's "paradox" has a payout (when Omega is always right) of $1000k for the 1-boxer, and $1k for the 2-boxer. But if Omega is correct with only p=.500001 then I should always 2-box.

I do agree that there is some 1>p>.5 where the idea of Omega having a belief of "what I will choose", that's correct with probability p, is just as troubling as if p=1.
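
For reference, the arithmetic behind that (assuming the standard payoffs implied above: $1,000 in the visible box, $1,000,000 in the opaque box when one-boxing is predicted): the expected values cross at p = 0.5005, so at p = .500001 two-boxing does come out ahead.

```python
def ev_one_box(p):   # Omega right with probability p, so the opaque box holds $1,000,000
    return p * 1_000_000

def ev_two_box(p):   # Omega right with probability p, so the opaque box is empty
    return p * 1_000 + (1 - p) * 1_001_000

for p in (0.500001, 0.5005, 0.99):
    print(p, ev_one_box(p), ev_two_box(p))
# 0.500001 ->  500001 vs ~500499  (two-boxing wins)
# 0.5005   ->  500500 vs  500500  (break-even)
# 0.99     ->  990000 vs   11000  (one-boxing wins)
```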

Replies from: taw
comment by taw · 2009-10-06T10:42:38.749Z · LW(p) · GW(p)

By a trivial argument (of the kind employed in algorithmic complexity analysis and cryptography) - that you can just toss a coin, or do the mental equivalent of it - any guaranteed probability nontrivially >.5, even by a ridiculously small margin, is impossible to achieve. Probability against a random human is entirely irrelevant - what Omega must achieve is probability nontrivially >.5 against the most uncooperative human, as you can choose to be maximally uncooperative if you wish to.

If we force determinism (which is cheating already), disable free will (in the sense of being able to freely choose our answer only at the point where we have to), and let Omega see our brain, it basically means that we have to decide before Omega, and have to tell Omega what we decided, which reverses causality and collapses the problem into "Choose 1 or 2 boxes. Based on your decision Omega chooses what to put in them".

From the linked Wikipedia article:

More recent work has reformulated the problem as a noncooperative game in which players set the conditional distributions in a Bayes net. It is straight-forward to prove that the two strategies for which boxes to choose make mutually inconsistent assumptions for the underlying Bayes net. Depending on which Bayes net one assumes, one can derive either strategy as optimal. In this there is no paradox, only unclear language that hides the fact that one is making two inconsistent assumptions.

Some argue that Newcomb's Problem is a paradox because it leads logically to self-contradiction. Reverse causation is defined into the problem and therefore logically there can be no free will. However, free will is also defined in the problem; otherwise the chooser is not really making a choice.

That's basically it. It's ill-defined, and any serious formalization collapses it into either "you choose first, so one box", or "Omega chooses first, so two box" trivial problems.

Replies from: wedrifid, Jonii, wedrifid, Jonathan_Graehl
comment by wedrifid · 2009-10-06T20:20:47.134Z · LW(p) · GW(p)

Probability against a random human is entirely irrelevant - what Omega must achieve is probability nontrivially >.5 against the most uncooperative human, as you can choose to be maximally uncooperative if you wish to.

The limit of how uncooperative you can be is determined by how much information can be stored in the quarks from which you are constituted. Omega can model these. Your recourse of uncooperativity is for your entire brain to be balanced such that your choice depends on quantum uncertainty. Omega then treats you the same way he treats any other jackass who tries to randomize with a quantum coin.

Replies from: SilasBarta
comment by SilasBarta · 2009-10-06T21:38:49.557Z · LW(p) · GW(p)

Geez! When did flipping a (provably) fair coin when faced with a tough dilemma, start being the sole domain of jackasses?

Replies from: SilasBarta
comment by SilasBarta · 2009-10-06T21:44:13.075Z · LW(p) · GW(p)

Geez! When did questioning the evilness of flipping a fair coin when faced with a tough dilemma, start being a good reason to mod someone down? :-P

Replies from: wedrifid
comment by wedrifid · 2009-10-06T21:56:26.045Z · LW(p) · GW(p)

Don't know. I was planning to just make a jibe at your exclusivity logic (some jackasses do therefore all who do...).

Make that two jibes. Perhaps the votes were actually a cringe response at the comma use. ;)

Replies from: SilasBarta
comment by SilasBarta · 2009-10-06T22:00:29.451Z · LW(p) · GW(p)

Well, you did kinda insinuate that flipping a coin makes you a jackass, which is kind of an extreme reaction to an unconventional approach to Newcomb's problem :-P

Replies from: wedrifid
comment by wedrifid · 2009-10-06T22:25:53.629Z · LW(p) · GW(p)

;) I'd make for a rather harsh Omega. If I was dropping my demi-divine goodies around I'd make it quite clear that if I predicted a randomization I'd booby trap the big box with a custard pie jack-in-a-box trap.

If I was somewhat more patient I'd just apply the natural extension, making the big box reward linearly dependent on the probabilities predicted. Then they can plot a graph of how much money they are wasting per probability they assign to making the stupid choice.

Replies from: SilasBarta
comment by SilasBarta · 2009-10-06T22:40:41.976Z · LW(p) · GW(p)

I'd make for a rather harsh Omega. If I was dropping my demi-divine goodies around I'd make it quite clear that if I predicted a randomization I'd booby trap the big box with a custard pie jack-in-a-box trap.

Wow, they sure are right about that "power corrupts" thing ;-)

Replies from: wedrifid
comment by wedrifid · 2009-10-06T23:38:40.254Z · LW(p) · GW(p)

Power corrupts. Absolute power corrupts... comically?

comment by Jonii · 2009-10-06T22:38:20.408Z · LW(p) · GW(p)

Free will is not incompatible with Omega predicting your actions (well, unless Omega deliberately manipulates your actions based on that predictive power, but that's outside the scope of this paradox), and Omega doesn't even need to see inside your head, just observe your behavior until it thinks it can predict your decisions with high accuracy based only on the input that you have received. Omega doesn't need to be 100% accurate anyway, only better than 99.9%. Determinism is also not required, for the same reason. (Though yeah, you could toss a quantum coin to make the odds 50-50, but this seems a rather uninteresting case. Omega could predict that you select either at 50% chance, and thus you'd lose on average about $750,000 every time Omega makes this offer to you.)

So basically, where we disagree is that Omega can choose first and you'd still have to one-box, since Omega knows the factors that correlate with your decision at high probability. Without breaking causality, without messing with free will and even without requiring determinism.

comment by wedrifid · 2009-10-06T20:06:58.346Z · LW(p) · GW(p)

"Omega chooses first, so two box" trivial problems.

Yes. Omega chooses first. That's Newcomb's. The other one isn't.

It seems that the fact that both my decision and Omega's decision are determined (quantum acknowledged) by the earlier state of the universe utterly bamboozles your decision theory. Since that is in fact how this universe works, your decision theory is broken. It is foolish to define a problem as 'ill-defined' simply because your decision theory can't handle it.

The current state of my brain influences both the decisions I will make in the future and the decisions other agents can make based on what they can infer of me from their observations. This means that intelligent agents will be able to predict my decisions better than a coin flip. In the case of superintelligences they can get a lot better than 0.5.

Just how much money does Omega need to put in the box before you are willing to discard 'Serious' and take the cash?

comment by Jonathan_Graehl · 2009-10-06T17:59:48.032Z · LW(p) · GW(p)

Thanks for the explanation. I think that if the right decision is always to 2-box (which it is if Omega is wrong 1/2-epsilon of the time), then all Omega has to do is flip a biased coin, and choose for the more likely alternative to believe that I 2-box. But I guess you disagree.

There's a true problem if you require Omega to make a deterministic decision; it's probably impossible to even postulate he's right with some specific probability. Maybe that's what you were getting at.

comment by Jack · 2009-10-05T22:11:58.538Z · LW(p) · GW(p)

For a bunch of people with what seems to be a Humean suspicion of metaphysics, "causation" sure comes up a lot. If you think that causation is just a psychological projection onto constantly conjoined events, then it isn't clear what the paradox here is.

Replies from: ata, taw
comment by ata · 2009-10-05T22:32:08.251Z · LW(p) · GW(p)

There are non-metaphysical treatments of causality. I'm not sure if any particular interpretations are favoured around here, but they build on Bayes and they work. (I have yet to read it, but I've heard good things about Judea Pearl's Causality.)

It's a "psychological projection" inasmuch as probability itself is, but as with probability, that doesn't mean it's never a useful concept, as long as it's understood in the correct light.

Replies from: Jack
comment by Jack · 2009-10-05T23:12:14.004Z · LW(p) · GW(p)

Sure. But,

  1. The way I see causal language being used doesn't suggest to me a demystified understanding of causality.

  2. Maybe I'm being dense but it seems to me a non-metaphysical account of causality won't a priori exclude backwards causation and causality loops. In other words, even if we allow some kind of deflated causality that won't mean Newcomb's problem "makes no sense".

Replies from: ata
comment by ata · 2009-10-05T23:44:30.463Z · LW(p) · GW(p)

Oh, I wasn't agreeing with taw on that. Just responding to your association of causation with metaphysics. I don't see Omega breaking any causality, whether in a metaphysical or statistical sense.

As for excluding backwards causation and causality loops -- I'm not sure why we should necessarily want to exclude them, if a given system allows them and they're useful for explaining or predicting anything, even if they go against our more intuitive notions of causality. I was just recently thinking that backwards causality might be a good way to think about Newcomb's problem. (That idea might go down in flames, but I think the point stands that backward/cyclical causality should be allowed if they're found to be useful.)

Replies from: Jack
comment by Jack · 2009-10-05T23:58:24.722Z · LW(p) · GW(p)

I think we agree down the line.

comment by taw · 2009-10-06T10:28:28.640Z · LW(p) · GW(p)

I meant causation in a purely physical sense. Disregarding the complexity of quantum-ness, Omega can't do that, as you get time loops.

Replies from: Jack
comment by Jack · 2009-10-06T15:50:43.190Z · LW(p) · GW(p)

I meant causation in a purely physical sense.

I don't know what that means. Our most basic physics makes no mention of causation or even objects. There are just quantum fields with future states that can be predicted if you have knowledge of earlier states and the right equations. And no matter what "causation in a purely physical sense" means I have no idea why it prohibits an event at time t1 (Omega's predictions) from necessarily coinciding with an event at t2 (your decision).

comment by PlaidX · 2009-10-05T09:29:09.318Z · LW(p) · GW(p)

You can do both this experiment and Newcomb's without Omega, or at least, you can start with a similar but messier setup and bridge it to the tidy Omega version using reasonable steps. But the process is very tedious.

Replies from: taw
comment by taw · 2009-10-05T10:49:01.555Z · LW(p) · GW(p)

Past discussions indicate quite conclusively that Newcomb is completely unmathematizable as a paradox. Every mathematization becomes trivial one way or the other, and resolves the causality loop caused by Omega.

If problems with Omega can be pathological like that, it's a good argument to avoid using Omega unless absolutely necessary (in which case you can rethink whether the problem is even well stated).

Replies from: wedrifid
comment by wedrifid · 2009-10-05T14:46:55.092Z · LW(p) · GW(p)

Every mathematization becomes trivial

I would be shocked if it didn't. It's a trivial problem.

Replies from: taw
comment by taw · 2009-10-05T16:00:15.571Z · LW(p) · GW(p)

Trivial how? Depending on the mathematization, it collapses to either one-boxing or two-boxing, depending on how we break the causality loop.

If you decide first, trivially one-box. If Omega decides first, trivially two-box. If you have causality loop, your problem doesn't make any sense.

comment by wedrifid · 2009-10-05T12:26:39.057Z · LW(p) · GW(p)

and if absolutely required it strongly suggests something is wrong with the story anyway

No it doesn't. It suggests that care is being taken to remove irrelevant details and prevent irritating technicalities.

Replies from: taw
comment by wedrifid · 2009-10-05T12:38:37.922Z · LW(p) · GW(p)

While you're unconscious, she builds two hotels, one with a million rooms, and one with just one room. Then she makes a million copies of both of you, sticks them all in rooms, and destroys the originals.

I feel... thin. Sort of stretched, like... butter scraped over too much bread.

comment by SilasBarta · 2009-10-05T16:19:57.563Z · LW(p) · GW(p)

Why do we spend so much time thinking about how to reason on problems in which

a) you know what's going on while you're not conscious, and

b) you take at face value information fed to you by a hostile entity?

Replies from: jimmy
comment by jimmy · 2009-10-05T17:19:52.611Z · LW(p) · GW(p)

Because it's much simpler that way, and you need to be able to handle trivial cases before you can deal with more complicated ones.

Besides, what is hostile about making a million copies of you? I'd take getting knocked out for that, as long as the copies don't all have brain damage for it.

Replies from: SilasBarta
comment by SilasBarta · 2009-10-05T17:50:24.704Z · LW(p) · GW(p)

Okay, fair point. It is indeed important to start from simple cases. I guess I didn't say what I really meant there.

My real concern is this: posters are trying to develop the limits of e.g. anthropic reasoning. Anthropic reasoning takes the form of, "I observe that I exist. Therefore, it follows that..."

But then to attack that problem, they posit scenarios of a completely different form: "I have been fed solid evidence from elsewhere that {x, y, and z} and then placed in {specific scenario}. Then I observe E. What should I infer?"

That does not generalize to anthropic reasoning: it's just reasoning from arbitrarily selected premises.

Replies from: jimmy, wedrifid
comment by jimmy · 2009-10-05T22:06:24.567Z · LW(p) · GW(p)

I figured that wasn't your real objection, but I guessed wrong about what it was.

I figured you were going for something like "you need to include sufficient information so that we know we're not positing an impossible world", which is a fair point, since, for example, at first glance Newcomb's problem appears to violate causality.

Are you suggesting that we deal with more general problems where we know even less, or are you just saying that these problems aren't even related to anthropic reasoning?

Replies from: SilasBarta
comment by SilasBarta · 2009-10-05T22:33:37.456Z · LW(p) · GW(p)

are you just saying that these problems aren't even related to anthropic reasoning?

This. This is what I'm saying.

These posts I'm referring to start out with "Assume you're in a situation where [...]. And you know that that's the situation. Then what can you infer from evidence E?"

But when you do that, there's nothing anthropic about that -- it's just a usual logical puzzle, unrelated to reasoning about what you can know from your existence in this universe.

Replies from: PlaidX
comment by PlaidX · 2009-10-05T22:41:38.768Z · LW(p) · GW(p)

Do you consider the original presumptuous philosopher problem to involve anthropic reasoning? What is it that's required to be undefined for reasoning to be anthropic?

Replies from: SilasBarta
comment by SilasBarta · 2009-10-06T01:13:32.543Z · LW(p) · GW(p)

Anthropic reasoning is any reasoning based on the fact that you (believe you) exist, and any condition necessary for you to reach that state, including suppositions about what such conditions include. It can be supplemented by observations of the world as it is.

In this problem, in most of the problems that purport to use anthropic reasoning, and in the original presumptuous philosopher problem, the reasoning is just from arbitrary givens, which doesn't even generalize to anthropic reasoning. Each time, someone is able to point out a problem isomorphic to the one given, but lacking a characteristically anthropic component to the reasoning.

Anthropic reasoning is simply not the same as "hey, what if someone did this to you, where these things had this frequency, what would you conclude upon seeing this?" That's just a normal inference problem.

Just to show that I'm being reasonable, here is what I would consider a real case of anthropic reasoning.

"I notice that I exist. The noticer seems to be the same as that which exists. So, whatever the computational process is for generating my observations must either permit self-reflection, or the thing I notice existing isn't really the same thing having these thoughts."

Replies from: PlaidX
comment by PlaidX · 2009-10-06T04:52:15.527Z · LW(p) · GW(p)

Each time, someone is able to point out a problem isomorphic to the one given, but lacking a characteristically anthropic component to the reasoning.

To me, that just indicates that anthropic reasoning is valid, or at least that what we're calling anthropic reasoning is valid.

Replies from: SilasBarta
comment by SilasBarta · 2009-10-06T15:24:02.975Z · LW(p) · GW(p)

Well, that just means that you're doing ordinary reasoning, of which anthropic reasoning is a subset. It does not follow that this (and topics like it) is anthropic reasoning. And no, you don't get to define words however you like: the term "anthropic reasoning" is supposed to carve out a natural category in conceptspace, yet when you use it to mean "any reasoning from arbitrary premises", you're making the term less helpful.

Replies from: PlaidX
comment by PlaidX · 2009-10-06T22:40:25.099Z · LW(p) · GW(p)

the term "anthropic reasoning" is supposed to carve out a natural category in conceptspace

If it doesn't carve out such a category, maybe that's because it's a malformed concept, not because we're using it wrong. Off the top of my head, I see no reason why the existence of the observer should be a special data point that needs to be fed into the data processing system in a special way.

Replies from: SilasBarta
comment by SilasBarta · 2009-10-06T22:46:47.757Z · LW(p) · GW(p)

Strangely enough, that's actually pretty close to what I believe -- see my comment here.

So, despite all this arguing, we seem to have almost the same view!

Still, given that it's a malformed concept, you need to remain as faithful as possible to what it purports to mean, or at least note that your example can be converted into a clearly non-anthropic one without loss of generality.

Replies from: PlaidX
comment by PlaidX · 2009-10-07T04:26:52.652Z · LW(p) · GW(p)

Fair enough!

comment by wedrifid · 2009-10-05T18:47:56.005Z · LW(p) · GW(p)

That does not generalize to anthropic reasoning: it's just reasoning from arbitrarily selected premises.

Which is interesting enough, so long as I only have to write trivial replies and not waste time writing up the trivial scenarios! (You make a good point.)

comment by Psychohistorian · 2009-10-06T07:53:45.323Z · LW(p) · GW(p)

This entire theoretical framework is based on the assumption that "she makes a million copies of both of you, sticks them all in rooms, and destroys the originals" is meaningfully possible, which it may not be, and that it would result in a "you" that is somehow continuous, which is not clear, and may not be experimentally verifiable.

And of course, if you ever encountered an Omega hypothetical in real life, you'd decide that "He's lying" has P~=1. Perhaps that's why Omega keeps getting used; all Omega hypotheticals have that property in common, I believe.