Sleeping anti-beauty and the presumptuous philosopher
post by Stuart_Armstrong · 2011-02-17T14:59:59.055Z
My approach for dividing utility between copies gives the usual and expected solutions to the sleeping beauty problem: if all copies are offered bets, take 1/3 odds; if only one copy is offered bets, take 1/2 odds.
This makes sense, because my approach is analogous to "some future version of Sleeping Beauty gets to keep all the profits".
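As a rough illustration of those two betting prices, here is a minimal sketch (my own; Python is used purely as a calculator, and the function names are not from the original argument). It sums expected winnings across copies for a contract that pays $1 if the coin lands tails (the two-copy world), bought at price c:

```python
# Minimal sketch of the Sleeping Beauty betting prices under the
# "sum utility over copies" rule (illustrative; names are mine).
# The contract pays $1 if the coin lands tails (two awakenings/copies).

def value_if_all_copies_bet(c):
    # Heads (prob 1/2): the single awakening buys the contract, losing c.
    # Tails (prob 1/2): both awakenings buy it, each winning 1 - c.
    return 0.5 * (-c) + 0.5 * 2 * (1 - c)

def value_if_one_copy_bets(c):
    # Only one awakening is ever offered the bet.
    return 0.5 * (-c) + 0.5 * (1 - c)

print(value_if_all_copies_bet(2 / 3))  # ~0: break-even at 2:1, i.e. 1/3 odds on heads
print(value_if_one_copy_bets(1 / 2))   # 0: break-even at even odds
```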
The presumptuous philosopher problem is subtly different from the sleeping beauty problem. It can best be phrased as a sleeping beauty problem where each copy doesn't care about any other copy. Solving this is a bit more subtle, but a useful half-way point is the "Sleeping Anti-Beauty" problem.
Here, as before, one or two copies are created depending on the result of a coin flip. However, if two copies are created, they are the reverse of mutually altruistic: each derives disutility from the other copy achieving its utility. So if both copies receive $1, neither copy's utility increases: each is happy to have the cash, but angry that the other copy also has cash.
Apart from this difference in indexical utility, the two copies are identical, and will reach the same decision. Now, as before, every copy is approached with bets on whether it is in the large universe (with two copies) or the small one (with a single copy). Using standard UDT/TDT Newcomb-problem-style reasoning, they will always take the small-universe side of any bet (as any gain/loss in the large universe is matched by the same gain/loss for the other copy, which they dislike).
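A quick sketch of that cancellation (my own illustration, not part of the original setup): each copy's utility is its own payoff minus the disliked copy's payoff, so a contract on the large universe has negative expected value at any positive price.

```python
# Sketch of the Sleeping Anti-Beauty bet (illustrative framing is mine).
# A contract pays $1 if the universe is large (two copies), costs c,
# and is offered to every copy.

def anti_copy_utility(own_payoff, other_payoff):
    # Each copy values its own money but loses exactly as much utility
    # when the disliked other copy gains the same amount.
    return own_payoff - other_payoff

def value_of_large_universe_contract(c):
    small = 0.5 * (-c)                             # lone copy just pays c
    large = 0.5 * anti_copy_utility(1 - c, 1 - c)  # the two gains cancel to 0
    return small + large

# Negative for any positive price, so the copy always takes the
# small-universe side of the bet instead.
print(value_of_large_universe_contract(0.1))  # -0.05
```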
Now, you could model the presumptuous philosopher by saying they have a 50% chance of being in a Sleeping-Beauty (SB) situation and a 50% chance of being in a Sleeping Anti-Beauty (SAB) situation (indifference modelled as halfway between altruism and hate).
There are 4 equally likely possibilities here: small universe in SB, large universe in SB, small universe in SAB, large universe in SAB. A contract that gives $1 in a small universe is worth 0.25 + 0 + 0.25 + 0 = $0.5. Meanwhile, a contract that gives $1 in a large universe is worth 0 + 0.25*2 + 0 + 0 = $0.5 (as long as it's offered to everyone). So it seems that a presumptuous philosopher should take even odds on the size of the universe if he doesn't care about the other presumptuous philosophers.
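The four-case sum can be spelled out mechanically; here is a minimal sketch (mine, again using Python only as a calculator) that reproduces the $0.5 value for both contracts:

```python
# Sketch of the four-case calculation above (illustrative; names are mine).
# The philosopher is modelled as 50% Sleeping Beauty (adds the other
# copy's payoff) and 50% Sleeping Anti-Beauty (subtracts it).

def contract_value(pays_in_large_universe):
    total = 0.0
    for altruism in (+1, -1):       # SB case, SAB case (prob 1/2 each)
        for n_copies in (1, 2):     # small universe, large universe (prob 1/2 each)
            weight = 0.5 * 0.5
            pays = (n_copies == 2) == pays_in_large_universe
            own = 1.0 if pays else 0.0
            other = own if n_copies == 2 else 0.0  # contract offered to everyone
            total += weight * (own + altruism * other)
    return total

print(contract_value(pays_in_large_universe=False))  # 0.5
print(contract_value(pays_in_large_universe=True))   # 0.5
```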
It's no coincidence that this result can be reached by UDT-like arguments such as "take the objective probabilities of the universes, and consider the total impact of your decision being X, including all other decisions that must be the same as yours". I'm hoping to find more fundamental reasons to justify this approach soon.
Comments sorted by top scores.
comment by Manfred · 2011-02-17T15:33:10.655Z
I don't want to go to a university computer and read through a whole journal article to find out what the presumptuous philosopher problem really is. Could you please explain it in full?
comment by Stuart_Armstrong · 2011-02-17T16:28:15.319Z
If you have a 50% chance of existing in a universe with trillions of trillions of observers, and a 50% chance of existing in a universe with merely a trillion observers, would you take a bet at trillion-to-one odds that you're in the first one?
The SIA odds seem to imply that you should.
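For concreteness, a rough sketch of the SIA weighting being referred to, with illustrative numbers of my own choosing:

```python
# Rough sketch of the SIA weighting (illustrative numbers only).
# SIA reweights each hypothesis's prior by its number of observers.

prior = {"big": 0.5, "small": 0.5}
observers = {"big": 1e24, "small": 1e12}  # trillions of trillions vs a trillion

weights = {world: prior[world] * observers[world] for world in prior}
total = sum(weights.values())
posterior = {world: weights[world] / total for world in weights}

# Posterior odds for "big" are about a trillion to one, which is why SIA
# seems to endorse taking the trillion-to-one bet.
print(posterior["big"] / posterior["small"])  # ~1e12
```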
comment by Manfred · 2011-02-17T17:35:05.299Z
Oh, okay. Is there any formal reason to think that modeling indifference as the average of altruism and hate is better than just going with the probabilities (modeling indifference as selfishness)?
comment by Stuart_Armstrong · 2011-02-17T17:38:39.741Z
There is a huge debate as to what the probabilities should be. I have strong arguments as to what the correct path is for altruism, so I'm trying to extend it into a contentious area.
comment by Manfred · 2011-02-17T19:03:03.013Z
Seems too arbitrary to succeed. "Anti-SB" is constructed solely to cancel out SB when they're averaged, so it's no surprise that it works that way.
Besides, the structure of the presumptuous philosopher doesn't seem like an average between those two structures. PP claims to know the probability that he's in world (2) (where world (1) has 1 person and world (2) has 2). How would you turn this into a decision problem? You say you'll give him a dollar if he guesses right. Following your rules, he adds indexical utilities. So guessing (2) will give him $2, and guessing (1) will give him $1. This structure is identical to the sleeping-beauty problem.
comment by Stuart_Armstrong · 2011-02-17T21:49:25.106Z
"Following your rules, he adds indexical utilities"
My rules only apply to copies from the same person. It's precisely because he doesn't care about the other 'copies' of himself that the presumptuous philosopher is different from sleeping beauty.
comment by Manfred · 2011-02-18T00:01:10.775Z
EDIT: I should note I'm talking about inserting the "presumptuous philosopher" program into the sleeping beauty situation. Your first sentence seems to imply the original problem, and then your second sentence is back to the sleeping beauty problem.
Still edit: It seems that you are solving the wrong problem. You're solving the decision problem where you give someone in the universe a dollar and show that an indifferent philosopher is indifferent to living in a universe where 10 other people get a dollar vs. a universe where 1 other person gets a dollar. But he was designed to be indifferent to whether other people get a dollar, so it's all good. However, that doesn't appear to have any bearing on probabilities. You can only get probabilities from decisions if you also have a utility function that uses probabilities, and then you can run it in reverse to get probabilities from utility functions. However, indifferent philosopher is indifferent! He doesn't care what other people do. His utility function cannot be solved for probabilities, because all the terms that would depend on them are "0."
comment by casebash · 2016-04-16T11:41:32.164Z
If you are trying to calculate the probability of X being true by using bets, then the way to do that is by seeing when you are indifferent between receiving $A if X is true and $B if X is false, and then applying maths. You can't calculate probability by using a weird utility function. If you use a weird utility function, then you end up calculating something completely different.
comment by JGWeissman · 2011-02-17T18:38:26.472Z
Couldn't you model indifference to your copy's well-being by just not having your utility function reference your copy?
comment by Stuart_Armstrong · 2011-02-17T22:41:44.003Z
There is great debate as to what probability you should use in this case. I'm extending my previous work, which - arguably - solves the sleeping beauty problem in the cases where you do care about your copy. Since SAB seems pretty clear as well, I put two more certain cases together to try and solve an uncertain one.
comment by JGWeissman · 2011-02-17T22:52:59.892Z
"There is great debate as to what probability you should use in this case."
Using UDT, it seems straightforward that a contract with a utility of 1 given a big universe has an expected utility of 1/2. It's not clear to me if that means the probability should be 1/2, but the whole point is to be able to compute expected utilities.
comment by AlephNeil · 2011-02-17T16:51:57.197Z
My current approach to the Presumptuous Philosopher problem is to undermine it using the concept of the Anthropic Volume Knob.
The idea is that for any cosmological theory T, as long as it predicts intelligent life, one can trivially devise a theory T' that is equivalent to T in terms of empirical predictions but has its anthropic "volume" raised or lowered to an arbitrary degree. For instance, you can "double" the anthropic volume by putting T' = T + "oh and there are two of these universes existing in parallel, not interacting in any way". You could "halve" the anthropic volume by putting T' = T + "oh and before the big bang there was a coin toss to determine whether or not there should be a big bang."
But I want to say that if T' has been obtained from T merely by fiddling with its anthropic volume knob, then we ought to regard T' as "the same theory". In other words, I don't think that the difference between T' and T is meaningful. Theory A, predicting a great abundance of intelligent life, loses its 'anthropic advantage' over theory B, predicting a sparsely populated universe, because B is 'essentially the same' as the empirically equivalent theory B' whose anthropic volume has been turned way up to match A.