Logical and Indexical Uncertainty
post by Scott Garrabrant · 2014-01-29T21:49:53.387Z · LW · GW · Legacy · 18 comments
Cross-posted on By Way of Contradiction
Imagine I shot a photon at a half-silvered mirror, which reflects the photon with "probability" 1/2 and lets the photon pass through with "probability" 1/2.
Now, imagine I calculated the trillionth decimal digit of pi and checked whether it was even or odd. As a Bayesian, you use the term "probability" in this situation too, and to you, the "probability" that the digit is odd is 1/2.
What is the difference between these two situations? Assuming the many worlds interpretation of quantum mechanics, the first probability comes from indexical uncertainty, while the second comes from logical uncertainty. In indexical uncertainty, both possibilities are true in different parts of whatever your multiverse model is, but you are unsure which part of that multiverse you are in. In logical uncertainty, only one of the possibilities is true, but you do not have information about which one. It may seem at first like this should not change our decision theory, but I believe there are good reasons why we should care about what type of uncertainty we are talking about.
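To make the two coins concrete, here is a rough sketch in Python (my own illustration, not part of the original setup; the 1000th digit stands in for the trillionth, which is far beyond what a snippet like this could compute, and the mpmath library is an assumed dependency):

```python
import random
from mpmath import mp

def indexical_coin():
    # Stand-in for the half-silvered mirror: physical randomness, which under
    # many-worlds means both outcomes occur in different branches.
    return "tails" if random.random() < 0.5 else "heads"

def logical_coin(digit_index=1000):
    # Parity of a fixed decimal digit of pi: the answer is already determined;
    # we are merely uncertain about it until we compute it.
    mp.dps = digit_index + 10         # working precision, in decimal digits
    digits = str(mp.pi)               # "3.14159..."
    d = int(digits[digit_index + 1])  # the k-th decimal digit sits at string index k+1
    return "tails" if d % 2 == 1 else "heads"

print(indexical_coin(), logical_coin())
```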
I present here 7 reasons why we might care about the two different types of uncertainty. I do not agree with all of these ideas, but I present them anyway, because it seems reasonable that some people might argue for them. Is there anything I have missed?
1) Anthropics
Suppose Sleeping Beauty volunteers to undergo the following experiment, which is described to her before it begins. On Sunday she is given a drug that sends her to sleep, and a coin is tossed. If the coin lands heads, Beauty is awakened and interviewed on Monday, and then the experiment ends. If the coin comes up tails, she is awakened and interviewed on Monday, given a second dose of the sleeping drug that makes her forget the events of Monday only, and awakened and interviewed again on Tuesday. The experiment then ends on Tuesday, without flipping the coin again. Beauty wakes up in the experiment and is asked, "With what subjective probability do you believe that the coin landed heads?"
People argue about whether the "correct answer" to this question should be 1/3 or 1/2. Some say that the question is malformed and needs to be rewritten as a decision theory question. Another view is that the answer actually depends on the type of coin flip:
If the coin flip is an indexical coin flip, then there are effectively 3 copies of Sleeping Beauty, and in only 1 of those copies did the coin come up heads, so you should say 1/3. On the other hand, if it is a logical coin flip, then you cannot compare the two copies of you waking up in one possible world with the one copy of you waking up in the other possible world. Only one of the worlds is logically consistent. The trillionth digit of pi is not changed by you waking up, and you will wake up regardless of the state of the trillionth digit of pi.
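To spell out the indexical counting, here is a toy simulation (my own illustration; it only makes sense for an indexical coin, where all three awakenings really occur somewhere):

```python
import random

def fraction_of_awakenings_with_heads(trials=100_000):
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        awakenings = 1 if heads else 2   # heads: Monday only; tails: Monday and Tuesday
        total_awakenings += awakenings
        if heads:
            heads_awakenings += awakenings
    return heads_awakenings / total_awakenings

print(fraction_of_awakenings_with_heads())   # ~0.333: the thirder count
```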
2) Risk Aversion
Imagine that I were to build a doomsday device. The device flips a coin, and if the coin comes up heads, it destroys the Earth and everything on it. If the coin comes up tails, it does nothing. Would you prefer that the coin flip be a logical coin flip or an indexical coin flip?
You probably prefer the indexical coin flip. It feels safer to have the world continue on in half of the universes than to risk destroying the world in all universes. I do not think this feeling arises from biased thinking, but instead from a true difference in preferences. To me, destroying the world in all of the universes is actually much more than twice as bad as destroying the world in half of the universes.
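One way to make this preference precise (my own gloss, with made-up disutility functions): if how bad it is to destroy a fraction f of the universes is linear in f, the two coins are equally bad in expectation; the stated preference corresponds to a convex disutility, where losing everything is more than twice as bad as losing half.

```python
def expected_badness(disutility):
    # disutility(f) = how bad it is for a fraction f of the universes to be destroyed
    indexical = disutility(0.5)                              # half the branches die, for certain
    logical = 0.5 * disutility(1.0) + 0.5 * disutility(0.0)  # all branches die, or none
    return indexical, logical

print(expected_badness(lambda f: f))       # linear: (0.5, 0.5)  -> indifferent
print(expected_badness(lambda f: f ** 2))  # convex: (0.25, 0.5) -> prefer the indexical coin
```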
3) Preferences vs Beliefs
In updateless decision theory, you want to choose the output of your decision procedure. If there are multiple copies of yourself in the universe, you do not ask which copy you are; instead, you just choose the output that maximizes your utility of the universe in which all of your copies output that value. The "expected" utility comes from your logical uncertainty about what the universe is like. There is not much room in this theory for indexical uncertainty. Instead, the indexical uncertainty is encoded into your utility function. The fact that you prefer to be given a reward with indexical probability 99% rather than with indexical probability 1% should instead be viewed as you preferring the universe in which 99% of the copies of you receive the reward to the universe in which 1% of the copies of you receive the reward.
In this view, it seems that indexical uncertainty should be viewed as preferences, while logical uncertainty should be viewed as beliefs. It is important to note that this all adds up to normality. If we are trying to maximize our expected utility, the only thing we do with preferences and beliefs is multiply them together, so for the most part it doesn't change much to think of something as a preference as opposed to a belief.
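A toy calculation of the "adds up to normality" point (my own numbers): whether the 99% is treated as a belief about which copy you are or folded into the utility of the universe, the quantity being maximized is the same product.

```python
reward_value = 10.0

# Treat the 99% as a belief: probability 0.99 that you get the reward.
eu_as_belief = 0.99 * reward_value + 0.01 * 0.0

# Treat it as a preference: one certain universe, valued by how many copies
# of you receive the reward inside it.
utility_of_universe = 0.99 * reward_value
eu_as_preference = 1.0 * utility_of_universe

print(eu_as_belief == eu_as_preference)   # True: the product is the same either way
```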
4) Altruism
In Subjective Altruism, I asked whether, when being altruistic towards someone else, you should try to maximize their expected utility relative to your probability function or relative to their probability function. If your answer was to choose the option which maximizes your expectation of their utility, then it is actually very important whether indexical uncertainty is a belief or a preference.
5) Sufficient Reflection
In theory, given enough time, you can settle logical uncertainties just by thinking about them. However, given enough time, you can settle indexical uncertainties by making observations. It seems to me that there is not a meaningful difference between observations that take place entirely within your mind and observations about the outside world. I therefore do not think this difference means very much.
6) Consistency
Logical uncertainty seems like it is harder to model, since it means you are assigning probabilities to possibly inconsistent theories, and all inconsistent theories are logically equivalent. You might want some measure of equivalence of your various theories, and it would have to be different from logical equivalence. Indexical uncertainty does not appear to have the same issues, at least not in an obvious way. However, I think this issue only comes from looking at the problem in the wrong way. I believe that probabilities should only be assigned to logical statements, not to entire theories. Then, since everything is finite, you can treat sentences as equivalent only after you have proven them equivalent.
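A rough sketch of the suggestion in the last paragraph (all names and the averaging rule below are my own placeholders; this is not a worked-out theory of logical uncertainty): keep a probability for each sentence, and only merge two sentences into one equivalence class once a proof of their equivalence has actually been found.

```python
class SentenceBeliefs:
    """Probabilities over sentences, merging classes only on proven equivalence."""
    def __init__(self):
        self.parent = {}   # union-find structure over sentences
        self.prob = {}     # probability per class representative

    def add(self, sentence, p):
        self.parent.setdefault(sentence, sentence)
        self.prob[self.find(sentence)] = p

    def find(self, s):
        while self.parent[s] != s:
            s = self.parent[s]
        return s

    def proved_equivalent(self, a, b):
        # Called only once we actually have a proof that a <-> b.
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra
            # The two previous estimates must now agree; here we just average
            # them (a real proposal would need a principled reconciliation rule).
            self.prob[ra] = (self.prob[ra] + self.prob.pop(rb)) / 2

    def p(self, sentence):
        return self.prob[self.find(sentence)]

beliefs = SentenceBeliefs()
beliefs.add("the trillionth digit of pi is odd", 0.5)
beliefs.add("pi's trillionth digit is in {1,3,5,7,9}", 0.6)
beliefs.proved_equivalent("the trillionth digit of pi is odd",
                          "pi's trillionth digit is in {1,3,5,7,9}")
print(beliefs.p("the trillionth digit of pi is odd"))   # 0.55
```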
7) Counterfactual Mugging
Omega appears and says that it has just tossed a fair coin, and given that the coin came up tails, it decided to ask you to give it $100. Whatever you do in this situation, nothing else will happen differently in reality as a result. Naturally you don't want to give up your $100. But Omega also tells you that if the coin came up heads instead of tails, it'd give you $10000, but only if you'd agree to give it $100 if the coin came up tails.
It seems reasonable to me that people might feel very differently about this question based on whether the coin is logical or indexical. To me, it makes sense to give up the $100 either way, but it seems possible to change the question in such a way that the type of coin flip might matter.
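For reference, the usual precommitment arithmetic (my own summary, using the post's payoffs): an agent who would pay when asked comes out ahead in expectation, as long as it assigns the flip probability 1/2 before learning the result, and the arithmetic itself is the same for either kind of coin.

```python
p_heads = 0.5                # credence in the flip before seeing anything
cost_if_tails = -100         # you hand over $100 when asked
reward_if_heads = 10_000     # Omega pays out, but only to agents who would pay

ev_payer = p_heads * reward_if_heads + (1 - p_heads) * cost_if_tails   # 4950.0
ev_refuser = 0.0
print(ev_payer, ev_refuser)
```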
18 comments
Comments sorted by top scores.
comment by wedrifid · 2014-02-01T05:47:23.863Z · LW(p) · GW(p)
Now, imagine I calculated the trillionth decimal digit of pi and checked whether it was even or odd. As a Bayesian, you use the term "probability" in this situation too, and to you, the "probability" that the digit is odd is 1/2.
To me the probability that the trillionth decimal digit of Pi is odd is about 0.05. The trillionth digit of Pi is 2 (but there is about a one in twenty chance that I'm confused). For some reason people keep using that number as an example of a logical uncertainty, so I looked it up.
When a logical coin is:
a) Already evaluated.
b) Comparatively trivial to re-calculate. (Humans have calculated the two quadrillionth digit of Pi. The trillionth digit is trivial.)
c) Used sufficiently frequently that people know not just where to look up the answer but remember it from experience.
...Then it is probably time for us to choose a better coin. (Unfortunately I haven't yet found a function that exhibits all the desiderata I have for an optimal logically uncertain coin.)
↑ comment by A1987dM (army1987) · 2014-02-01T11:48:16.179Z · LW(p) · GW(p)
(Unfortunately I haven't yet found a function that exhibits all the desiderata I have for an optimal logically uncertain coin.)
Is floor(exp(3^^^3)) even or odd?
↑ comment by andrew sauer (andrew-sauer) · 2023-11-14T01:58:55.847Z · LW(p) · GW(p)
the vigintillionth digit of pi
comment by [deleted] · 2014-01-31T14:27:26.609Z · LW(p) · GW(p)
For scenario 7, I think I may have generated a type of situation where the type of coin flip might matter, but I feel like I also may have made an error somewhere. I'll post what I have so far for verification.
To explain, imagine that Omega knows in advance that the logical coin is going to come up tails every time he flips it, because odd counts as tails and he is asking about the first digit of pi, which is odd.
Now, in this case, you would also know the first digit of pi is odd, so that wouldn't be an information asymmetry. You just wouldn't play if you knew Omega had made a logical decision to use a logical coin that came up tails, because you would never, even hypothetically, have gotten the money. It would be as if Omega said: "1=1, so I decided to ask you to give me $100. Whatever you do in this situation, nothing else will happen differently in reality as a result. Naturally you don't want to give up your $100. But I'm also telling you that if 0=1, I'd give you $10000, but only if you'd agree to give me $100 if 1=1." It seems reasonable to not give Omega money in that case.
However, since Omega has more computing power, there are always going to be logical coins that look random to you that Omega can use: maybe the trillionth digit of pi is unknown to you, but Omega calculated it beforehand, before thinking about making you any offers, and it happens to have been odd/tails.
Omega can even do something which has both indexical random components and logical components, and which still ends up being logically calculable. Suppose Omega rolls an indexical six-sided die, adds 761 (a 'random' seed) to the result, and then checks the even/odd status of that digit of pi, which lands anywhere from the 762nd through the 767th digit depending on the roll. If the digit is odd, the coin is tails. That range is the Feynman point (http://en.wikipedia.org/wiki/Feynman_point): all six digits are 9, which is odd, so the coin is always tails.
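A quick check of those digits (a sketch that assumes Python's mpmath library is available):

```python
from mpmath import mp

mp.dps = 800                    # precision well past the 767th decimal place
pi_digits = str(mp.pi)[2:]      # drop the leading "3." to keep only decimal digits
print(pi_digits[761:767])       # decimal digits 762 through 767 -> "999999"
```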
If this is a simple indexical coin flip, Omega can't have that sort of advance knowledge.
However, what I find confusing is that asking about the recorded past result of an indexical coin flip appears to be a type of logical uncertainty. So since this has occurred in the past, what seems to be the exact same reasoning would tell me not to give Omega the money right now, because I can't trust that Omega won't exploit the information asymmetry.
This is where I was concerned I had missed something, but I'm not sure what if anything I am missing.
↑ comment by Scott Garrabrant · 2014-01-31T20:38:51.916Z · LW(p) · GW(p)
I think that all you are observing here is that your probability that other agents know the result of the coin flip changes between the two situations. However, others can know the result for either type of flip, so this is not really a qualitative difference. It is a way in which other information about the coin flip matters, other than just whether or not it is logical.
You achieve this by making the coin flip correlated with other facts, which is what you did. (I think this is made more confusing and veiled by the fact that these facts are within the mind of Omega.)
Omega does not have to have advance knowledge of an indexical coin flip. He just needs knowledge, which he can have.
comment by cousin_it · 2014-01-30T10:39:14.604Z · LW(p) · GW(p)
Your point 2 seems to be about anthropics, not risk aversion. If you replace "destroying the world" with "kicking a cute puppy", I become indifferent between indexical and logical coins. If it's "destroying the world painfully for all involved", I also get closer to being indifferent. Likewise if it's "destroying the world instantly and painlessly", but there's a 1% indexical chance that the world will go on anyway. The difference only seems to matter when you imagine all your copies disappearing.
And even in that case, I'm not completely sure that I prefer the indexical coin. The "correct" multiverse theory might be one that includes logically inconsistent universes anyway ("Tegmark level 5"), so indexical and logical uncertainty become more similar to each other. That's kinda the approach I took when trying to solve Counterfactual Mugging with a logical coin.
comment by Shmi (shminux) · 2014-01-31T01:15:48.188Z · LW(p) · GW(p)
It occurs to me that your argument 1 is set up strangely: it assumes perfect correlation between flips, which is not how a coin is assumed to behave. If instead you pre-pick different large uncorrelated digits for each flip, then the difference between the uncertainties disappears. It seems that similar correlations are to blame for caring about the type of uncertainty in the other cases as well.
↑ comment by Scott Garrabrant · 2014-01-31T06:05:58.034Z · LW(p) · GW(p)
I have no idea what you are saying here.
comment by Shmi (shminux) · 2014-01-30T07:33:41.807Z · LW(p) · GW(p)
I just wanted to note that this is not a legal argument: if you need 6 reasons and not 1, then none of the 6 are any good.
↑ comment by Scott Garrabrant · 2014-01-30T08:16:45.043Z · LW(p) · GW(p)
I was not trying to argue anything. I was trying to present a list of differences. I personally think 2 and 3 each on their own are enough to justify the claim that the distinction is important.
comment by Vulture · 2014-01-29T22:56:04.452Z · LW(p) · GW(p)
This should be in Main.
↑ comment by lmm · 2014-01-30T00:47:11.418Z · LW(p) · GW(p)
Disagree. It's asking interesting questions, but not giving many answers; it's perfect for Discussion.
↑ comment by Scott Garrabrant · 2014-01-30T00:53:02.826Z · LW(p) · GW(p)
Yeah, that's what I was thinking.
↑ comment by Scott Garrabrant · 2014-01-30T00:06:23.228Z · LW(p) · GW(p)
I'm not so sure, but if others agree, I'll upgrade it. Do I just do that by editing it and posting to main instead?
↑ comment by Shmi (shminux) · 2014-01-30T07:30:13.373Z · LW(p) · GW(p)
I recommend waiting for a couple of days, and if you get 20 karma or so, then move to Main.
↑ comment by solipsist · 2014-01-31T20:36:42.507Z · LW(p) · GW(p)
I voted up, but do not think it should go in main.
↑ comment by Scott Garrabrant · 2014-01-31T20:44:05.707Z · LW(p) · GW(p)
My measure of "others agree" was Vulture's comment's karma, not the karma of the post. I think that that measure settles the fact that I put the post in the correct place.