Chocolate Ice Cream After All?
post by pallas · 2013-12-09T21:09:10.841Z
I have collected some thoughts on decision theory and am wondering whether they are any good, or whether I'm just thinking nonsense. I would really appreciate some critical feedback. Please be charitable in terms of language and writing style, as I am not a native English speaker and this is the first time I am writing such an essay.
Overview
- The classical notion of free will messes up our minds, especially in decision-theoretic problems. Once we come to see it as confused and reject it, we realize that our choices in some sense not only determine the future but also the past.
- If determining the past conflicts with our intuitions of how time behaves, then we need to adapt our intuitions.
- The A,B-Game shows us that, as far as the rejection of free will allows for it, it is in principle possible to choose our genes.
- Screening off only applies if we consider our action to be independent of the variable of interest – at least in expectation.
- When dealing with Newcomblike problems, we have to be clear about which forecasting powers are at work. Likewise, it turns out to be crucial to precisely point out which agent knows how much about the setting of the game.
- In the standard version of Newcomb’s Soda, one should choose chocolate ice cream – unless the game is specified such that previous subjects (unlike us) did not know of any interdependence between soda and ice cream.
- Variations of Newcomb’s Soda suggest that the evidential approach makes us better off.
- The analysis of Newcomb’s Soda shows that its formulation fundamentally differs from the formulation of Solomon’s Problem.
- Given that all study subjects make persistent precommitments, a proper use of evidential reasoning suggests precommitting to chocolate ice cream. This is why Newcomb’s Soda does not show that the evidential approach is dynamically inconsistent.
- The tickle defense does not apply to the standard medical version of Solomon’s Problem. In versions where it applies, it does not tell us anything non-trivial.
- Evidential reasoning seems to be a winning approach not only in Newcomb’s Problem, but also in Newcomb’s Soda and in the medical version of Solomon’s Problem. Therefore, we should consider a proper use of evidential reasoning as a potentially promising component when building the ultimate decision algorithm.
In the standard formulation of Newcomb’s Soda, the evidential approach suggests picking chocolate ice cream, since this makes it more probable that we will have been awarded the million dollars. Hence, it denies us the thousand dollars we could actually win if we took vanilla ice cream instead. Admittedly, this may be counterintuitive. Common sense tells us that, considering the thousand dollars, one could change the outcome, whereas one cannot change which type of soda one has drunk; therefore we have to make a decision that actually affects our outcome. Maybe the flaw in this kind of reasoning doesn’t pose a problem to our intuitions as long as we deal with a “causal-intuition-friendly” setting of numbers. So let’s consider various versions of this problem in order to thoroughly compare the two competing algorithmic traits. Let’s find out which one actually wins and therefore should be implemented by rational agents.
In this post, I will discuss Newcomblike problems and conclude that the arguments presented support an evidential approach. Various decision problems have shown that plain evidential decision theory is not a winning strategy. I instead propose to include evidential reasoning in more elaborate decision theories, such as timeless decision theory or updateless decision theory, since they also need to come up with an answer in Newcomblike problems.
Looking at the strategies proposed for those problems, the outputs of currently discussed decision theories can be grouped into evidential-like and causal-like. I am going to outline which of these two traits a winning decision theory must possess.
Let’s consider the following excerpt by Yudkowsky (2010) about the medical version of Solomon’s Problem:
“In the chewing-gum throat-abscess variant of Solomon’s Problem, the dominant action is chewing gum, which leaves you better off whether or not you have the CGTA gene; but choosing to chew gum is evidence for possessing the CGTA gene, although it cannot affect the presence or absence of CGTA in any way.”
In what follows, I am going to elaborate on why I believe this point (in the otherwise brilliant paper) needs to be reconsidered. Furthermore, I will explore possible objections and have a look at other decision problems that might be of interest to the discussion.
But before we discuss classical Newcomblike problems, let’s first have a look at the following thought experiment:
The school mark is already settled
Imagine you are going to school; it is the first day of the semester. Suppose you only care about getting the best marks. Now your math teacher tells you that he knows you very well and that, for this reason, he has already written down the mark you will receive for the upcoming exam. To keep things simple, let’s cut down your options to “study as usual” and “don't study at all”. What are you going to do? Should you study as if you didn’t know about the settled mark? Or should you not study at all, since the mark has already been written down?
This is a tricky question because the answer to it depends on your credence in the teacher’s forecasting power. Therefore let's consider the following two cases:
- Let's assume that the teacher is correct in 100% of the cases. Now we find ourselves in a problem that resembles Newcomb's Problem, since our decision exactly determines the output of his prediction. Just as an agent who really wishes to win the most money should take only one box in Newcomb’s Problem, you should study for the exam as if you didn't know that the mark is already settled. (EDIT: For the record, one can point out a structural (but not relevant) difference between the two problems: Here, the logical equivalences "studying" <--> "good mark" and "not studying" <--> "bad mark" are part of the game's assumptions, while the teacher predicts in which of these two worlds we live. In Newcomb's Problem, Omega predicts the logical equivalences of taking boxes and payoffs.)
- Now let's consider a situation where the teacher has no forecasting power at all. In such a scenario the student's future effort is independent of the settled mark, that is, no matter what the student does, the teacher's output will have been random. Therefore, if we find ourselves in such a situation, we shouldn't study for the exam and should instead enjoy the spare time gained.
(Of course we can also think of a case 3) where the teacher's prediction is wrong in 100% of all cases. Let’s specify “wrong”, since marks usually don’t work in binaries: let’s take “wrong” to mean the complementary mark. For instance, the best mark corresponds to the worst, the second best to the second worst, and so on. In such a case, not studying at all and returning an empty exam sheet would determine receiving the best mark. However, this scenario won't be of big interest to us.)
This thought experiment suggests that a deterministic world does not necessarily imply fatalism, since in expectation the fatalist (who wouldn't feel obligated to study because the mark is "already written down") would lose whenever the teacher predicts better than random. Generally, we can say that, apart from case 2), in all other cases the student's studying behaviour is relevant to receiving a good mark.
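As a minimal sketch of this point, assume the settled mark simply equals the teacher's prediction of whether we will study, and let q be the teacher's accuracy (a parameter introduced here purely for illustration):

```python
def p_good_mark(study: bool, teacher_accuracy: float) -> float:
    """Probability of the good mark, assuming the settled mark simply
    equals the teacher's prediction of whether we will study."""
    # Studying pays off exactly when the teacher predicted us correctly;
    # not studying pays off only when the teacher mispredicted us.
    return teacher_accuracy if study else 1.0 - teacher_accuracy

for q in (1.0, 0.9, 0.5):  # case 1), an intermediate teacher, case 2)
    print(q, p_good_mark(True, q), p_good_mark(False, q))
# At q = 0.5 the mark is independent of studying; for any q > 0.5 the
# fatalist who stops studying loses in expectation.
```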
This thought experiment not only makes it clear that determinism does not imply fatalism, but even shows that fatalists tend to lose once they stop investing resources in desirable outcomes. This will be important in subsequent sections. Now let us get to the actual topic of this article, which has already been mentioned as an aside: Newcomblike problems.
Newcomb’s Problem
The standard version of Newcomb’s Problem has been thoroughly discussed on LessWrong. Many would agree that one-boxing is the correct solution, for one-boxing agents obtain a million dollars, while two-boxers only take home a thousand dollars. To clarify the structure of the problem: an agent chooses between two options, “AB“ and “B“. Relatively speaking, option B “costs” a thousand dollars, because one abandons the transparent box A containing this amount of money. As we play against the predictor Omega, who has almost 100% forecasting power, our decision determines which past occurred, that is, we determine whether Omega put a million into box B or not. By “determining” I mean something like “being compatible with”. Hence, choosing box B is compatible only with a past where Omega put a million into it.
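As a quick illustration, assuming utility is linear in dollars and that Omega is correct with probability q (a parameter introduced here for illustration), the expected payoffs come out as follows:

```python
def ev_one_box(q: float) -> float:
    # Box B contains the million exactly when Omega predicted one-boxing,
    # which happens with probability q for an agent who actually one-boxes.
    return q * 1_000_000

def ev_two_box(q: float) -> float:
    # Two-boxers always get the $1,000 from box A; box B is filled only
    # if Omega mispredicted them, i.e. with probability 1 - q.
    return 1_000 + (1 - q) * 1_000_000

print(ev_one_box(0.99), ev_two_box(0.99))  # 990000.0 vs 11000.0
```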
Newcomb’s Problem’s Problem of Free Will
To many, Newcomb’s Problem seems counterintuitive. People tend to think: “We cannot change the past, as past events have already happened! So there’s nothing we can do about it. Still, somehow the agents that only choose B become rich. How is this possible?“
This uneasy feeling can be resolved by clarifying the notion of “free will”, i.e. by acknowledging that a world state X either logically implies (hard determinism) or probabilistically suggests (hard incompatibilism, stating that free will is impossible and complete determinism is false) another world state Y or a set of possible world states (Y1,Y2,Y3,..,Yn) – no matter whether X precedes Y or vice versa. (Paul Almond has shown in his paper on decision theory – unfortunately his page has been down lately – that upholding this distinction does not affect the clarification of free will in decision-theoretic problems. Therefore, I chose to go with hard determinism.)
The fog will lift once we accept the above. Since our action is a subset of a particular world state, the action itself is also implied by preceding world states, that is once we know all the facts about a preceding world state we can derive facts about subsequent world states.
If we look more closely, we cannot really choose in the way people used to think. Common sense tells us that we confront a “real choice” if our decision is not just determined by external factors and also not picked at random, but governed by our free will. But what could this third case even mean? Despite its intuitive usefulness, the classical notion of choice seems to be ill-defined, since it requires a problematic notion of free will, that is to say, one that would have to be neither random nor determined.
This is why I want to suggest a new definition of choice: Choosing is the way agents execute what they were determined to do by other world states. Choosing has nothing to do with “changing” what did or is going to happen. The only thing that actually changes is the perception of what did or is going to happen, since executions produce new data points that call for updates.
So unless we could use a “true” random generator (which would only be possible if we did not assume complete determinism to be true) in order to make decisions, what we are going to do is “planned” and determined by preceding and subsequent world states.
If I take box B, then this determines a past world state where Omega has put a million dollars into this box. If I take both boxes A and B, then this determines a past world state where Omega has left box B empty. Therefore, when it comes to deciding, taking actions that determine (or are compatible with) not only desirable future worlds but also desirable past worlds is what makes us win.
One may object now that we aren’t “really“ determining the past, but only our perception of it. That’s an interesting point. In the next section we are going to have a closer look at that. For now, I’d like to call the underlying perception of time into question. Because once I choose only box B, it seems that the million dollars I receive is not just an illusion of my map but is really out there. Admittedly the past seems unswayable, but this example shows that maybe our conventional perception of time is misleading, as it conflicts with the notion of us choosing what happened in the past.
How come self-proclaimed deterministic non-fatalists are in fact fatalists when they deal with the past? I’d suggest perceiving time not as divided into separate categories like “stuff that has passed“ and “stuff that is about to happen“, but rather as one dimension where every dot is just as real as any other and where the manifestation of one particular dot restrictively determines the set of possible manifestations the other dots could embody. It is crucial to note that such a dot would describe the whole world in three spatial dimensions, while subsets of world states could still behave independently.
Perceiving time without an inherent “arrow” is not new to science and philosophy, but still, readers of this post will probably need a compelling reason why this view would be more goal-tracking. Considering Newcomb’s Problem, a reason can be given: Intuitively, the past seems much more “settled” to us than the future. But it seems to me that this notion is confounded, as we often know more about the past than we know about the future. This could tempt us to project this imbalance of knowledge onto the universe, such that we perceive the past as settled and unswayable in contrast to a shapeable future. However, such a conventional set of intuitions conflicts strongly with us picking only one box. These intuitions would tell us that we cannot affect the content of the box; it is already filled or empty, since it has been prepared in the now inaccessible past.
Changing the notion of time into one block would lead to “better” intuitions, because they would directly suggest choosing only box B, as this action is compatible only with the more desirable past. Therefore we might need to adapt our intuitions so that the universe looks normal again. To illustrate the ideas discussed above and put them into practice, I have constructed the following game:
The A,B-Game
You are confronted with Omega, a 100% correct predictor. In front of you, there are two buttons, A and B. You know that there are two kinds of agents. Agents with the gene G_A and agents with the gene G_B. Carriers of G_A are blessed with a life expectancy of 100 years whereas carriers of G_B die of cancer at the age of 40 on average. Suppose you are much younger than 40. Now Omega predicts that every agent who presses A is a carrier of G_A and every agent that presses B is a carrier of G_B. You can only press one button, which one should it be if you want to live for as long as possible?
People who prefer to live for a hundred years over forty years would press A. They would even pay a lot of money in order to be able to do so. Still, one might say that one cannot change or choose one’s genes. Here we need to be clear about which definition of choice we are using. Assuming the conventional one, I would agree that one could not choose one’s genes; but then, when getting dressed, one could not choose one’s jeans either, since the conventional understanding of choice requires an empty notion of free will, one that is neither random nor determined and thus not applicable. Once we use the definition I introduced above, we can say that we choose our jeans. Likewise, we can choose our genes in the A,B-Game. If we one-box in Newcomb’s Problem, we should also press A here, because the two problems are structurally identical (except for the labels “box” versus “gene”).
The notion of objective ambiguity of genes only stands if we believe in some sort of objective ambiguity about which choices will be made. When facing a correct predictor, those of us who believe in indeterministic objective ambiguity of choices have to bite the bullet that their genes would be objectively ambiguous. Such a model seems counterintuitive, but not contradictory. However, I don’t feel forced to adopt this indeterministic view.
Let us focus on the deterministic scenario again: In this case, our past already determined our choice, so there is only one way we will go and only one way we can go.
We don’t know whether we are determined to do A or B. By “choosing” the one action that is compatible only with the more desirable past, we are better off. Just as we don’t know in Newcomb’s Problem whether B is empty or not, we have to behave in a way such that it must have been filled already. From our perspective, with little knowledge about the past, our choice determines the manifestation of our map of the past. Apparently, this is exactly what we do when making choices about the future. Taking actions determines the manifestation of our map of the future. Although the future is already settled, we don’t know yet its exact manifestation. Therefore, from our perspective, it makes sense to act in ways that determine the most desirable futures. This does not automatically imply that some mysterious “change” is going to happen.
In both directions it feels like one would change the manifestation of other world states, but when we look more closely we cannot even spell out what that would mean. The word “change” only starts to become meaningful once we hypothetically compare our world with counterfactual ones (where we were not determined to do what we do in our world). In such a framework we could consistently claim that the content of box B “changes” depending on whether or not we choose only box B.
Screening off
Following this approach of determining one’s perception of the world, the question arises whether every change in perception is actually goal-tracking. We can ask ourselves whether an agent should avoid new information if she knew that the new information had negative news value. For instance, if an agent, being suspected of having lung cancer and awaiting the results of her lung biopsy, seeks actions that make more desirable past world states more likely, then she should figure out a way not to receive any mail, for instance by declaring an incorrect postal address. This naive approach obviously fails because of a lack of proper Bayesian updating. The action ”avoiding mail” screens off the desirable outcome, so that once we know about this action we don’t learn anything about the biopsy in the (very probable) case that we don’t receive any mail.
In the A,B-Game, this doesn’t apply, since we believe Omega’s prediction to be true when it says that A necessarily belongs to G_A and B to G_B. Generally, we can distinguish the cases by clarifying existing independencies: In the lung cancer case, where we simply don’t know better, we can assume that P(prevention|positive lab result)=P(prevention|negative lab result)=P(prevention). Hence, screening off applies. In the A,B-Game, we should believe that P(Press A|G_A)>P(Press A)=P(Press A|G_A or G_B). We obtain this relevant piece of information thanks to Omega’s forecasting power. Here, screening off does not apply.
Subsequently, one might object that the statement P(Press A|G_A)>P(Press A) leads to a conditional independence as well, at least in cases where not all the players that press A necessarily belong to G_A. Then you might be pressing A because of your reasoning R_1, which would screen off pressing A from G_A. A further objection could be that even if one could show a dependency between G_A and R_1, you might be choosing R_1 because of some meta-reasoning R_2 that again provides a reason not to press A. However, considering these objections more thoroughly, we realize that R_1 has to be congruent, or at least evenly associated (in G_A as well as in G_B), with pressing A. The same holds for R_2. If this weren’t the case, then we would be talking about another game, a game where we knew, for instance, that 90% of the G_A carriers choose button A (without thinking) because of the gene and 10% of the G_B carriers would choose button A because of some sort of evidential reasoning. Knowing this, choosing A out of evidential reasoning would be foolish, since we already know that only G_B carriers could do that. Once we know this, evidential reasoning would suggest not pressing A (unless B offers an even worse outcome). So these further objections fail as well, as they implicitly change the structure of the discussed problem. We can conclude that no screening off applies as long as an instance with forecasting power tells us that a particular action makes the desirable outcome likelier.
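To make the contrast explicit, here is a minimal Bayesian sketch; the numbers used for the mail example are made up purely for illustration:

```python
def posterior(prior: float, p_action_given_h: float, p_action_given_not_h: float) -> float:
    """P(hypothesis | action) via Bayes' theorem."""
    joint_h = prior * p_action_given_h
    joint_not_h = (1 - prior) * p_action_given_not_h
    return joint_h / (joint_h + joint_not_h)

# A,B-Game: Omega's forecasting power makes pressing A maximally informative about G_A.
print(posterior(prior=0.5, p_action_given_h=1.0, p_action_given_not_h=0.0))  # 1.0

# Mail avoidance: the action is chosen independently of the biopsy result, so
# P(avoid mail | cancer) = P(avoid mail | no cancer), the posterior equals the
# prior, and screening off applies.
print(posterior(prior=0.2, p_action_given_h=0.8, p_action_given_not_h=0.8))  # 0.2
```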
Now let’s have a look at an alteration of the A,B-Game in order to figure out whether screening-off might apply here.
A Weak Omega in The A,B-Game
Thinking about the A,B-Game, what happens if we decreased Omega’s forecasting power? Let’s assume now that Omega’s prediction is correct only in 90% of all cases. Should this fundamentally change our choice whether to press A or B because we only pressed A as a consequence of our reasoning?
To answer that, we need to be clear about why agents believe in Omega’s predictions. They believe in them because Omega's predictions have been correct so many times. This constitutes Omega’s strong forecasting power. As we saw above, screening off only applies if the predicting instance (Omega, or us reading a study) has no forecasting power at all.
In the A,B-Game, as well as in the original Newcomb’s Problem, we also have to take the predictions of a weaker Omega (with less forecasting power) into account, unless we face an Omega that happens to be right by chance (i.e. in 50% of the cases when considering a binary decision situation).
If, in the standard A,B-Game, we consider pressing A to be important, and if we were willing to spend a large amount of money in order to be able to do so (suppose button A would trigger a withdrawal from our bank account), then this amount should only shrink gradually as we decrease Omega’s forecasting power. The question now arises whether we would also have to “choose” the better genes in the medical version of Solomon’s Problem, and whether there might not be a fundamental difference between it and the original Newcomb’s Problem.
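As a rough sketch of this gradual shrinking, assume a 0.5 prior for each gene and treat Omega's accuracy as P(G_A | press A) = P(G_B | press B) (a modelling choice made here for illustration); the lifespans 100 and 40 are taken from the game:

```python
def expected_lifespan(press_a: bool, omega_accuracy: float) -> float:
    """Expected lifespan in years, assuming P(G_A | press A) = P(G_B | press B)
    = omega_accuracy and the stated lifespans of 100 (G_A) and 40 (G_B)."""
    p_good_gene = omega_accuracy if press_a else 1.0 - omega_accuracy
    return p_good_gene * 100 + (1 - p_good_gene) * 40

for acc in (1.0, 0.9, 0.6, 0.5):
    gain = expected_lifespan(True, acc) - expected_lifespan(False, acc)
    print(f"accuracy {acc}: pressing A gains {gain:.0f} expected years")
# The advantage of pressing A shrinks linearly with Omega's accuracy and
# vanishes at 0.5, mirroring how much one should be willing to pay for it.
```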
Newcomb’s versus Solomon’s Problem
In order to uphold this convenient distinction, people tell me that “you cannot change your genes”, though that’s a bad argument, since one could reply “according to your definition of change, you cannot change the content of box B either, yet you still choose one-boxing”. Furthermore, I quite often hear something like “in Newcomb’s Problem, we have to deal with Omega and that’s something completely different than just reading a study”. This – in contrast to the first – is a good point.
In order to accept the forecasting power of a 100% correct Omega, we already have to presume induction to be legitimate. Or else one could say: “Well, I see that Omega has been correct in 3^^^3 cases already, but why should I believe that it will be correct the next time?”. As sophisticated as this may sound, such an agent would lose terribly. So how do we deal with studies then? Do they have any forecasting power at all? It seems that this again depends on the setting of the game. Just as Omega’s forecasting power can be set, the forecasting power of a study can be properly defined as well. It can be described by assigning values to the following two variables: its descriptive power and its inductive power. To settle them, we have to answer two questions: 1. How correct is the study's description of the population? 2. How representative is the study population of the future population of agents acting in knowledge of the study? Or in other words, to what degree can one consider the study subjects to be in one’s reference class in order to make true predictions about one’s behaviour and the outcome of the game? Once this is clear, we can then infer the forecasting power. How much forecasting power does the study have? Let’s assume that the study we deal with is correct in what it describes. Those who wish can use a discounting factor. However, this is not important for subsequent arguments and would only make them more complicated.
Considering the inductive power, it gets trickier. Omega’s predictions are defined to be correct. In contrast, the study’s predictions have not been tested. Therefore we are quite uncertain about the study’s forecasting power. It would be 100% if and only if every factor involved were specified such that together they compel identical outcomes in the study and in our game. Due to induction, we do have reason to assume a positive value of forecasting power. To identify its specific value (which discounts the forecasting power according to the specified conditions), we would need to settle every single factor that might be involved. So let’s keep it simple by assuming a 100% forecasting power. As long as there is a positive value of forecasting power, the basic point of the subsequent arguments (which presume a 100% forecasting power) will also hold when discounted.
Thinking about the inductive power of the study, there still is one thing that we need to specify: It is not clear what exactly previous subjects of the study knew.
For instance, in a case A), the subjects of the study knew nothing about the tendency of CGTA-carriers to chew gum. First, their genome was analyzed; then they had to decide whether or not to chew gum. In such a case, the subjects' knowledge is quite different from that of those who play the medical version of Solomon’s Problem. Therefore screening off applies. But does it apply to the same extent as in the avoiding-bad-news example mentioned above? That seems to be the case. In the avoiding-bad-news example, we assumed that there is no connection between the variables “lung cancer“ and “avoiding mail“. In Solomon’s Problem such an independence can be settled as well. Then the variables “having the gene CGTA“ and “not chewing gum because of evidential reasoning“ are also assumed to be independent. Total screening off applies. For an evidential reasoner who knows that much, choosing not to chew gum would then be as irrational as declaring an incorrect postal address when awaiting biopsy results.
Now let us consider a case B) where the subjects were introduced to the game just as we were. Then they would know about the tendency of CGTA-carriers to chew gum, and they themselves might have used evidential reasoning. In this scenario, screening off does not apply. This is why not chewing gum would be the winning strategy.
One might say that of course the study-subjects did not know of anything and that we should assume case A) a priori. I only partially agree with that. The screening off can already be weakened if, for instance, the subjects knew why the study was conducted. Maybe there was anecdotal evidence about a hereditary tendency to chew gum, which was about to be confirmed properly.
Without further clarification, one can plausibly assume a probability distribution over various intermediate cases between A and B where screening off becomes gradually fainter as we get closer to B. Of course there might also be cases where anecdotal evidence leads astray, but in order to cancel out the argument above, anecdotal evidence would need to be equivalent, in expectation, to knowing nothing at all. But since it seems to be better (even if not by much) than knowing nothing, it is not a priori clear that we have to assume case A right away.
So when compiling a medical version of Solomon’s Problem, it is important to be very clear about what the subjects of the study were aware of.
What about Newcomb’s Soda?
After exploring screening off and possible differences between Newcomb’s Problem and Solomon’s Problem (or rather between Omega and a study), let’s investigate those questions in another game. My favourite of all Newcomblike problems is called Newcomb’s Soda and was introduced in Yudkowsky (2010). Comparing Newcomb’s Soda with Solomon’s Problem, Yudkowsky writes:
“Newcomb’s Soda has the same structure as Solomon’s Problem, except that instead of the outcome stemming from genes you possessed since birth, the outcome stems from a soda you will drink shortly. Both factors are in no way affected by your action nor by your decision, but your action provides evidence about which genetic allele you inherited or which soda you drank.”
Is there any relevant difference in structure between the two games?
In the previous section, we saw that once we settle that the study-subjects in Solomon’s Problem don’t know of any connection between the gene and chewing gum, screening off applies and one has good reasons to chew gum. Likewise, the screening off only applies in Newcomb’s Soda if the subjects of the clinical test are completely unaware of any connection between the sodas and the ice creams. But is this really the case? Yudkowsky introduces the game as one big clinical test in which you are participating as a subject:
“You know that you will shortly be administered one of two sodas in a double-blind clinical test. After drinking your assigned soda, you will enter a room in which you find a chocolate ice cream and a vanilla ice cream. The first soda produces a strong but entirely subconscious desire for chocolate ice cream, and the second soda produces a strong subconscious desire for vanilla ice cream.”
This does not sound like previous subjects had no information about a connection between the sodas and the ice creams. Maybe you, and you alone, received those specific insights. If this were the case, it would clearly have to be mentioned in the game’s definition, since this factor is crucial when it comes to decision-making. Considering a game where the agent herself is a study-subject, without further specification she wouldn’t by default expect that other subjects knew less about the game than she did. Therefore, let’s assume in the following that all the subjects in the clinical test knew that the sodas cause a subconscious desire for a specific flavor of ice cream.
Newcomb’s Soda in four variations
Let “C” be the causal approach which states that one has to choose vanilla ice cream in Newcomb’s Soda. C only takes the $1,000 of the vanilla ice cream into account since one still can change the variable “ice cream”, whereas the variable “soda” is already settled. Let “E” be the evidential approach which suggests that one has to choose chocolate or vanilla ice cream in Newcomb’s Soda – depending on the probabilities specified. E takes both the $1,000 of the vanilla ice cream and the $1,000,000 of the chocolate soda into account. In that case, one argument can outweigh the other.
Let’s compile a series of examples. We denote “Ch” for chocolate, “V” for vanilla, “S” for soda and “I” for ice cream. In all versions Ch-S will receive $1,000,000 and V-I will receive $1,000 and P(Ch-S)=P(V-S)=0.5. Furthermore we settle that P(Ch-I|Ch-S)=P(V-I|V-S) and call this term “p” in every version so we don’t vary unnecessarily many parameters. As we are going to deal with large numbers, let’s assume a linear monetary value utility function.
Version 1: Let us assume a case where the sodas are dosed homeopathically, so that no effect on the choice of ice creams can be observed. Ch-S and V-S choose from Ch-I and V-I randomly, so that P(V-I|Ch-S)=P(Ch-I|V-S)=0.5, i.e. p=0.5. Both C and E choose V-I and win 0.5*$1,001,000 + 0.5*$1,000 = $501,000 in expectation. C only considers the ice cream whereas E considers the soda as well, though in this case the soda doesn’t change anything, as the Ch-S are equally distributed over Ch-I and V-I.
Version 2: Here p=0.999999. Since P(Ch-S)=P(V-S)=0.5, one Ch-I chooser in a million will have originated from V-S, whereas one V-I chooser in a million will have originated from Ch-S. The other 999,999 Ch-I choosers will have determined the desired past, Ch-S, through their choice of Ch-I. So if we participated in this game a million times and followed E, which suggests choosing Ch-I each time, we could expect to win 999,999*$1,000,000 = $999,999,000,000 overall. This is different from following C’s advice. As C tells us that we cannot affect which soda we have drunk, we would choose V-I each time and could expect to win 1,000,000*$1,000 + $1,000,000 = $1,001,000,000 in total. The second outcome, which C is responsible for, is about 999 times worse than the first (which was suggested by E). In this version, E clearly outperforms C in helping us make the most money.
Version 3: Now we have p=1. This version is equivalent to the standard version of the A,B-Game. What would C do? It seems that C ought to maintain its view that we cannot affect the soda. Therefore, only considering the ice cream-part of the outcome, C will suggest choosing V-I. This seems to be absurd: C leaves us disappointed with $1,000, whereas E makes us millionaires every single time.
A C-defender might say: “Wait! Now you have changed the game. Now we are dealing with a probability of 1!” The response would be: “Interesting, I can make p get as close to 1 as I want as long as it isn’t 1, and the rules of the game and my conclusions would still remain. For instance, we can think of a number like 0.999…(100^^^^^100 nines in a row). So tell me why exactly the probability change of 0.000…(100^^^^^100 -1 zeros in a row)1 should make you switch to Ch-I? But wait, why would you – as a defender of C – even consider Ch-I since it cannot affect your soda while it definitely prevents you from winning the $1,000 of the ice cream?”
The previous versions tried to exemplify why taking both arguments (the $1,000 and the $1,000,000) into account makes you better off at one edge of the probability measure, whereas at the other edge, C and E produce the same outcomes. With a simple equation we can figure out for which p E would be indifferent between Ch-I and V-I: p*$1,000,000 = (1-p)*$1,000,000 + $1,000, which gives p=0.5005. So for 0.5005<p<=1, E does better than C, and for 0<=p<=0.5005, E and C behave alike. Finally, let us consider the original version:
Version 4: Here we deal with p=0.9. From the above we could already deduce that deciding according to E makes us better off, but let’s have a closer look for the sake of completeness: In expectation, choosing V-I makes us win 0.1*$1,000,000 + $1,000 = $101,000, whereas Ch-I leaves us with 0.9*$1,000,000 = $900,000, almost 9 times more. After the insights above, it shouldn’t surprise us too much that E clearly does better than C in the original version of Newcomb’s Soda as well.
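As a minimal sketch, under the assumptions above (P(Ch-S) = P(V-S) = 0.5, p = P(Ch-I|Ch-S) = P(V-I|V-S), linear utility), the expected values of all four versions and the indifference point can be reproduced as follows:

```python
def ev(choice: str, p: float) -> float:
    """Expected payoff in dollars for choosing chocolate ('Ch-I') or vanilla
    ('V-I') ice cream, with P(Ch-S) = P(V-S) = 0.5 and
    p = P(Ch-I | Ch-S) = P(V-I | V-S)."""
    # Treating the choice as evidence: P(Ch-S | Ch-I) = p and P(Ch-S | V-I) = 1 - p.
    p_million = p if choice == "Ch-I" else 1 - p
    bonus = 1_000 if choice == "V-I" else 0
    return p_million * 1_000_000 + bonus

for p in (0.5, 0.999999, 1.0, 0.9):  # versions 1 to 4
    print(p, ev("Ch-I", p), ev("V-I", p))

# Indifference point: p * 1,000,000 = (1 - p) * 1,000,000 + 1,000
print(1_001_000 / 2_000_000)  # 0.5005
```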
The variations above illustrate that C has to choose V-I even if 99.9999% of Ch-S choose Ch-I and 99.9999% of V-S choose V-I. If we played the game a million times, in expectation Ch-I would win the million 999,999 times and V-I just once. Can we really be indifferent about that? Wasn’t it all about winning and losing? And who is winning here, and who is losing?
Newcomb’s Soda and Precommitments
Another excerpt from Yudkowsky (2010):
“An evidential agent would rather precommit to eating vanilla ice cream than precommit to eating chocolate, because such a precommitment made in advance of drinking the soda is not evidence about which soda will be assigned.”
At first sight this seems intuitive. But if we look at the probabilities more closely, a problem suddenly arises: Let’s consider an agent who, before a standard game (p=0.9) starts, precommits to a decision (let’s assume a 100% persistent mechanism). Let’s assume that he precommits – as suggested above – to choose V-I. What credence should he assign to P(Ch-S|V-I)? Is it 0.5, as if he didn’t precommit at all, or does something change? Basically, adding precommitments to the equation inhibits the effect of the sodas on the agent’s decision.

Again, we have to be clear about which agents are affected by this newly introduced variable. If we were the only ones who can precommit 100% persistently, then our game fundamentally differs from the previous subjects’ one. If they didn’t precommit, we couldn’t presuppose any forecasting power anymore, because the previous subjects decided according to the soda’s effect, whereas we now decide independently of it. In this case, E would suggest precommitting to V-I. However, this would constitute an entirely new game without any forecasting power. If all the agents of the study make persistent precommitments, then the forecasting power holds; the game doesn’t change fundamentally. Hence, the way previous subjects behaved remains crucial to our decision-making.

Let’s now imagine that we were playing this game a million times, each time irrevocably precommitting to V-I. In this case, if we consider ourselves to be sampled randomly among the V-I choosers, we can expect to originate from V-S 900,000 times. As p approaches 1, we see that it becomes extremely unlikely to originate from Ch-S once we precommit to V-I. So a rational agent following E should precommit to Ch-I in advance of drinking the soda. Since E suggests Ch-I both during and before the game, this example doesn’t show that E is dynamically inconsistent.
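The figure of 900,000 out of a million is just Bayes' theorem again. As a minimal check, assuming as above that the previous subjects' behaviour retains its forecasting power when everyone precommits:

```python
def p_vanilla_soda_given_vanilla_ice(p: float) -> float:
    """P(V-S | V-I) with P(V-S) = P(Ch-S) = 0.5 and p = P(V-I | V-S) = P(Ch-I | Ch-S)."""
    return 0.5 * p / (0.5 * p + 0.5 * (1 - p))

print(1_000_000 * p_vanilla_soda_given_vanilla_ice(0.9))  # ~900,000 of a million V-I choosers
print(p_vanilla_soda_given_vanilla_ice(0.999999))         # near 1: precommitting to V-I almost rules out Ch-S
```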
In the other game, where only we precommit persistently and the previous subjects don’t, picking V-I doesn’t make E dynamically inconsistent, as we would face another decision situation in which no forecasting power applies. Of course, we can also imagine intermediate cases, for instance one where we make precommitments and the previous subjects were able to make them as well, but we don’t know whether they did. The more uncertain we get about their precommitments, the more we approach the case where only we precommit, while the forecasting power gradually weakens. Those cases are more complicated, but they do not show a dynamic inconsistency of E either.
The tickle defense in Newcomblike problems
In the last section I want to have a brief look at the tickle defense, which is sometimes used to defend evidential reasoning by offering a less controversial output. For instance, it states that in the medical version of Solomon’s Problem an evidential reasoner should chew gum, since she can rule out having the gene as long as she doesn’t feel an urge to chew gum. So chewing gum doesn’t make it likelier to have the gene since she already has ruled it out.
I believe that this argument fails since it changes the game. Suddenly, the gene doesn’t cause you to “choose chewing gum” anymore but to “feel an urge to choose chewing gum”. Though I admit, in such a game a conditional independence would screen off the action “not chewing gum” from “not having the gene” – no matter what the previous subjects of the study knew. This is why it would be more attractive to chew gum. However, I don’t see why this case should matter to us. In the original medical version of Solomon’s Problem we are dealing with another game, one where this particular kind of screening off does not apply. As the gene causes one to “choose chewing gum”, we can only rule it out by not doing so. However, this conclusion has to be treated with caution. For one thing, depending on the numbers, one can only diminish the probability of the undesirable event of having the gene, not rule it out completely; for another, the diminishment only works if the previous subjects were not ignorant of a dependence between the gene and chewing gum – at least in expectation. Therefore the tickle defense only trivially applies to a special version of the medical Solomon’s Problem and fails to persuade proper evidential reasoners to do anything differently in the standard version. Depending on the specification of the previous subjects’ knowledge, an evidential reasoner would still chew or not chew gum.
Comments
comment by Shmi (shminux) · 2013-12-10T00:03:20.263Z
Probably should be in Discussion.
comment by SilentCal · 2013-12-10T22:39:12.394Z
My thoughts: 1) The failure of CDT is its modeling of the decision process as ineffable 'free will' upon which things in the past cannot depend. Deviation from CDT is justified only when such dependencies exist. 2) The assumption that your decision is predictable requires the existence of such a dependency. 3) If we postulate that no such dependency exists, either CDT wins or our postulates are contradictory.
In particular, in Newcomb's Soda, the assumptions that the soda flavor predicts the ice-cream flavor with high probability and that the assignment of soda (and the choice of subjects) is uncorrelated with subjects' decision theory require that we are exceptional in employing decision theory. If lots of subjects were using CDT or EDT, they would all be choosing ice cream independently of their soda, and we wouldn't see that correlation (except maybe by coincidence). So it doesn't have to be stated in the problem that other subjects aren't using evidential reasoning--it can be seen plainly from the axioms! To assume that they are reasoning as you are is to assume a contradiction.
The AB game is confusing because it flirts with contradiction. You act as if you're free to choose A or B according to your decision theory while simultaneously assuming that Omega can predict your choice perfectly. But in fact, the only way Omega can predict perfectly is by somehow interacting with your decision theory. He can either administer the game only to people whose decision theory matches their genes, or manipulate people's answers, or manipulate their genes. In the first case, EDTers will get a free gene test if they have G_A, but will not be miraculously healed if they have G_B. In the second case, you'll find yourself pressing 'B' if you have G_B no matter what you try to precommit to. In the third case only, you have legitimate reason to commit to 'A', because your predetermined decision has causal influence on your genes.
You might try to counter with the case where Omega ensures that all children who are born will answer in a way consistent with their genes, and both things are determined at conception. But if this is the case, then if you have G_B, you can't commit yourself to EDT no matter how hard you think about decision theory. This follows from the assumptions, and the only reason to think otherwise is if you still count free will among your premises.
↑ comment by pallas · 2013-12-11T00:56:32.109Z
If lots of subjects were using CDT or EDT, they would all be choosing ice cream independently of their soda, and we wouldn't see that correlation (except maybe by coincidence). So it doesn't have to be stated in the problem that other subjects aren't using evidential reasoning--it can be seen plainly from the axioms! To assume that they are reasoning as you are is to assume a contradiction.
If lots of subjects were using CDT or EDT, they would be choosing ice cream independently of their soda iff the soda has no influence on whether they argue according to CDT or EDT. It is no logical contradiction to say that the sodas might affect which decision theoretic intuitions a subject is going to have. As long as we don't specify what this subconscious desire for ice cream exactly means, it is thinkable that the sodas imperceptibly affect our decision algorithm. In such a case, most of the V-I people (the fraction originating from V-S) would be attracted to causal reasoning, whereas most of the Ch-I people (the fraction originating from Ch-S) would find the evidential approach compelling.
One can say now that the sodas "obviously" do not affect one's decision theory, but this clearly had to be pointed out when introducing a "subconscious desire."
I agree that once it is specified that we are the only agents using decision theory, screening off applies. But the game is defined in a way that we are subjects of a study where all the subjects are rewarded with money:
(an excerpt of the definition in Yudkowsky (2010))
It so happens that all participants in the study who test the Chocolate Soda are rewarded with a million dollars after the study is over, while participants in the study who test the Vanilla Soda receive nothing. But subjects who actually eat vanilla ice cream receive an additional thousand dollars, while subjects who actually eat chocolate ice cream receive no additional payment.
After reading this, it is not a priori clear to me that I would be the only subject who knows about the money at stake. To the contrary, as one of many subjects I assume that I know as much as other subjects know about the setting. Once other subjects know about the money they probably also think about whether choosing Ch-I or V-I produces the better outcome. It seems to me that all the agents base their decision on some sort of intuition about which would be the correct decisional algorithm.
To sum up, I tend to assume that other agents play a decision theoretic game as well and that the soda might affect their decision theoretic intuitions. Even if we assigned a low prior to the event that the sodas affect the subjects' decision algorithms, the derived reasoning would not be invalid, but its power would shrink in proportion to the prior. Finally, it is definitely not a contradictory statement to say that the soda affects how the subjects decide and that the subjects use CDT or EDT.
↑ comment by SilentCal · 2013-12-11T21:49:38.110Z
By 'using [CDT|EDT]', I meant 'adhering to a belief in [CDT|EDT] that predates drinking the soda.' If you're the only one (or one of the only ones) doing this, screening off would apply, right? But if others are doing this, there would be fewer correct predictions. And if you aren't doing this, you'll switch to CDT if you get CS, making your reasoning today for naught.
(Doing decision theory logically enough to overcome your subconscious desires would have the same effect as sticking to your pre-soda beliefs--either way you get an ice cream choice independent of your soda)
comment by Viliam_Bur · 2013-12-10T09:47:01.282Z
Too many complex ideas in one article. I think it would be better to treat them individually. If some former idea is invalid, then it does not make sense to discuss latter ideas which depend on the former one.
I don't get the idea of the A,B-Game.
You are confronted with Omega, a 100% correct predictor. In front of you, there are two buttons, A and B. You know that there are two kinds of agents. Agents with the gene G_A and agents with the gene G_B. Carriers of G_A are blessed with a life expectancy of 100 years whereas carriers of G_B die of cancer at the age of 40 on average. Suppose you are much younger than 40. Now Omega predicts that every agent who presses A is a carrier of G_A and every agent that presses B is a carrier of G_B.
I try imagining how the gene G_B could make people consciously choose to shorten their lives (all other things being equal). Maybe it is a gene for suicide, depression, mental retardation, or perhaps dyslexia (the person presses the wrong button). But it wouldn't work for a random gene. It only works for a gene that specifically fits this scenario.
It is important to note that this Omega scenario presumes different things than the typical one used in the Newcomb’s Problem. In the Newcomb's case, we assume that Omega is able to predict one's thought processes and decisions. That's a very strong and unusual situation, but I can imagine it. However, in the A,B-Game we assume that a specific gene makes people presented with two options choose the worse one -- please note that I have not mentioned Omega in this sentence yet! So the claim is not that Omega is able to predict something, but that the gene can determine something, even in absence of the Omega. It's no longer about Omega's superior human-predicting powers; the Omega is there merely to explain the powers of the gene.
So I am not sure whether this says anything different from "if you choose not to commit suicide, you caused yourself to not have a certainly-suicide-causing gene". And even in that case, I think it has the direction of the causality wrong. By choosing not to commit suicide, I observe myself as not having a certainly-suicide-causing gene. We could call it a survivor bias or a non-suicidal-anthropic principle. It's kinda similar to the quantum suicide experiment, just instead of the quantum event I am using my genome.
If this argument is incorrect (and to me it seems it is), how much does the rest of the article fall apart?
↑ comment by pallas · 2013-12-10T12:02:45.674Z
Thanks for the comment!
However, in the A,B-Game we assume that a specific gene makes people presented with two options choose the worse one -- please note that I have not mentioned Omega in this sentence yet! So the claim is not that Omega is able to predict something, but that the gene can determine something, even in absence of the Omega. It's no longer about Omega's superior human-predicting powers; the Omega is there merely to explain the powers of the gene.
I think there might be a misunderstanding. Although I don't believe it to be impossible that a gene causes you to think in specific ways, in the setting of the game such a mechanism is not required. You can also imagine a game where Omega predicts that those who pick a carrot out of a basket of vegetables are the ones that will die shortly of a heart attack. As long as we believe in Omega's forecasting power, its statements are relevant even if we cannot point at any underlying causal mechanisms. As long as the predicted situation is logically possible (here, all agents that pick the carrot die), we don't need to reject Omega's prediction just because such a compilation of events would be unlikely. Though we might call Omega's predictions into question. Still, as long as we believe in its forecasting power (after such an update), we have to take the prediction into account. Hence, the A,B-Game holds even if you don't know of any causal connection between the genes and the behaviour; we only need a credible Omega.
↑ comment by Richard_Kennaway · 2013-12-10T15:31:53.506Z
Although I don't believe it to be impossible that a gene causes you to think in specific ways, in the setting of the game such a mechanism is not required.
It is required. If Omega is making true statements, they are (leaving aside those cases where someone is made aware of the prediction before choosing) true independently of Omega making them. That means that everyone with gene A makes choice A and everyone with gene B makes choice B. This strong entanglement implies the existence of some sort of causal connection, whether or not Omega exists.
More generally, I think that every one of those problems would be made clear by exhibiting the causal relationships that are being presumed to hold. Here is my attempt.
For the School Mark problem, the causal diagram I obtain from the description is one of these:
pupil's character ----> teacher's prediction ----> final mark
        |
        |
        V
    studying ----> exam performance
or
pupil's character ----> teacher's prediction
        |
        |
        V
    studying ----> exam performance ----> final mark
For the first of these, the teacher has waived the requirement of actually sitting the exam, and the student needn't bother. In the second, the pupil will not get the marks except by studying for and taking the exam. See also the decision problem I describe at the end of this comment.
For Newcomb we have:
person's qualities --> Omega's prediction --> contents of boxes
        |                                             |
        |                                             |
        V                                             V
person's decision --------------------------------> payoff
(ETA: the second down arrow should go from "contents of boxes" to "payoff". Apparently Markdown's code mode isn't as code-modey as I expected.)
The hypotheses prevent us from performing surgery on this graph to model do(person's decision). The do() operator requires deleting all in-edges to the node operated on, making it causally independent of all of its non-descendants in the graph. The hypotheses of Newcomb stipulate that this cannot be done: every consideration you could possibly employ in making a decision is assumed to be already present in the personal qualities that Omega's prediction is based on.
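(A minimal sketch of this surgery, with the graph encoded by each node's parents; the encoding is chosen here purely for illustration.)

```python
# Parents of each node in the Newcomb graph drawn above.
parents = {
    "person's qualities": [],
    "Omega's prediction": ["person's qualities"],
    "contents of boxes": ["Omega's prediction"],
    "person's decision": ["person's qualities"],
    "payoff": ["person's decision", "contents of boxes"],
}

def do(graph: dict, node: str) -> dict:
    """Causal surgery: delete every in-edge of `node`, making it exogenous."""
    surgered = {k: list(v) for k, v in graph.items()}
    surgered[node] = []
    return surgered

# After do(person's decision), "person's qualities" is no longer a parent of the
# decision -- exactly the operation that the hypotheses of Newcomb's Problem forbid.
print(do(parents, "person's decision")["person's decision"])  # []
```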
A-B:
Unknown factors ---> Gene ---> Lifespan
       |
       |
       V
    Choice
or:
Gene ---> Lifespan
  |
  |
  V
Choice
or both together.
Here, it may be unfortunate to discover oneself making choice B, but by the hypotheses of this problem, you have no choice. As with Newcomb, causal surgery is excluded by the problem. To the extent that your choice is causally independent of the given arrow, to that extent you can ignore lifespan in making your choice -- indeed, it is to that extent that you have a choice.
For Solomon's Problem (which, despite the great length of the article, you didn't set out) the diagram is:
charisma ----> overthrow
    |
    |
    V
commit adultery
This implies that while it may be unfortunate for Solomon to discover adulterous desires, he will not make himself worse off by acting on them. This differs from A-B because we are given some causal mechanisms, and know that they are not deterministic: an uncharismatic leader still has a choice to make about adultery, and to the extent that it is causally independent of the lack of charisma, it can be made, without regard to the likelihood of overthrow.
Similarly CGTA:
CGTA gene ----> throat abcesses
    |
    |
    V
chew gum
and the variant:
CGTA gene ----> throat abcesses
    |                  ^
    |                  |
    V                  |
chew gum --------------/
(ETA: the arrow from "chew gun" to "throat abcesses" didn't come out very well.)
in which chewing gum is protective against throat abscesses, and positively to be recommended.
Newcomb's Soda:
soda assignment ---> $1M
       |
       |
       V
choice of ice cream ---> $1K
Here, your inclination to choose a flavour of ice-cream is informative about the $1M prize, but the causal mechanism is limited to experiencing a preference. If you would prefer $1K to a chocolate ice-cream then you can safely choose vanilla.
Finally, here's another decision problem I thought of. Unlike all of the above, it requires no sci-fi hypotheses, real-world examples exist everywhere, and correctly solving them is an important practical skill.
I want to catch a train in half an hour. I judge that this is enough time to get to the station, buy a ticket, and board the train. Based on a large number of similar experiences in the past, I can confidently predict that I will catch the train. Since I know I will catch the train, should I actually do anything to catch the train?
The general form of this problem can be applied to many others. I predict that I'm going to ace an upcoming exam. Should I study? I predict I'll win an upcoming tennis match. Should I train for it? I predict I'll complete a piece of contract work on time. Should I work on it? I predict that I will post this. Should I click the "Comment" button?
↑ comment by Beluga · 2013-12-11T23:13:56.646Z
For the School Mark problem, the causal diagram I obtain from the description is one of these:
diagram
or
diagram
For the first of these, the teacher has waived the requirement of actually sitting the exam, and the student needn't bother. In the second, the pupil will not get the marks except by studying for and taking the exam. See also the decision problem I describe at the end of this comment.
I think it's clear that Pallas had the first diagram in mind, and his point was exactly that the rational thing to do is to study despite the fact that the mark has already been written down. I agree with this.
Think of the following three scenarios:
- A: No prediction is made and the final grade is determined by the exam performance.
- B: A perfect prediction is made and the final grade is determined by the exam performance.
- C: A perfect prediction is made and the final grade is based on the prediction.
Clearly, in scenario A the student should study. You are saying that in scenario C, the rational thing to do is not studying. Therefore, you think that the rational decision differs between either A and B, or between B and C. Going from A to B, why should the existence of someone who predicts your decision (without you knowing the prediction!) affect which decision the rational one is? That the final mark is the same in B and C follows from the very definition of a "perfect prediction". Since each possible decision gives the same final mark in B and C, why should the rational decision differ?
In all three scenarios, the mapping from the set of possible decisions to the set of possible outcomes is identical -- and this mapping is arguably all you need to know in order to make the correct decision. ETA: "Possible" here means "subjectively seen as possible".
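A minimal Python sketch of this point; the outcomes ("good"/"poor") and the label "slack off" are placeholders of my own, not anything stated in the thread:

    def final_mark(decision, scenario):
        # `scenario` is deliberately unused: in A the exam determines the mark,
        # in B a perfect prediction exists but the exam still determines the mark,
        # and in C the mark is copied from a perfect prediction of the exam.
        # Either way, the mark tracks what the decision leads to on the exam.
        return "good" if decision == "study" else "poor"

    for scenario in "ABC":
        print(scenario, {d: final_mark(d, scenario) for d in ("study", "slack off")})

A decision rule that looks only at this mapping has to answer identically in all three scenarios.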
By deciding whether or not to study, you can, from your subjective point of view, "choose" whether you were determined to study or not.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2013-12-12T08:29:59.913Z · LW(p) · GW(p)
My first diagram is scenario C and my second is scenario B. In the first diagram there is no (ETA: causal) dependence of the final mark on exam performance. I think pallas' intended scenario was more likely to be B: the mark does (ETA: causally) depend on exam performance and has been predicted. Since in B the mark depends on exam performance, it is necessary to study and take the exam.
In the real world, where teachers do not possess Omega's magic powers, they may very well be able to predict pretty much how their students will do. For that matter, the students themselves can predict how they will do, which transforms the problem into the very ordinary, non-magical one I gave at the end of my comment. If you know how well you will do on the exam, and want to do well on it, should you (i.e. is it the correct decision to) put in the work? Or, for another example of topical interest, consider the effects of genes on character.
Unless you draw out the causal diagrams, Omega is just magic: an imaginary phenomenon with no moving parts. As has been observed by someone before on LessWrong, any decision theory can be defeated by suitably crafted magic: Omega fills the boxes, or whatever, in the opposite way to whatever your decision theory will conclude. Problems of that sort offer little insight into decision theory.
↑ comment by Viliam_Bur · 2013-12-10T13:00:15.509Z · LW(p) · GW(p)
You can also imagine a game where Omega predicts that those who pick a carrot out of a basket of vegetables are the ones that will die shortly of a heart attack.
Those who pick a carrot after hearing Omega's prediction, or without hearing the prediction? Those are two very different situations, and I am not sure which one you meant.
If some people pick the carrot even after hearing Omega's prediction and then die of a heart attack, there must be something very special about them. They are suicidal, or strongly believe that Omega is wrong and want to prove it, or are confused in some other way.
If people who pick the carrot without hearing Omega's prediction die, that does not mean they would also have picked the carrot had they been warned in advance. So saying "we should also press A here" provides no actionable advice about how people should behave, because it only works for people who don't know about it.
Replies from: pallas↑ comment by pallas · 2013-12-10T13:54:00.852Z · LW(p) · GW(p)
Those who pick a carrot after hearing Omega's prediction, or without hearing the prediction? Those are two very different situations, and I am not sure which one you meant.
That's a good point. I agree with you that it is crucial to keep those two situations apart. This is exactly what I was trying to address when considering Newcomb's Problem and Newcomb's Soda. What do the agents (previous study-subjects) know? It seems to me that the games aren't defined precisely enough.
Once we specify a game in such a way that all the agents hear Omega's prediction (as in Newcomb's Problem), the prediction provides actionable advice, since all the agents belong to the same reference class. If we, and we alone, know about a prediction (whereas the other agents don't), the situation is different and the actionable advice is no longer provided, at least not to the same extent.
When I propose a game where Omega predicts whether people pick carrots, and I don't specify that the prediction concerns only those who are unaware of it, then I would not assume prima facie that it applies only to those who don't know about it. Without further specification, I would assume that it applies to "people", which is a superset of "people who know of the prediction".
↑ comment by Will_Sawin · 2013-12-11T05:52:35.054Z · LW(p) · GW(p)
We believe in the forecasting power, but we are uncertain as to what mechanism that forecasting power is taking advantage of to predict the world.
Analogously, I know Omega will defeat me at chess, but I do not know which opening move he will play.
In this case, the TDT decision depends critically on which causal mechanism underlies that forecasting power. Since we do not know, we will have to apply some principles for decision under uncertainty, which will depend on the payoffs, and on other features of the situation. The EDT decision does not. My intuitions and, I believe, the intuitions of many other commenters here, are much closer to the TDT approach than the EDT approach. Thus your examples are not very helpful to us - they lump things we would rather split, because our decisions in the sort of situation you described would depend in a fine-grained way on what causal explanations we found most plausible.
Suppose it is well-known that the wealthy in your country are more likely to adopt a certain distinctive manner of speaking due to the mysterious HavingRichParents gene. If you desire money, could you choose to have this gene by training yourself to speak in this way?
Replies from: pallas↑ comment by pallas · 2013-12-11T09:39:45.474Z · LW(p) · GW(p)
I agree that it is challenging to assign forecasting power to a study, as we're uncertain about lots of background conditions. There is forecasting power to the degree that the set A of all variables involved with previous subjects allows for predictions about the set A' of variables involved in our case. But when we deal with an Omega who is defined to make true predictions, then we need to take this forecasting power into account, no matter what the underlying mechanism is. I mean, what if Omega in Newcomb's Problem were defined to make true predictions and you didn't know anything about the underlying mechanism? Wouldn't you one-box after all? Let's call Omega's prediction P and the future event F. Once Omega's predictions are defined to be true, we can write down the following logical equivalences: P(1-boxing) <--> F(1-boxing) and P(2-boxing) <--> F(2-boxing). Given these conditions, it is impossible to 2-box when box B is filled with a million dollars (you could also formulate it in terms of probabilities, where such an impossible event would have probability 0). I admit that we have to be cautious when we deal with instances that are not defined to make true predictions.
Suppose it is well-known that the wealthy in your country are more likely to adopt a certain distinctive manner of speaking due to the mysterious HavingRichParents gene. If you desire money, could you choose to have this gene by training yourself to speak in this way?
My answer depends on the specific set-up. What exactly do we mean by "it is well-known"? It doesn't seem to be a study that would describe the set A of all factors involved, which we then could use to derive the set A' that applies to our own case. Unless we define "it is well-known" as an instance that allows for predictions in the direction A --> A', I see little reason to assume forecasting power. Without forecasting power, screening off applies and it would be foolish to train the distinctive manner of speaking. If we specified the game in a way that there is forecasting power at work (or at least we had reason to believe so), then, depending on your definition of choice (I prefer one that is devoid of free will), you can or cannot choose the gene. These kinds of thoughts are listed here or in the section "Newcomb's Problem's Problem of Free Will" in the post.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2013-12-12T04:26:54.451Z · LW(p) · GW(p)
Suppose I am deciding now whether to one-box or two-box on the problem. That's a reasonable supposition, because I am deciding now whether to one-box or two-box. There are a couple possibilities for what Omega could be doing:
- Omega observes my brain, and predicts what I am going to do accurately.
- Omega makes an inaccurate prediction, probabilistically independent from my behavior.
- Omega modifies my brain into a being it knows will one-box or will two-box, then makes the corresponding prediction.
If Omega uses predictive methods that aren't 100% effective, I'll treat it as combination of case 1 and 2. If Omega uses very powerful mind-influencing technology that isn't 100% effective, I'll treat it as a combination of case 2 and 3.
In case 1, I should decide now to one-box. In case 2, I should decide now to two-box. In case 3, it doesn't matter what I decide now.
If Omega is 100% accurate, I know for certain I am in case 1 or case 3. So I should definitely one-box. This is true even if case 1 is vanishingly unlikely.
If Omega is even 99.9% accurate, then I am in some combination of case 1, case 2, or case 3. Whether I should decide now to one-box or two-box depends on the relative probability of case 1 and case 2, ignoring case 3. So even if Omega is very accurate, ensuring that the probability of case 2 is small, if the probability of case 1 is even smaller, I should decide now to two-box.
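For concreteness, a rough Python sketch of this case analysis. The case probabilities p1, p2, p3 and the auxiliary probabilities r and q are made-up inputs for illustration; only the $1M/$1K payoffs come from the problem.

    def expected_value(decision, p1, p2, p3, r=0.5, q=0.5):
        M, K = 1_000_000, 1_000
        # Case 1: Omega reads my brain, so box B is full iff I one-box now.
        ev1 = M if decision == "one-box" else K
        # Case 2: the prediction is independent of my decision; box B happens
        # to be full with some probability r, so two-boxing adds K either way.
        ev2 = r * M if decision == "one-box" else r * M + K
        # Case 3: Omega rewrites my brain, so my current decision is irrelevant;
        # the same term appears for both decisions and cancels out.
        ev3 = q * M + (1 - q) * K
        return p1 * ev1 + p2 * ev2 + p3 * ev3

    for p1, p2 in [(0.9, 0.05), (0.0001, 0.9)]:
        p3 = 1 - p1 - p2
        print(p1, p2, expected_value("one-box", p1, p2, p3),
              expected_value("two-box", p1, p2, p3))

One-boxing comes out ahead exactly when p1*(M-K) > p2*K, so the comparison indeed turns only on the relative weight of cases 1 and 2, with case 3 dropping out.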
I mean, I am describing a very specific forecasting technique that you can use to make forecasts right now. Perhaps a more precise version is: you observe children in one of two different preschools, and note which school they are in. You observe that almost 100% of the children in one preschool end up richer than the children in the other preschool. You are then able to forecast that future children observed in preschool A will grow up to be rich, and future children observed in preschool B will grow up to be poor. You then have a child. Should you bring them to preschool A? (Here I don't mean have them attend the school. They can simply go to the building at whatever time of day the study was conducted, then leave. That is sufficient to make highly accurate predictions, after all!)
I don't really know what you mean by "the set A of all factors involved".
↑ comment by [deleted] · 2013-12-10T12:55:54.123Z · LW(p) · GW(p)
If the scenario you describe is coherent, there has to be a causal mechanism, even if you don't know what it is. If Omega is a perfect predictor, he can't predict that carrot-choosers have heart attacks unless carrot-choosers have heart attacks.
Replies from: pallas↑ comment by pallas · 2013-12-10T14:30:31.631Z · LW(p) · GW(p)
I think I agree. But I would formulate it differently:
i) Omega's predictions are true.
ii) Omega predicts that carrot-choosers have heart attacks.
c) Therefore, carrot-choosers have heart attacks.
As soon as you accept i), c) follows if we add ii). I don't know how you define "causal mechanism". But I can imagine a possible world where no biological mechanism connects carrot-choosing with heart attacks but where "accidentally" all the carrot-choosers have heart attacks (let's imagine running worlds on a computer countless times; one day we might observe such a freak world). Then c) would be true without there being any sort of "causal mechanism" (as you might define it, I suppose). If you say that in such a freak world carrot-choosing and heart attacks are causally connected, then I would agree that c) can only be true if there is an underlying causal mechanism.
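A toy Python illustration of the freak-world possibility (all numbers arbitrary): carrot-choosing and heart attacks are generated independently, yet if small worlds are rerun often enough, some world will show the equivalence purely by accident.

    import random

    random.seed(0)

    def fraction_of_freak_worlds(world_size=8, trials=50_000):
        freak = 0
        for _ in range(trials):
            people = [(random.random() < 0.5, random.random() < 0.5)  # (carrot, attack)
                      for _ in range(world_size)]
            if all(carrot == attack for carrot, attack in people):
                freak += 1
        return freak / trials

    print(fraction_of_freak_worlds())  # roughly (1/2)**8, i.e. about 0.004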
Replies from: None↑ comment by [deleted] · 2013-12-10T14:51:53.823Z · LW(p) · GW(p)
But in that case, when Omega tells me that if I choose carrots I'll have a heart attack, then almost certainly I'm not in a freak world, and actually Omega is wrong.
Replies from: pallas↑ comment by pallas · 2013-12-10T15:14:49.682Z · LW(p) · GW(p)
Assuming i), I would rather say that when Omega tells me that if I choose carrots I'll have a heart attack, then almost certainly I'm not in a freak world, but in a "normal" world where there is a causal mechanism (as common sense would call it). But the point stands that a causal mechanism is not necessary for c) to be true and for the game to be coherent. (Again, this point only stands as long as one's definition of causal mechanism excludes the freak case.)
Replies from: None↑ comment by [deleted] · 2013-12-10T16:00:50.530Z · LW(p) · GW(p)
Seems like there are two possible cases. Either a) there is a causal mechanism, or b) none of the reasoning you might sensibly make actually works.
Since the reasoning only works in the causal-mechanism case, the existence of the freak-world case doesn't actually make any difference, so we're back to the case where we have a causal mechanism and where Richard_Kennaway has explained everything far better than I have.
↑ comment by V_V · 2013-12-10T15:21:22.504Z · LW(p) · GW(p)
It is important to note that this Omega scenario presumes different things than the typical one used in Newcomb's Problem. In the Newcomb case, we assume that Omega is able to predict one's thought processes and decisions. That's a very strong and unusual situation, but I can imagine it. However, in the A,B-Game we assume that a specific gene makes people presented with two options choose the worse one -- please note that I have not mentioned Omega in this sentence yet! So the claim is not that Omega is able to predict something, but that the gene can determine something, even in the absence of Omega. It's no longer about Omega's superior human-predicting powers; Omega is there merely to explain the powers of the gene.
I agree with your objection, but I think there is a way to fix this problem into a properly Newcomb-like form:
Genes GA and GB control your lifespan, but they don't directly affect your decision-making processes, and cause no other observable effect.
Omega predicts in advance, maybe even before you are born, whether you will press the A button or the B button, and modifies your DNA to include gene GA if he predicts that you will press A, or GB if he predicts that you will press B.
This creates a positive correlation between GA and A, and between GB and B, without any direct causal relation.
In this scenario, evidential decision theory generally ( * ) chooses A.
The same fix can be applied to the Solomon's problem, Newcomb's soda, and similar problems. In all these cases, EDT chooses the "one-box" option, which is the correct answer by the same reasoning which can be used to show that one-boxing is the correct choice ( * ) for the standard Newcomb's problem.
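For concreteness, a minimal Python sketch of the evidential calculation in this fixed game. Omega's accuracy and the lifespan utilities are made-up stand-ins, and I am assuming (as the naming suggests) that GA is the gene for the longer, more desirable lifespan:

    accuracy = 0.99           # assumed P(Omega predicted my button correctly)
    u_long, u_short = 100, 1  # assumed utilities of the GA and GB lifespans

    def edt_value(button):
        # Conditional on my pressing `button`, Omega most likely predicted it,
        # and therefore most likely gave me the matching gene.
        p_GA = accuracy if button == "A" else 1 - accuracy
        return p_GA * u_long + (1 - p_GA) * u_short

    print(edt_value("A"), edt_value("B"))  # EDT presses A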
I think that allowing the hidden variable to directly affect the agent's decision making process, as the original version of the problems above do, makes them ill-specified:
If the agent uses EDT, then the only way a hidden variable can affect their decision making process is by affecting their preferences, but then the "tickle defence" becomes valid: since the agent can observe their preferences, this screens off the hidden variable from any evidence gathered by performing an action, therefore EDT chooses the "two-box" option.
If the hidden variable, on the other hand, affects whether the agent uses EDT or some corrupted version of it, then the question "what should EDT do?" becomes potentially ill-posed: EDT presumes unbounded rationality; does that mean that EDT knows that it is EDT, or is it allowed to have uncertainty about it? I don't know the answer, but I smell a self-referential paradox lurking there.
( * One-boxing is trivially the optimal solution to Newcomb's problem if Omega has perfect prediction accuracy. However, if Omega's probability of error is epsilon > 0, even for a very small epsilon, determining the optimal solution isn't so trivial: it depends on the stochastic independence assumptions you make. )
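To make the footnote concrete, here is the comparison in Python under one particular (and disputable) independence assumption, namely that Omega errs with the same probability eps whichever way you decide, and that this error is evidentially relevant to the box contents:

    eps = 0.001
    M, K = 1_000_000, 1_000
    ev_one_box = (1 - eps) * M   # box B is full unless Omega erred
    ev_two_box = K + eps * M     # box B is full only if Omega erred
    print(ev_one_box, ev_two_box)
    # One-boxing wins here unless eps approaches 1/2; under a different
    # independence assumption (e.g. the error being fixed independently of the
    # decision made now) the comparison changes, which is the footnote's point.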
↑ comment by Pentashagon · 2013-12-23T20:10:42.713Z · LW(p) · GW(p)
Another possibility is that Omega presents the choice to very few people with the G_B gene in the first place; only to the ones who for some reason happen to choose B as Omega predicts. Likewise, Omega can't present the choice to every G_A carrier because some may select B.
comment by Jiro · 2013-12-10T22:06:43.940Z · LW(p) · GW(p)
Here's another variation: Newcomb's problem is as is usually presented, including Omega being able to predict what box you would take and putting money in the boxes accordingly--except in this case the boxes are transparent. Furthermore, you think Omega is a little snooty and would like to take him down a peg. You value this at more than $1000. What do you do?
Obviously, if you see $1000 and $1M, you pick both boxes because that is good both from the monetary and anti-Omega perspective. If you see $1000 and $0, the anti-Omega perspective rules and you pick one box.
Unfortunately, Omega always predicts correctly. So if you picked both boxes, the boxes contain $1000 and $0, while if you picked one box, they contain $1000 and $1M. But that contradicts what I said you just did....
(In fact the regular version of the problem is subject to this too. Execute the strategy "predict what Omega did to the boxes and make the opposite choice". Having transparent boxes just gives you 100% accuracy in your prediction.)
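A tiny Python sketch of the contradiction, with the player modelled as a toy function of the prediction it can read off the transparent boxes (the function is my own illustration):

    def player(predicted_choice):
        # Jiro's spiteful player: always do the opposite of what Omega predicted.
        return "two-box" if predicted_choice == "one-box" else "one-box"

    for prediction in ("one-box", "two-box"):
        actual = player(prediction)
        print(prediction, "->", actual,
              "consistent" if actual == prediction else "contradiction")
    # Neither prediction is a fixed point of this player, so no prediction
    # Omega writes down can come out true against it.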
Replies from: nshepperd, ArisKatsaris, SilentCal, CronoDAS↑ comment by nshepperd · 2013-12-13T09:45:54.973Z · LW(p) · GW(p)
This version of Transparent Newcomb is ill-defined, because Omega's decision process is not well-specified. If you do a different thing depending on what money is in the boxes, there's no unique correct prediction. Normally, Transparent Newcomb involves Omega predicting what you would do with the large box set to either empty or full (the "two arms" of Transparent Newcomb).
Also, I don't think "predict what Omega did to the boxes and make the opposite choice" is much of a problem either. You can't simultaneously be perfect predictors of each other, because that would let you predict yourself, etc etc
Replies from: Jiro↑ comment by Jiro · 2013-12-14T05:04:47.098Z · LW(p) · GW(p)
Omega's decision process is as well-specified as it is in the non-transparent version: Omega predicts your choice of boxes and uses the result of that prediction to decide what to put in the boxes.
You can't simultaneously be perfect predictors of each other, because that would let you predict yourself, etc etc
Yes, of course, but you can't be an imperfect predictor either, unless you're imperfect in a very specific way. Imagine that there's a 25% chance you correctly predict what Omega does--in that case, Omega still can't be a perfect predictor. The only real difference between the transparent and nontransparent versions (if you still like taking Omega down a peg) is that the transparent version guarantees that you can correctly "predict" what Omega did.
Replies from: ArisKatsaris, nshepperd↑ comment by ArisKatsaris · 2013-12-14T14:20:31.863Z · LW(p) · GW(p)
25% chance you correctly predict what Omega does
A flipped coin has a 50% chance to correctly predict what Omega does, if Omega is allowed only two courses of action.
↑ comment by nshepperd · 2013-12-14T05:43:57.210Z · LW(p) · GW(p)
Omega's decision process is as well-specified as it is in the non-transparent version: Omega predicts your choice of boxes and uses the result of that prediction to decide what to put in the boxes.
If your choice of boxes depends on what you observe, he needs to decide whether you see an empty box or a full box before he can predict what you'll do. The non-transparent version does not have this problem.
Replies from: EHeller↑ comment by EHeller · 2013-12-14T06:18:38.300Z · LW(p) · GW(p)
If your choice of boxes depends on what you observe, he needs to decide whether you see an empty box or a full box before he can predict what you'll do. The non-transparent version does not have this problem.
But we can still break it in similar ways. Pre-commit to flipping a coin (or some other random variable) to make your choice, and Omega can't be a perfect predictor, which breaks the specification of the problem.
Replies from: ArisKatsaris, ialdabaoth↑ comment by ArisKatsaris · 2013-12-14T14:26:29.552Z · LW(p) · GW(p)
These are all trivial objections. In the same manner you can "break the problem" by saying "well, what if the players chooses to burn both boxes?" "What if the player walks away?" "What if the player recites Vogon poetry and then shoots himself in the head without taking any of the boxes?".
Player walks in the room, recites Vogon poetry, and then shoots themselves in the head.
We then open Box A. Inside we see a note that says "I predict that the player will walk in the room, recite Vogon poetry and then shoot themselves in the head without taking any of the boxes".
These objections don't really illuminate anything about the problem. There's nothing inconsistent about Omega predicting you're going to do any of these things, and having different contents in the box prefilled according to said prediction. That the original phrasing of the problem doesn't list all of the various possibilities is really again just a silly meaningless objection.
Replies from: EHeller↑ comment by EHeller · 2013-12-14T16:09:19.988Z · LW(p) · GW(p)
Your objections are of a different character. Any of these
In the same manner you can "break the problem" by saying "well, what if the players chooses to burn both boxes?" "What if the player walks away?" "What if the player recites Vogon poetry and then shoots himself in the head without taking any of the boxes?"
involve not picking boxes. The issue with the coin flip is to point out that there are algorithms for box picking that are unpredictable. There are methods of picking that make it impossible for Omega to have perfect accuracy. Whether or not Newcomb is coherent depends on your model of how people make choices, and how noisy that process is.
↑ comment by ialdabaoth · 2013-12-14T16:11:02.178Z · LW(p) · GW(p)
But we can still break it in similar ways. Pre-commit to flipping a coin (or some other random variable) to make your choice, and Omega can't be a perfect predictor, which breaks the specification of the problem.
The premise of the thought experiment is that Omega has come to you and said, "I have two boxes here, and know whether you are going to open one box or two boxes, and thus have filled the boxes accordingly".
If Omega knows enough to predict whether you'll one-box or two-box, then Omega knows enough to predict whether you're going to flip a coin, do a dance, kill yourself, or otherwise break that premise. Since the frame story is that the premise holds, then clearly Omega has predicted that you will either one-box or two-box.
Therefore, this Omega doesn't play this game with people who do something silly instead of one-boxing or two-boxing. Maybe it just ignores those people. Maybe it plays another game. But the point is, if we have the narrative power to stipulate an Omega that plays the "one box or two" game accurately, then we have the narrative power to stipulate an Omega that doesn't bother playing it with people who are going to break the premise of the thought experiment.
In programmer-speak, we would say that Omega's behavior is undefined in these circumstances, and it is legal for Omega to make demons fly out of your nose in response to such cleverness.
Replies from: EHeller↑ comment by EHeller · 2013-12-14T16:21:33.231Z · LW(p) · GW(p)
Therefore, this Omega doesn't play this game with people who do something silly instead of one-boxing or two-boxing
Flipping a coin IS one-boxing or two-boxing! It's just not doing it PREDICTABLY.
Replies from: ialdabaoth↑ comment by ialdabaoth · 2013-12-14T16:28:35.717Z · LW(p) · GW(p)
ಠ_ಠ
EDIT: Okay, I'll engage.
Either Omega has perfect predictive power over minds AND coins, or it doesn't.
If it has perfect predictive power over minds AND coins, then it knows which way the flip will go, and what you're really saying is "give me a 50/50 gamble with an expected payoff of $500,500", instead of $1,000,000 OR $1,000 - in which case you are not a rational actor and Newcomb's Omega has no reason to want to play the game with you.
If it only has predictive power over minds, then neither it nor you know which way the flip will go, and the premise is broken. Since you accepted the premise when you said "if Omega shows up, I would...", then you must not be the sort of person who would pre-commit to an unpredictable coinflip, and you're just trying to signal cleverness by breaking the thought experiment on a bogus technicality.
Please don't do that.
Replies from: EHeller↑ comment by EHeller · 2013-12-14T16:55:08.900Z · LW(p) · GW(p)
Since you accepted the premise when you said "if Omega shows up, I would...", then you must not be the sort of person who would pre-commit to an unpredictable coinflip, and you're just trying to signal cleverness by breaking the thought experiment on a bogus technicality.
It's not breaking the thought experiment on a "bogus technicality"; it's pointing out that the thought experiment is only coherent if we make some pretty significant assumptions about how people make decisions. The more noisy we believe human decision making is, the less perfect Omega can be.
The paradox still raises the same point for decision algorithms, but the coin flip underscores that the problem can be ill-defined for decision algorithms that incorporate noisy inputs.
↑ comment by ArisKatsaris · 2013-12-13T11:39:02.545Z · LW(p) · GW(p)
The more well-specified version of Transparent Newcomb says that Omega only puts $1M in the box if he predicts you will one-box regardless of what you see.
In that version, there's no paradox: anyone that goes in with the mentality you describe will end up seeing $1000 and $0. Their predictable decision of "change my choice based on what I see" is what will have caused this, and it fulfills Omega's prediction.
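A Python sketch of that rule, with two toy player policies; the policies and the payoff bookkeeping are my own illustration:

    def omega_fills_big_box(policy):
        # Fill the $1M box only if the player would one-box in BOTH observable cases.
        return (policy(big_box_full=True) == "one-box"
                and policy(big_box_full=False) == "one-box")

    def contrarian(big_box_full):   # "change my choice based on what I see"
        return "two-box" if big_box_full else "one-box"

    def steadfast(big_box_full):    # one-boxes regardless of what it sees
        return "one-box"

    for name, policy in (("contrarian", contrarian), ("steadfast", steadfast)):
        full = omega_fills_big_box(policy)
        choice = policy(big_box_full=full)
        payoff = (1_000_000 if full else 0) + (1_000 if choice == "two-box" else 0)
        print(name, "box filled:", full, "choice:", choice, "payoff:", payoff)

The contrarian ends up facing an empty big box, one-boxes, and walks away with nothing, which is exactly the outcome described above; the steadfast one-boxer gets the million.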
Replies from: Jiro↑ comment by Jiro · 2013-12-14T04:57:56.982Z · LW(p) · GW(p)
That's not transparent Newcomb, that's transparent Newcomb modified to take out the point I was trying to use it to illustrate.
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2013-12-14T14:18:50.720Z · LW(p) · GW(p)
I'm not sure there remains a point to illustrate: if Omega doesn't predict a player who alters their choice based on what they see, then it's not a very predictive Omega at all.
It's likewise not a very predictive Omega if it doesn't predict the possibility of a player flipping a quantum coin to determine the number of boxes to take. That problem can also arise for the non-transparent version. (The variation generally used then is again that if the player chooses to use quantum randomness, Omega leaves the opaque box empty. And possibly also kills a puppy :-)
Replies from: Jiro↑ comment by Jiro · 2013-12-15T07:03:39.545Z · LW(p) · GW(p)
Although some people are mentioning flipping a coin or its equivalent, I didn't. It's too easy to say that we are only postulating that Omega can predict your algorithm and that of course he couldn't predict an external source of randomness.
The point of the transparent version is to illustrate that even without an external source of information, you can run into a paradox--Omega is trying to predict you, but you may be trying to predict Omega as well, in which case predicting what you do may be undecidable for Omega--he can't even in principle predict what you do, no matter how good he is. Making the boxes transparent is just a way to bypass the inevitable objection of "how can you, a mere human, hope to predict Omega?" by creating a situation where predicting Omega is 100% guaranteed.
↑ comment by SilentCal · 2013-12-11T00:36:33.230Z · LW(p) · GW(p)
Thank you for simply illustrating how easily the assumption of accurate predictions can contradict the assumption that we can choose our decision algorithm.
I would steelman the OP by saying that you should precommit to the above strategy if for some reason you want to avoid playing this version of Newcomb's problem, since this attitude guarantees that you won't.
↑ comment by CronoDAS · 2013-12-13T08:16:57.305Z · LW(p) · GW(p)
I would like to draw further attention to this. Assuming Omega to be a perfect predictor opens the door to all kinds of logical contradictions along the lines of "I'm going to do the opposite of the prediction, regardless of what it happens to be."
comment by [deleted] · 2013-12-10T17:09:18.140Z · LW(p) · GW(p)
Can you edit the post to include a "Continue reading..." cut rather than having the whole thing take up a dozen screenfuls of main page?
comment by Richard_Kennaway · 2013-12-10T12:01:45.138Z · LW(p) · GW(p)
TL;DR, but is this any more than yet another attempt to do causal reasoning while calling it evidential reasoning?
Replies from: pallas↑ comment by pallas · 2013-12-10T12:11:21.386Z · LW(p) · GW(p)
I think it is more an attempt to show that a proper use of updating results in evidential reasoners giving the right answers in Newcomblike problems. Furthermore, it is an attempt to show that the medical version of Solomon's Problem and Newcomb's Soda aren't defined precisely enough, since it is not clear what the study-subjects were aware of. Another part tries to show that people get confused when thinking about Newcomb's Problem because they use a dated perception of time as well as a problematic notion of free will.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2013-12-10T16:37:57.081Z · LW(p) · GW(p)
(Slightly tangential): my offer to any user of EDT to correctly solve the HAART problem using EDT (here: http://lesswrong.com/lw/hwq/evidential_decision_theory_selection_bias_and/9c2t) remains open.
My claim is that there is no way to correctly recover the entirety of causal inference problems using EDT without being isomorphic to CDT.
So (if you believe my claim) -- why even work with a busted decision theory?
Replies from: pallas↑ comment by pallas · 2013-12-10T17:20:47.157Z · LW(p) · GW(p)
In the post you can read that I am not endorsing plain EDT, as it seems to lose in problems like Parfit's hitchhiker or counterfactual mugging. But in other games, for instance in Newcomblike problems, the fundamental trait of evidential reasoning seems to give the right answers (as long as one knows how to apply conditional independencies!). It sounds to me like a straw man that EDT should give foolish answers (passing on crucial issues like screening off, so that for instance we should waste money to make it more likely that we are rich if we don't know our bank balance) or otherwise be isomorphic with CDT. I think that a proper use of evidential reasoning leads to one-boxing in Newcomb's Problem, chewing gum in the medical version of Solomon's Problem (assuming that previous subjects knew nothing about any interdependence of chewing gum and throat abscesses; otherwise the answer might change) and picking chocolate ice cream in Newcomb's Soda (unless one specifies that we are the only study-subjects who know about the interdependence of sodas and ice creams). CDT, on the other hand, seems to carry a buggy fatalistic component: it doesn't suggest investing resources that make a more desirable past more likely. (I admit that this is counterintuitive, but give it a chance and read the section "Newcomb's Problem's Problem of Free Will", as it might be game-changing if it turns out to be true.)
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2013-12-10T17:24:35.874Z · LW(p) · GW(p)
I am just saying: fix CDT, not EDT. I claim EDT is irreparably broken on far less exotic problems than Parfit's hitchhiker. Problems like "should I give drugs to patients based on the results of this observational study?" The reason I think this is that I can construct arbitrarily complicated causal graphs where getting the right answer entails having a procedure that is "causal inference"-complete, and I don't think anyone who uses EDT is anywhere near there (and if they are... they are just reinventing CDT with a different language, which seems silly).
I am not strawmanning EDT, I am happy to be proven wrong by any EDT adherent and update accordingly (hence my challenge). For example, I spent some time with Paul Christiano et al back at the workshop trying to get a satisfactory answer out of EDT, and we didn't really succeed (although to be fair, that was a tangent to the main thrust of that workshop, so we didn't really spend too much time on this).
Replies from: pallas, pallas↑ comment by pallas · 2013-12-10T18:42:21.017Z · LW(p) · GW(p)
I claim EDT is irreparably broken on far less exotic problems than Parfit's hitchhiker. Problems like "should I give drugs to patients based on the results of this observational study?"
This seems to be a matter of screening off. If we don't prescribe the drug because of evidential reasoning, we don't thereby learn anything new about the health of the patient. I would only withhold the drug if a credible instance with forecasting power (for instance Omega) showed me that generally healthy patients (who show suspicious symptoms) go to doctors who endorse evidential reasoning, while unhealthy patients go to conventional causal doctors. This sounds counterintuitive, but structurally it is equal to Newcomb's Problem: the patient corresponds to the box; we know it already "has" a specific value, but we don't know that value yet. Choosing only box B (or not giving the drug) would be the option that is only compatible with the more desirable past, where Omega has put the million into the box (or where the patient has been healthy all along).
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2013-12-10T21:59:05.235Z · LW(p) · GW(p)
Look, this is all too theoretical for me. Can you please go and actually read my example and tell me what your decision rule is for giving the drug?
There is more to this than d-separation. D-separation is just a visual rule for the way in which conditional independence works in certain kinds of graphical models. There is not enough conceptual weight in the d-separation idea alone to handle decision theory.
Replies from: pallas↑ comment by pallas · 2013-12-12T11:58:22.449Z · LW(p) · GW(p)
Look, HIV patients who get HAART die more often (because people who get HAART are already very sick). We don't get to see the health status confounder because we don't get to observe everything we want. Given this, is HAART in fact killing people, or not?
It is not that clear to me what we know about HAART in this game. For instance, in the case where we know nothing about it and we only observe logical equivalences (in fact, rather, probabilistic tendencies) of the form "HAART" <--> "patient dies (within a specified time interval)" and "no HAART" <--> "patient survives", it wouldn't be irrational to reject the treatment.
Once we know more about HAART, for instance that the probabilistic tendencies were due to unknowingly comparing sick people to healthy people, we can figure out that P(patient survives | sick, HAART) > P(patient survives | sick, no HAART) and that P(patient survives | healthy, HAART) < P(patient survives | healthy, no HAART). Knowing that much, choosing not to give the drug would be a foolish thing to do.
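The reversal described here is Simpson's paradox. A minimal numeric sketch in Python; all counts are hypothetical, chosen only to reproduce the inequalities above:

    counts = {
        # (health, got_HAART): (survived, died)  -- made-up numbers
        ("sick",    True):  (40, 40),   # 50% survive with HAART
        ("sick",    False): (4, 16),    # 20% survive without
        ("healthy", True):  (18, 2),    # 90% survive with HAART
        ("healthy", False): (95, 5),    # 95% survive without
    }

    def p_survive(treated, health=None):
        rows = [v for k, v in counts.items()
                if k[1] == treated and (health is None or k[0] == health)]
        survived = sum(s for s, d in rows)
        total = sum(s + d for s, d in rows)
        return survived / total

    print(p_survive(True), p_survive(False))                        # aggregate: treated look worse
    print(p_survive(True, "sick"), p_survive(False, "sick"))        # but HAART helps the sick
    print(p_survive(True, "healthy"), p_survive(False, "healthy"))  # and slightly harms the healthy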
If we came to know that a particular way of reasoning R which leads to not prescribing the drug (even after the update above) is very strongly correlated with having patients that are completely healthy but show false-positive clinical test results, then not prescribing the drug would be the better thing to do. This, of course, would require that this new piece of information brings about true predictions about future cases (which makes the scenario quite unlikely, though considering the theoretical debate it might be relevant).
Generally, I think that drawing causal diagrams is a very useful heuristic in "everyday science", since replacing the term causality with all the conditionals involved might be confusing. Maybe this is a reason why some people tend to think that evidential reasoning is defined to consider only plain conditionals (in this example P(survival | HAART)) but not further background data. Because otherwise you could, in effortful ways, arrive at the same answer as causal reasoners do, and what would be the point of imitating CDT?
I think it is exactly the other way round. It's all about conditionals. It seems to me that a Bayesian writes down "causal connection" in his/her map after updating on sophisticated sets of correlations. It seems impossible to completely rule out confounding at any place. Since evidential reasoning would suggest not prescribing the drug in the false-positive scenario above, its output is not similar to the one conventional CDT produces. Differences between CDT and the non-naive evidential approach are described here as well: http://lesswrong.com/lw/j5j/chocolate_ice_cream_after_all/a6lh
It seems that CDT-supporters only do A if there is a causal mechanism connecting it with the desirable outcome B. An evidential reasoner would also do A if he knew that there was no causal mechanism connecting it to B, but a true (yet purely correlative) prediction stating the logical equivalences A <--> B and ~A <--> ~B.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2013-12-12T12:09:35.669Z · LW(p) · GW(p)
Ok. So what is your answer to this problem:
"A set of 100 HIV patients are randomized to receive HAART at time 0. Some time passes, and their vitals are measured at time 1. Based on this measurement some patients receive HAART at time 1 (some of these received HAART at time 0, and some did not). Some more time passes, and some patients die at time 2. Some of those that die at time 2 had HAART at both times, or at one time, or at no time. You have a set of records that show you, for each patient of 100, whether they got HAART at time 0 (call this variable A0), whether they got HAART at time 1 (call this variable A1), what their vitals were at time 1 (call this variable W), and whether they died or not at time 2 (call this variable Y). A new patient comes in, from the same population as the original 100. You want to determine how much HAART to give him. That is, should {A0,A1} be set to yes,yes; yes,no; no,yes; or no,no. Your utility function rewards you for keeping patients alive. What is your decision rule for prescribing HAART for this patient?"
From the point of view of EDT, the set of records containing values of A0,W,A1,Y for 100 patients is all you get to see. (Someone using CDT would get a bit more information than this, but this isn't relevant for EDT). I can tell you that based on the records you see, p(Y=death | A0=yes,A1=yes) is higher than p(Y=death | A0=no,A1=no). I am also happy to answer any additional questions you may have about p(A0,W,A1,Y). This is a concrete problem with a correct answer. What is it?
Replies from: nshepperd↑ comment by nshepperd · 2013-12-12T12:36:17.272Z · LW(p) · GW(p)
I don't understand why you persist in blindly converting historical records into subjective probabilities, as though there was no inference to be done. You can't just set p(Y=death | A0=yes,A1=yes) to the proportion of deaths in the data, because that throws away all the highly pertinent information you have about biology and the selection rule for "when was the treatment applied". (EDIT: ignoring the covariate W would cause Simpson's Paradox in this instance)
EDIT EDIT: Yes, P(Y = death in a randomly-selected line of the data | A0=yes,A1=yes in the same line of data) is equal to the proportion of deaths in the data, but that's not remotely the same thing as P(this patient dies | I set A0=yes,A1=yes for this patient).
↑ comment by IlyaShpitser · 2013-12-12T12:52:15.693Z · LW(p) · GW(p)
I was just pointing out that in the conditional distribution p(Y|A0,A1) derived from the empirical distribution some facts happen to hold that might be relevant. I never said what I am ignoring, I was merely posing a decision problem for EDT to solve.
The only information about biology you have is the 100 records for A0,W,A1,Y that I specified. You can't ask for more info, because there is no more info. You have to decide with what you have.
Replies from: nshepperd↑ comment by nshepperd · 2013-12-12T13:15:40.413Z · LW(p) · GW(p)
The information about biology I was thinking of is things like "vital signs tend to be correlated with internal health" and "people with bad internal health tend to die". Information it would be irresponsible to not use.
But anyway, the solution is to calculate P(this patient dies | I set A0=a0,A1=a1 for this patient, data) (I should have included the conditioning on data above, but I forgot) by whatever statistical methods are relevant, then to do whichever option of a0,a1 gives the higher number. Straightforward.
You can approximate P(this patient dies | I set A0=a0,A1=a1 for this patient, data) with P_empirical(Y=death | do(A0=a0,A1=a1)) from the data, on the assumption that our decision process is independent of W (which is reasonable, since we don't measure W). There are other ways to calculate P(this patient dies | I set A0=a0,A1=a1 for this patient, data), like Solomonoff induction, presumably, but who would bother with that?
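For what it's worth, here is one way the P_empirical(Y=death | do(A0=a0, A1=a1)) mentioned above could be estimated from the 100 records, via the g-formula sum over the vitals W: sum over w of P(Y=death | a0, w, a1) * P(w | a0). This assumes, as in the usual reading of this example, that treatment at each time depends only on the observed past; the record format below is my own stand-in, not data from the thread.

    # records: hypothetical list of (a0, w, a1, y) tuples, e.g.
    # (True, "low", True, False) meaning treated at time 0, low vitals,
    # treated at time 1, survived.

    def p_death_do(records, a0, a1):
        n_a0 = sum(1 for r in records if r[0] == a0)
        total = 0.0
        for w in {r[1] for r in records}:
            n_a0_w = sum(1 for r in records if r[0] == a0 and r[1] == w)
            stratum = [r for r in records if r[:3] == (a0, w, a1)]
            if n_a0 and n_a0_w and stratum:                        # skip empty strata
                p_y = sum(r[3] for r in stratum) / len(stratum)    # P(Y=death | a0, w, a1)
                p_w = n_a0_w / n_a0                                # P(W=w | a0)
                total += p_y * p_w
        return total

Note that this differs both from the naive conditional p(Y | a0, a1) and from simply conditioning on W throughout, which is what makes the example a useful test case.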
↑ comment by IlyaShpitser · 2013-12-12T13:39:49.078Z · LW(p) · GW(p)
P_empirical(Y=death | do(A0=a0,A1=a1))
I agree with you broadly, but this is not the EDT solution, is it? Show me a definition of EDT in any textbook (or Wikipedia, or anywhere) that talks about do(.).
Yes, P(Y = death in a randomly-selected line of the data | A0=yes,A1=yes in the same line of data) is equal to the proportion of deaths in the data, but that's not remotely the same thing as P(this patient dies | I set A0=yes,A1=yes for this patient).
Yes, of course not. That is the point of this example! I was pointing out that facts about p(Y | A0,A1) aren't what we want here. Figuring out the distribution that is relevant is not so easy, and cannot be done merely from knowing p(A0,W,A1,Y).
Replies from: nshepperd↑ comment by nshepperd · 2013-12-12T13:50:30.132Z · LW(p) · GW(p)
No, this is the EDT solution. EDT uses P(this patient dies | I set A0=a0,A1=a1 for this patient, data), while CDT uses P(this patient dies | do(I set A0=a0,A1=a1 for this patient), data).
EDT doesn't "talk about do" because P(this patient dies | I set A0=a0,A1=a1 for this patient, data) doesn't involve do. It just happens that you can usually approximate P(this patient dies | I set A0=a0,A1=a1 for this patient, data) by using do (because the conditions for your personal actions are independent of whatever the conditions for the treatment in the data were).
Let me be clear: the use of do I describe here is not part of the definition of EDT. It is simply an epistemic "trick" for calculating P(this patient dies | I set A0=a0,A1=a1 for this patient, data), and would be correct even if you just wanted to know the probability, without intending to apply any particular decision theory or take any action at all.
Also, CDT can seem a bit magical, because when you use P(this patient dies | do(I set A0=a0,A1=a1 for this patient), data), you can blindly set the causal graph for your personal decision to the empirical causal graph for your data set, because the do operator gets rid of all the (factually incorrect) correlations between your action and variables like W.
↑ comment by IlyaShpitser · 2013-12-12T13:54:22.216Z · LW(p) · GW(p)
[ I did not downvote, btw. ]
Criticisms section in the Wikipedia article on EDT:
David Lewis has characterized evidential decision theory as promoting "an irrational policy of managing the news".[2] James M. Joyce asserted, "Rational agents choose acts on the basis of their causal efficacy, not their auspiciousness; they act to bring about good results even when doing so might betoken bad news."[3]
Where in the Wikipedia EDT article is the reference to "I set"? Or in any textbook? Where are you getting your EDT procedure from? Can you show me a reference? EDT is about conditional expectations, not about "I set."
One last question: what is P(this patient dies | I set A0=a0,A1=a1 for this patient, data) as a function of P(Y,A0,W,A1)? If you say "whatever p_empirical(Y | do(A0,A1)) is", then you are a causal decision theorist, by definition.
Replies from: nshepperd↑ comment by nshepperd · 2013-12-12T14:30:20.103Z · LW(p) · GW(p)
I don't strongly recall when I last read a textbook on decision theory, but I remember that it described agents using probabilities about the choices available in their own personal situation, not distributions describing historical data.
Pragmatically, when you build a robot to carry out actions according to some decision theory, the process is centered around the robot knowing where it is in the world, and making decisions with the awareness that it is making the decisions, not someone else. The only actions you have to choose are "I do this" or "I do that".
I would submit that a CDT robot makes decisions on the basis of P(outcome | do(I do this or that), sensor data), while a hypothetical EDT robot would make decisions based on P(outcome | I do this or that, sensor data). How P(outcome | I do this or that, sensor data) is computed is a matter of personal epistemic taste, and nothing for a decision theory to have any say about.
(It might be argued that I am steel-manning the normal description of EDT, since most people talking about it seem to make the error of blindly using distributions describing historical data as P(outcome | I do this or that, sensor data), to the point where that got incorporated into the definition. In which case maybe I should be writing about my "new" alternative to CDT in philosophy journals.)
↑ comment by IlyaShpitser · 2013-12-12T17:06:20.511Z · LW(p) · GW(p)
I think you steel-manned EDT so well, that you transformed it into CDT, which is a fairly reasonable decision theory in a world without counterfactually linked decisions.
I mean, Pearl invented/popularized do(.) in the 1990s sometime. What do you suppose EDT did before do(.) was invented? Saying "ah, p(y | do(x)) is what we meant all along" after someone does the hard work of inventing the theory for p(y | do(x)) doesn't get you any points!
Replies from: nshepperd↑ comment by nshepperd · 2013-12-12T17:13:36.195Z · LW(p) · GW(p)
I disagree. The calculation of P(outcome | I do this or that, sensor data) does not require any use of do when there are no confounding covariates, and in the case of problems such as Newcomb's, you get a different answer to CDT's P(outcome | do(I do this or that), sensor data) — the CDT solution throws away the information about Omega's prediction.
CDT isn't a catch-all term for "any calculation that might sometimes involve use of do", it's a specific decision theory that requires you to use P(outcome | do(action), data) for each of the available actions, whether or not that throws away useful information about correlations between yourself and stuff in the past.
EDIT: Obviously, before do() was invented, if you were using EDT you would do what everyone else would do: throw up your hands and say "I can't calculate P(outcome | I do this or that, sensor data); I don't know how to deal with these covariates!". Unless there weren't any, in which case you just go ahead and estimate your P from the data. I've already explained that the use of do() is only an inference tool.
↑ comment by IlyaShpitser · 2013-12-12T20:25:57.391Z · LW(p) · GW(p)
I think you still don't get it. The word "confounder" is causal. In order to define what a "confounding covariate" means, versus a "non-confounding covariate", you need to already have a causal model. I have a paper in Annals on this topic with someone, actually, because it is not so simple.
So the very statement of "EDT is fine without confounders" doesn't even make sense within the EDT framework. EDT uses the framework of "probability theory." Only statements expressible within probability theory are allowed. Personally, I think it is in very poor taste to silently adopt all the nice machinery causal folks have developed, but not acknowledge that the ontological character of the resulting decision theory is completely different from the terrible state it was in before.
Incidentally, the reason CDT fails on Newcomb, etc. is the same -- it lacks a language powerful enough to talk about counterfactually linked decisions, similarly to how EDT lacks the language to talk about confounding. Note: this is an ontological issue, not an algorithmic issue. That is, it's not that EDT doesn't handle confounders properly; it's that it doesn't even have confounders in its universe of discourse. Similarly, CDT only has standard non-linked interventions, and so has no way to even talk about Newcomb's problem.
The right answer here is to extend the language of CDT (which is what TDT et al essentially does).
Replies from: nshepperd↑ comment by nshepperd · 2013-12-13T01:34:28.017Z · LW(p) · GW(p)
I'm aware that the "confounding covariates" is a causal notion. CDT does not have a monopoly on certain kinds of mathematics. That would be like saying "you're not allowed to use the Pythagorean theorem when you calculate your probabilities, this is EDT, not Pythagorean Decision Theory".
Do you disagree with my statement that EDT uses P(outcome | I do X, data) while CDT uses P(outcome | do(I do X), data)? If so, where?
So the very statement of "EDT is fine without confounders" doesn't even make sense within the EDT framework. EDT uses the framework of "probability theory."
Are you saying it's impossible to write a paper that uses causal analysis to answer the purely epistemic question of whether a certain drug has an effect on cancer, without invoking causal decision theory, even if you have no intention of making an "intervention", and don't write down a utility function at any point?
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2013-12-14T03:22:51.647Z · LW(p) · GW(p)
I am simultaneously having a conversation with someone who doesn't see why interventions cannot be modeled using conditional probabilities, and someone who doesn't see why evidential decision theory can't just use interventions for calculating what the right thing to do is.
Let it never be said that LW has a groupthink problem!
CDT does not have a monopoly on certain kinds of mathematics.
Yes, actually it does. If you use causal calculus, you are either using CDT or an extension of CDT. That's what CDT means.
P(outcome | I do X, data)
I don't know what the event "I do X" is for you. If it satisfies the standard axioms of do(x) (consistency, effectiveness, etc.) then you are just using a different syntax for causal decision theory. If it doesn't satisfy the standard axioms of do(x), it will give the wrong answers.
Are you saying it's impossible to write a paper that uses causal analysis to answer the purely epistemic question of whether a certain drug has an effect on cancer
Papers on effects of treatments in medicine almost universally either are written using Neyman's potential outcome framework (which is just another syntax for do(.)), or don't bother with special causal syntax because they did an RCT directly (in which case a standard statistical model has a causal interpretation).
Replies from: Sniffnoy, nshepperd↑ comment by nshepperd · 2013-12-14T04:11:53.572Z · LW(p) · GW(p)
I don't know what the event 'I do X" is for you.
"I do X" literally means the event where the agent (the one deciding upon a decision) takes the action X. I say "I do X" to distinguish this from "some agent in a data set did X", because even without talking about causality, these are obviously different things.
The way you are talking about axioms and treating X as a fundamental entity suggests our disagreement is about the domain on which probability is being applied here. You seem to be conceiving of everything as referring to the empirical causal graph inferred from the data, in which case "X" can be considered to be synonymous with "an agent in the dataset did X".
"Reflective" decision theories like TDT, and my favoured interpretation of EDT require you to be able to talk about the agent itself, and infer a causal graph (although EDT, being "evidential", doesn't really need a causal graph, only a probability distribution) describing the causes and consequences of the agent taking their action. The inferred causal graph need not have any straightforward connection to the empirical distribution of the dataset. Hence my talk of P
as opposed to P_empirical
.
So, to summarize ,"I do X" is not a operator, causal or otherwise, applied to the event X in the empirical causal graph. It is an event in an entirely separate causal graph describing the agent. Does that make sense?
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2013-12-14T09:31:21.588Z · LW(p) · GW(p)
Fine, but then:
(a) You are actually using causal graphs. Show me a single accepted definition of evidential decision theory that allows you to do that (or more precisely that defines, as a part of its decision rule definition, what a causal graph is).
(b) You have to somehow be able to make decisions in the real world. What sort of data do you need to be able to apply your decision rule, and what is the algorithm that gives you your rule given this data?
Replies from: nshepperd↑ comment by nshepperd · 2013-12-14T13:42:10.767Z · LW(p) · GW(p)
(a) Well, you don't really need a causal graph; a probability distribution for the agent's situation will do, although it might be convenient to represent it as a causal graph. Where I have described the use of causal graphs above, they are merely a component of the reasoning used to infer your probability distribution within probability theory.
(b) That is, a set of hypotheses you might consider would include G = "the phenomenon I am looking at behaves in a manner described by graph G". Then you calculate the posterior probability P(G | data) × the joint distribution over the variables of the agent's situation given G, and integrate over G to get the posterior distribution for the agent's situation. Given that, you decide what to do based on expected utility with P(outcome | action, data). Obviously, the above calculation is highly nontrivial. In principle you could just use some universal prior (i.e. Solomonoff induction) to calculate the posterior distribution for the agent instead, but that's even less practical.
In practice you can often approximate this whole process fairly well by assuming the only difference between our situation and the data to be that our decision is uncorrelated with whatever decision procedure was used in the data, and treating it as an "intervention" (which I think might correspond to just using the most likely G, and ignoring all other hypotheses).
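A minimal Python sketch of the recipe in (b), with two made-up graph hypotheses standing in for the integral over G; nothing here is from the thread:

    graph_hypotheses = {
        # name: (P(G | data), {action: P(outcome is good | action, G)})
        "G_confounded": (0.7, {"treat": 0.40, "skip": 0.50}),
        "G_direct":     (0.3, {"treat": 0.80, "skip": 0.30}),
    }

    def p_good_outcome(action):
        # Average the per-graph predictive probabilities, weighted by P(G | data).
        return sum(p_g * p_out[action] for p_g, p_out in graph_hypotheses.values())

    best_action = max(("treat", "skip"), key=p_good_outcome)
    print({a: p_good_outcome(a) for a in ("treat", "skip")}, best_action)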
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2013-12-14T13:53:32.551Z · LW(p) · GW(p)
(a) Well, you don't really need a causal graph; a probability distribution for the agent's situation will do, although it might be convenient to represent it as a causal graph. Where I have described the use of causal graphs above, they are merely a component of the reasoning used to infer your probability distribution within probability theory.
Well, you have two problems here. The first (and bigger) problem is that you are committing an ontological error (probability distributions are not about causality but about uncertainty; it doesn't matter if you are B or F about it). The second (smaller, but still significant) problem is that probability distributions by themselves do not contain the information that you want. In other words, you don't get identifiability of the causal effect in general if all you are given is a probability distribution. To use a metaphor Judea likes to use: if you have a complete surface description of how light reflection works on an item (say a cup), you can construct a computer graphics engine that can render the cup from any angle. But there is no information on how the cup is to be rendered under deformation (that is, if I smash the cup on the table, what will it look like?)
Observed joint probability distributions correspond to the surface information; interventional distributions correspond to the information about deformations. It might be informative to consider how your (Bayesian?) procedure would work in the cup example. The analogy is almost exact: the set of interventional densities is a much bigger set than the set of observed joint distributions.
I would be very interested in what you think the right decision rule is for my 5 node HAART example. In my example you don't have to average over possible graphs, because my hypothetical is that we know what the correct graph is (and what the correct corresponding distribution is).
Presumably your answer will take the form of either [decision rule given some joint probability distribution that does not mention any causal language] or "not enough information for an answer."
If your answer is the latter, your decision theory is not very good. If the former (and by some miracle the decision rule gives the right answer), I would be very interested in a (top level?) post that works out how you recover the correct properties of causal graphs from just probability distributions. If correct, you could easily publish this in any top statistics journal and revolutionize the field. My intuition is that 100 years of statistics is not in fact wrong, and as you start dealing with more and more complex problems (I can generate an inexhaustible list of these), a lot of "gotchas" that causal folks have already dealt with will come up. In order to deal with these "gotchas" you will have to keep modifying your proposal until you have effectively just reinvented intervention calculus.
Replies from: V_V, nshepperd↑ comment by V_V · 2013-12-15T16:05:55.375Z · LW(p) · GW(p)
I would be very interested in what you think the right decision rule is for my 5 node HAART example. In my example you don't have to average over possible graphs, because my hypothetical is that we know what the correct graph is (and what the correct corresponding distribution is).
Your graph describes the stochastic process that generated the data. The agent needs a different one to model the situation it is facing. If it uses the right graph (or, more generally, the right joint probability distribution, which doesn't have to be factorizable), then it will get the right answer.
How to go from a set of data and a model of the data-generating process to a model of the agent's situation is, of course, a nontrivial problem, but it is not part of the agent's decision problem.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2013-12-17T18:29:13.270Z · LW(p) · GW(p)
OK, these problems I am posing are not abstract; they are concrete problems in medical decision making. In light of http://lesswrong.com/lw/jco/examples_in_mathematics/, I am going to pose four of them right here, then tell you what the right answer is and what assumptions I used to get this answer. Whatever decision theory you are using needs to be able to correctly represent and solve these problems, using at most the information that my solutions use, or it is not a very good decision theory (in the sense that there exist known alternatives that do solve these problems correctly). In all problems our utility penalizes patient deaths, or patients getting a disease.
In particular, if you are a user of EDT, you need to give me a concrete algorithm that correctly solves all these problems, without using any causal vocabulary. You can't just beg off on how it's a "non trivial problem." These are decision problems, and there exist solutions for these problems now, using CDT! Can EDT solve them or not? I have yet to see anyone try to seriously engage these (with the notable exception of Paul, who to his credit did try to give a Bayesian/non-causal account of problem 3, but ran out of time).
Note: I am assuming the correct graph and lots of samples, so the effect of a prior is not significant (and I thus talk about empirical frequencies, e.g. p(c)). If we wanted to, we could do a Bayesian problem over possible causal graphs, with a prior, and/or a Bayesian problem for estimation, where we could talk about, for example, the posterior distribution of case histories C. I skipped all that to simplify the examples.
Problem 1:
We perform a randomized controlled trial for a drug (half the patients get the drug, half the patients do not). Some of the patients die (in both groups). Let A be a random variable representing whether the patient in our RCT dataset got the drug, and Y be a random variable representing whether the patient in our RCT dataset died. A new patient comes in who is from the same cohort as those in our RCT. Should we give them the drug?
Solution: Give the drug if and only if E[Y = yes | A = yes] < E[Y = yes | A = no].
Intuition for why this is correct: since we randomized the drug, there are no possible confounders between drug use and death. So any dependence between the drug and death is causal. So we can just look at conditional correlations.
Assumptions used: we need the empirical p(A,Y) from our RCT, and the assumption that the correct causal graph is A -> Y. No other assumptions needed.
Ideas here: you should be able to transfer information you learn from observed members of a group to other members of the same group. Otherwise, what is stats even doing?
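A minimal numerical sketch of this rule in Python, assuming a small hypothetical dataset `rct` with binary columns `A` (drug given) and `Y` (death); the data and column names are made up for illustration:

```python
import pandas as pd

# Hypothetical RCT data: A = drug given, Y = death.
rct = pd.DataFrame({
    "A": [1, 1, 1, 1, 0, 0, 0, 0],
    "Y": [0, 0, 1, 0, 1, 1, 0, 1],
})

p_death_drug = rct.loc[rct["A"] == 1, "Y"].mean()     # estimates E[Y = yes | A = yes]
p_death_no_drug = rct.loc[rct["A"] == 0, "Y"].mean()  # estimates E[Y = yes | A = no]

give_drug = p_death_drug < p_death_no_drug  # the decision rule from the solution
print(give_drug)
```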
Problem 2:
We perform an observational study, where doctors assign (or not) a drug based on observed patient vitals recorded in their case history file. Some of the patients die. Let A be a random variable representing whether the patient in our dataset from our study got the drug, Y be a random variable representing whether the patient in our study died, and C be the random variable representing the patient vitals used by the doctors to decide whether to give the drug or not. A new patient comes in who is from the same cohort as those in our study. If we do not get any additional information on this patient, should we give them the drug?
Solution: Give the drug if and only if \sum_{c} E[Y = yes | A = yes, C = c] p(c) < \sum_{c} E[Y = yes | A = no, C = c] p(c).
Intuition for why this is correct: we have not randomized the drug, but we recorded all the info doctors used to decide on whether to give the drug. Since case history C represents all possible confounding between A and Y, conditional on knowing C, any dependence between A and Y is causal. In other words, E[Y | A, C] gives a causal dependence of A and Y, conditional on C. But since we are not allowed to measure anything about the incoming patient, we have to average over the possible case histories the patient might have. Since the patient is postulated to have come from the same dataset as those in our study, it is reasonable to average over the observed case histories in our study. This recovers the above formula.
Assumptions used: we need the empirical p(A,C,Y) from our study, and the assumption that the correct causal graph is C -> A -> Y, C -> Y. No other assumptions needed.
Ideas here: this is isomorphic to the smoking lesion problem. The idea is that you can't use observed correlations if there are confounders; you have to adjust for the confounders properly using the g-formula (the formula in the answer).
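A minimal sketch of the adjustment formula in Python, assuming a hypothetical dataset `study` with a discrete vitals column `C` and binary columns `A` and `Y`; data and names are made up for illustration:

```python
import pandas as pd

# Hypothetical observational data: C = vitals, A = drug given, Y = death.
study = pd.DataFrame({
    "C": [0, 0, 0, 0, 1, 1, 1, 1],
    "A": [1, 1, 0, 0, 1, 1, 0, 0],
    "Y": [0, 0, 0, 1, 1, 0, 1, 1],
})

def adjusted_death_rate(a):
    # Estimate sum_c E[Y = yes | A = a, C = c] p(c) for the graph C -> A -> Y, C -> Y.
    p_c = study["C"].value_counts(normalize=True)
    total = 0.0
    for c, weight in p_c.items():
        cell = study[(study["A"] == a) & (study["C"] == c)]
        if len(cell) > 0:  # skip empty cells in this toy example
            total += weight * cell["Y"].mean()
    return total

give_drug = adjusted_death_rate(1) < adjusted_death_rate(0)
print(give_drug)
```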
Problem 3:
We perform a partially randomized and partially observational longitudinal study, where patients are randomly assigned (or not) a drug at time 0, then their vitals at time 1 are recorded in a file, and based on those vitals and the treatment assignment history at time 0, doctors may (or may not) decide to give them more of the drug. Afterwards, at time 2, some patients die (or not). Let A0 be a random variable representing whether the patient in our dataset from our study got the drug at time 0, A1 be a random variable representing whether the patient in our dataset from our study got the drug at time 1, Y be a random variable representing whether the patient in our study died, and C be the random variable representing the case history used by the doctors to decide whether to give the drug or not at time 1. A new patient comes in who is from the same cohort as those in our study. If we do not get any additional information on this patient, should we give them the drug, and if so at what time points?
Solution: Use the drug assignment policy (a0, a1) that minimizes \sum_{c} E[Y = yes | A1 = a1, C = c, A0 = a0] p(c | A0 = a0).
Intuition for why this is correct: we have randomized A0 but have not randomized A1, and we are interested in the joint effect of both A0 and A1 on Y. We know C is a confounder for A1, so we have to adjust for it somehow as in Problem 2; otherwise an observed dependence of A1 and Y will contain a non-causal component through C. However, C is not a confounder for the relationship of A0 and Y. Conditional on A0 and C, the relationship between A1 and Y is entirely causal, so E[Y | A1, C, A0] is a causal quantity. However, for the incoming patient we are not allowed to measure C, so we have to average over C as before in Problem 2. But in our case C is an effect of A0, which means we can't just average the base rates for case histories; we have to take into account what happened at time 0, in other words the causal effect of A0 on C. Because in our graph there are no confounders between A0 and C, that causal relationship can be represented by p(C | A0) (no confounders means correlation equals causation). Since A0 also has no confounders for Y, E[Y | A1, C, A0], weighted by p(C | A0), gives us the right causal relationship between {A0, A1} and Y.
Assumptions used: we need the empirical p(A0,C,A1,Y) from our study, and the assumption that the correct causal graph is A0 -> C -> A1 -> Y, A0 -> A1, A0 -> Y, and we possibly allow that there is an unrestricted hidden variable U that is a parent of both C and Y. No other assumptions needed.
Ideas here: simply knowing that you have confounders is not enough, you have to pay attention to the precise causal relationships to figure out what the right thing to do is. In this case, C is a 'time-varying confounder,' and requires a more complicated adjustment that takes into account that the confounder is also an effect of an earlier treatment.
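A minimal sketch of the resulting policy search in Python, assuming a hypothetical dataset `study` with binary columns `A0`, `C`, `A1`, `Y`; data and names are made up for illustration:

```python
import itertools
import pandas as pd

# Hypothetical longitudinal data: A0 = drug at time 0, C = vitals at time 1,
# A1 = drug at time 1, Y = death at time 2.
study = pd.DataFrame({
    "A0": [1, 1, 1, 1, 0, 0, 0, 0],
    "C":  [1, 1, 0, 0, 1, 1, 0, 0],
    "A1": [1, 0, 1, 0, 1, 0, 1, 0],
    "Y":  [0, 1, 0, 1, 1, 1, 0, 1],
})

def expected_death(a0, a1):
    # Estimate sum_c E[Y = yes | A1 = a1, C = c, A0 = a0] p(c | A0 = a0).
    given_a0 = study[study["A0"] == a0]
    p_c_given_a0 = given_a0["C"].value_counts(normalize=True)
    total = 0.0
    for c, weight in p_c_given_a0.items():
        cell = study[(study["A0"] == a0) & (study["C"] == c) & (study["A1"] == a1)]
        if len(cell) > 0:  # toy data; real use needs data in every (a0, c, a1) cell
            total += weight * cell["Y"].mean()
    return total

best_policy = min(itertools.product([0, 1], repeat=2), key=lambda a: expected_death(*a))
print(best_policy)  # the (a0, a1) assignment with the lowest adjusted death rate
```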
Problem 4:
We consider a (hypothetical) observational study of coprophagic treatment of stomach cancer. It is known (for the purposes of this hypothetical example) that coprophagia's protective effect against cancer is due to the presence of certain types of intestinal flora in feces. At the same time, people who engage in coprophagic behavior naturally are not a random sample of the population, and therefore may be more likely than average to end up with stomach cancer. Let A be a random variable representing whether those in our study engaged in coprophagic behavior, let W be the random variable representing the presence of beneficial intestinal flora, let Y be the random variable representing the presence of stomach cancer, and let U be some unrestricted hidden variable which may influence both coprophagia and stomach cancer. A new patient at risk for stomach cancer comes in who is from the same cohort as those in our study. If we do not get any additional information on this patient, should we give them the coprophagic treatment as a preventative measure?
Solution: Yes, if and only if \sum_{w} p(W = w | A = yes) \sum_{a} E[Y = yes | W = w, A = a] p(A = a) < \sum_{w} p(W = w | A = no) \sum_{a} E[Y = yes | W = w, A = a] p(A = a).
Intuition for why this is correct: since W is independent of the confounders for A and Y, and A only affects Y through W, the effect of A on Y decomposes/factorizes into an effect of A on W and an effect of W on Y, averaged over the possible values W could take. The effect of A on W is not confounded by anything, and so is equal to p(W | A). The effect of W on Y is confounded by A, but given our assumptions, conditioning on A is sufficient to remove all confounding for that effect, which gives us \sum_{a} p(Y | W, A = a) p(A = a). This gives the above formula.
Assumptions used: we need the empirical p(A,W,Y) from our study, and the assumption that the correct causal graph is A -> W -> Y, with an unrestricted hidden variable U that is a parent of both A and Y. No other assumptions needed.
Ideas here: sometimes your independences let you factorize effects into other effects, similarly to how Bayesian networks factorize. This lets you solve problems that might seem unsolvable due to the presence of unobserved confounding.
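A minimal sketch of this factorized formula in Python, assuming a hypothetical dataset `study` with binary columns `A` (coprophagia), `W` (beneficial flora), and `Y` (cancer); data and names are made up for illustration:

```python
import pandas as pd

# Hypothetical observational data: A = coprophagic behavior,
# W = beneficial intestinal flora, Y = stomach cancer.
study = pd.DataFrame({
    "A": [1, 1, 1, 1, 0, 0, 0, 0],
    "W": [1, 1, 1, 0, 1, 0, 0, 0],
    "Y": [0, 1, 0, 1, 0, 1, 1, 0],
})

p_a = study["A"].value_counts(normalize=True)

def cancer_risk(a_treat):
    # Estimate sum_w p(W = w | A = a_treat) sum_a E[Y = yes | W = w, A = a] p(A = a).
    p_w_given_a = study.loc[study["A"] == a_treat, "W"].value_counts(normalize=True)
    total = 0.0
    for w, w_weight in p_w_given_a.items():
        inner = 0.0
        for a, a_weight in p_a.items():
            cell = study[(study["W"] == w) & (study["A"] == a)]
            if len(cell) > 0:
                inner += a_weight * cell["Y"].mean()
        total += w_weight * inner
    return total

treat = cancer_risk(1) < cancer_risk(0)  # the decision rule from the solution
print(treat)
```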
↑ comment by nshepperd · 2013-12-15T15:41:19.552Z · LW(p) · GW(p)
The first (and bigger) problem is that you are committing an ontological error (probability distributions are not about causality but about uncertainty; it doesn't matter whether you are Bayesian or frequentist about it).
I don't know what you mean by this. Probability distributions can be about whatever you want — it makes perfect sense to speak of "the probability that the cause of X is Y, given some evidence".
↑ comment by pallas · 2013-12-10T18:12:50.482Z · LW(p) · GW(p)
My comment above strongly called into question whether CDT gives the right answers. Therefore I wouldn't try to reinvent CDT with a different language. For instance, in the post I suggest that we should care about "all" the outcomes, not only the one happening in the future. I first read about this idea in Paul Almond's paper on decision theory. An excerpt that might be of interest:
Suppose the universe is deterministic, so that the state of the universe at any time completely determines its state at some later time. Suppose at the present time, just before time t_now, you have a choice to make. There is a cup of coffee on a table in front of you and you have to decide whether to drink it. Before you decide, let us consider the state of the universe at some time, t_sooner, which is earlier than the present. The state of the universe at t_sooner should have been one from which your later decision, whatever it is going to be, can be determined: If you eventually end up drinking the coffee at t_now, this should be implied by the universe at t_sooner. Assume we do not know whether you are going to drink the coffee. We do not know whether the state of the universe at t_sooner was one that led to you drinking the coffee. Suppose that there were a number of conceivable states of the universe at t_sooner, each consistent with what you know in the present, which implied futures in which you drink the coffee at t_now. Let us call these states D1,D2,D3,…Dn. Suppose also that there were a number of conceivable states of the universe at t_sooner, each consistent with what you know in the present, which implied futures in which you do not drink the coffee at t_now. Let us call these states N1,N2,N3,…Nn. Suppose that you have just drunk the coffee at t_now. You would now know that the state of the universe at t_sooner was one of the states D1,D2,D3,…Dn. Suppose now that you did not drink the coffee at t_now. You would now know that the state of the universe at t_sooner was one of the states N1,N2,N3,…Nn. Consider now the situation in the present, just before t_now, when you are faced with deciding whether to drink the coffee. If you choose to drink the coffee then at t_sooner the universe will have been in one of the states D1,D2,D3,…Dn, and if you choose not to drink the coffee then at t_sooner the universe will have been in one of the states N1,N2,N3,…Nn. From your perspective, your choice is determining the previous state of the universe, as if backward causality were operating. From your perspective, when you are faced with choosing whether or not to drink the coffee, you are able to choose whether you want to live in a universe which was in one of the states D1,D2,D3,…Dn or one of the states N1,N2,N3,…Nn in the past. Of course, there is no magical backward causality effect operating here: The reality is that it is your decision which is being determined by the earlier state of the universe. However, this does nothing to change how things appear from your perspective.
Why is it that Newcomb’s paradox worries people so much, while the same issue arising with everyday decisions does not seem to cause the same concern? The main reason is probably that the issue is less obvious outside the scope of contrived situations like that in Newcomb’s paradox. With the example I have been discussing here, you get to choose the state of the universe in the past, but only in very general terms: You know that you can choose to live in a universe that, in the past, was in one of the states D1,D2,D3,…Dn, but you are not confronted with specific details about one of these states, such as knowing that the universe had a specific state in which some money was placed in a certain box (which is how the backward causality seems to operate in Newcomb’s paradox). It may make it seem more like an abstract, philosophical issue than a real problem.
In reality, the lack of specific knowledge should not make us feel any better: In both situations you seem to be choosing the past as well as the future. You might say that you do not really get to choose the previous state of the universe, because it was in fact your decision that was determined by the previous state, but you could as well say the same about your decision to drink or not drink the coffee: You could say that whether you drink the coffee was determined by some earlier state of the universe, so you have only the appearance of a choice. When making choices we act as if we can decide, and this issue of the past being apparently dependent on our choices is no different from the normal consequences of our future being apparently dependent on our choices, even though our choices are themselves dependent on other things: We can act as if we choose it.
Replies from: CronoDAS, pallas↑ comment by CronoDAS · 2013-12-13T07:40:28.769Z · LW(p) · GW(p)
This quote seems to be endorsing the Mind Projection Fallacy; learning about the past doesn't seem to me to be the same thing as determining it...
Replies from: pallas↑ comment by pallas · 2013-12-13T17:04:21.617Z · LW(p) · GW(p)
It goes the other way round. An excerpt from my post (section Newcomb's Problem's Problem of Free Will):
Perceiving time without an inherent “arrow” is not new to science and philosophy, but still, readers of this post will probably need a compelling reason why this view would be more goal-tracking. Considering Newcomb’s Problem, a reason can be given: Intuitively, the past seems much more “settled” to us than the future. But it seems to me that this notion is confounded, as we often know more about the past than we know about the future. This could tempt us to project this imbalance of knowledge onto the universe, such that we perceive the past as settled and unswayable in contrast to a shapeable future. However, such a conventional set of intuitions conflicts strongly with us picking only one box. These intuitions would tell us that we cannot affect the content of the box; it is already filled or empty, since it has been prepared in the now inaccessible past.
↑ comment by pallas · 2013-12-10T18:41:26.530Z · LW(p) · GW(p)
I claim EDT is irreparably broken on far less exotic problems than Parfit's hitchhiker. Problems like "should I give drugs to patients based on the results of this observational study?"
This seems to be a matter of screening off. Once we don't prescribe drugs because of evidential reasoning, we don't learn anything new about the health of the patient. I would only withhold the drug if a credible source with forecasting power (for instance, Omega) showed me that generally healthy patients (who show suspicious symptoms) go to doctors who endorse evidential reasoning, while unhealthy patients go to conventional causal doctors. This sounds counterintuitive, but structurally it is equivalent to Newcomb's Problem: the patient corresponds to the box; we know it already "has" a specific value, but we don't know that value yet. Choosing only box B (or not giving the drug) would be the option that is only compatible with the more desirable past, where Omega has put the million into the box (or where the patient has been healthy all along).
comment by [deleted] · 2013-12-10T01:39:39.136Z · LW(p) · GW(p)
Agents who precommit to chocolate ice cream are in no sense better off than otherwise, as precommitment has no effect on which soda is assigned.
Replies from: Jiro↑ comment by Jiro · 2013-12-10T22:16:35.471Z · LW(p) · GW(p)
If all people who drink a certain soda will pick the vanilla ice cream, then you can't precommit to chocolate ice cream before drinking the soda. It is a premise of the scenario that people who drink a certain soda won't pick the chocolate, and precommitting implies you will pick the chocolate, so precommitting is incompatible with the premise. You'd try to precommit, drink the wrong soda, and find that for some reason you just have to give up your precommitment.
This is just a special case of the fact that choosing an option is incompatible with your "choice" being determined by something else.
comment by CronoDAS · 2013-12-13T08:44:49.419Z · LW(p) · GW(p)
The A,B-Game seems incoherent, in that I'm not actually making a decision. I would be glad to learn that I had "chosen" A, but as per the statement of the problem, if I'm doomed by the B gene to choose B, then I never had the option to choose A in the first place. I also don't know how to do decision theory for an agent with the kind of corrupted thinking process described in the Newcomb's Soda problem. For example, I could model it as "I have a 10 percent chance of having perfect free will, in which case I should take the vanilla ice cream, and the other 90% of the time I'm magically compelled to obey the soda's influence, so my reasoned answer won't actually matter", but that doesn't seem quite right...
comment by CalmCanary · 2013-12-10T19:56:11.611Z · LW(p) · GW(p)
Presumably, if you use E to decide in Newcomb's Soda, the decisions of agents not using E are screened off, so you should only calculate the relevant probabilities using data from agents using E. If we assume E does in fact recommend eating the chocolate ice cream, then 50% of E agents will drink the chocolate soda, 50% will drink the vanilla soda (assuming reasonable experimental design), and 100% will eat the chocolate ice cream. Therefore, given that you use E, there is no correlation between your decision and receiving the $1,000,000, so you might as well eat the vanilla and get the $1,000. Therefore E does not actually recommend eating the chocolate ice cream.
Note that this reasoning does not generalize to Newcomb's Problem. If E agents take one box, Omega will predict that they will all take one box, so they all get the payoff and the correlation survives.
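Under that assumption, a quick expected-value check (using the payoffs discussed in the post, with the $1,000,000 going to those who drank the chocolate soda and $1,000 to vanilla-ice-cream eaters) makes the point numerically:

```python
# Expected values for an agent using E, under the assumption above that
# P(chocolate soda) = 0.5 independently of the agent's ice cream choice.
p_chocolate_soda = 0.5

ev_chocolate_ice_cream = p_chocolate_soda * 1_000_000           # 500,000
ev_vanilla_ice_cream = p_chocolate_soda * 1_000_000 + 1_000     # 501,000

print(ev_chocolate_ice_cream, ev_vanilla_ice_cream)
```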
Replies from: pallas↑ comment by pallas · 2013-12-10T20:42:55.341Z · LW(p) · GW(p)
Presumably, if you use E to decide in Newcomb's Soda, the decisions of agents not using E are screened off, so you should only calculate the relevant probabilities using data from agents using E.
Can you show where the screening off would apply (like A screens off B from C)?