Posts

Simulation argument meets decision theory 2014-09-24T10:47:39.562Z
Chocolate Ice Cream After All? 2013-12-09T21:09:10.841Z

Comments

Comment by pallas on Simulation argument meets decision theory · 2014-09-27T11:58:35.418Z · LW · GW

I agree. It seems to me that what is special about Newcomb's Problem is that actions "influence" states, and that this is why the dominance principle alone doesn't give the right answer. The same applies to this game. Your action (sim or not sim) determines the probability of which agent you have been all along and therefore "influences" the states of the game, i.e. whether you are X or X*. Many people dislike this use of the word "influence", but I think there are some good reasons in favour of a broader use of it (e.g. quantum entanglement).

Comment by pallas on Simulation argument meets decision theory · 2014-09-24T20:33:36.376Z · LW · GW

Thanks for mentioning this. I know this wasn't put very clearly.
Imagine a very selfish person X who cares only about himself. If I make a really good copy of X and place it 100 meters away from X, then this copy X* cares only about the set of spatiotemporal dots that we define as X*. The two agents, X and X*, are identical if we formalize their algorithms so as to incorporate indexical information. If we don't, a disparity remains, namely that X differs from X* in that, intrinsically, X cares only about the set of spatiotemporal dots constituting X; the same goes for X* accordingly. But this semantic issue doesn't seem to be relevant to the decision problem itself. The kind of similarity that is of interest here is the one that determines similar behavior in such games. (Probably you could set up games where the non-indexical formalizations of the agents X and X* differ in a relevant way; I merely claim that this game is not one of them.)

Comment by pallas on Chocolate Ice Cream After All? · 2013-12-13T17:04:21.617Z · LW · GW

It goes the other way round. Here is an excerpt from my post (section "Newcomb's Problem's Problem of Free Will"):

Perceiving time without an inherent "arrow" is not new to science and philosophy, but still, readers of this post will probably need a compelling reason why this view would be more goal-tracking. Considering Newcomb's Problem, a reason can be given: intuitively, the past seems much more "settled" to us than the future. But it seems to me that this notion is confounded, as we often know more about the past than we know about the future. This could tempt us to project this imbalance of knowledge onto the universe, such that we perceive the past as settled and unswayable in contrast to a shapeable future. However, such a conventional set of intuitions conflicts strongly with picking only one box. These intuitions would tell us that we cannot affect the content of the box; it is already filled or empty, since it was prepared in the now inaccessible past.

Comment by pallas on Chocolate Ice Cream After All? · 2013-12-12T11:58:22.449Z · LW · GW

Look, HIV patients who get HAART die more often (because people who get HAART are already very sick). We don't get to see the health status confounder because we don't get to observe everything we want. Given this, is HAART in fact killing people, or not?

It is not that clear to me what we know about HAART in this game. For instance, if we know nothing about it and only observe logical equivalences (in fact, probabilistic tendencies) of the form "HAART" <--> "patient dies (within a specified time interval)" and "no HAART" <--> "patient survives", it wouldn't be irrational to reject the treatment.

Once we know more about HAART (for instance, that the probabilistic tendencies were due to unknowingly comparing sick people to healthy people), we can figure out that P(patient survives | sick, HAART) > P(patient survives | sick, no HAART) and that P(patient survives | healthy, HAART) < P(patient survives | healthy, no HAART). Knowing that much, choosing not to give the drug would be a foolish thing to do.
If we came to know that a particular way of reasoning R that leads to not prescribing the drug (even after the update above) is very strongly correlated with having patients who are completely healthy but show false-positive clinical test results, then not prescribing the drug would be the better thing to do. This would, of course, require that this new piece of information yields true predictions about future cases (which makes the scenario quite unlikely, though for the theoretical debate it might still be relevant).
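
To make the two inequalities above concrete, here is a minimal sketch with invented numbers (purely illustrative, not from any real study) of how the aggregated conditional can reverse both stratified ones when treatment correlates with being sick:

```python
# Hypothetical counts, chosen only to illustrate the structure:
# sick patients mostly receive HAART, healthy patients mostly don't.
# Format: (survived, total) per (stratum, treatment).
data = {
    ("sick",    "HAART"):    (40, 100),   # P(survive | sick, HAART)    = 0.40
    ("sick",    "no HAART"): (5,  25),    # P(survive | sick, no HAART) = 0.20
    ("healthy", "HAART"):    (23, 25),    # P(survive | healthy, HAART)    = 0.92
    ("healthy", "no HAART"): (97, 100),   # P(survive | healthy, no HAART) = 0.97
}

def p_survive(treatment, stratum=None):
    rows = [v for k, v in data.items()
            if k[1] == treatment and (stratum is None or k[0] == stratum)]
    survived = sum(s for s, _ in rows)
    total = sum(t for _, t in rows)
    return survived / total

# Stratified: HAART helps the sick (0.40 > 0.20) ...
print(p_survive("HAART", "sick"), p_survive("no HAART", "sick"))
# ... yet the plain conditional makes HAART look harmful (0.504 < 0.816),
# because receiving HAART is evidence of being sick.
print(p_survive("HAART"), p_survive("no HAART"))
```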

Generally, I think that drawing causal diagrams is a very useful heuristic in "everyday science", since replacing the term causality with all the conditionals involved might be confusing. Maybe this is a reason why some people tend to think that evidential reasoning is defined to consider only plain conditionals (in this example P(survival | HAART)) but no further background data. Because otherwise you could, in effortful ways, arrive at the same answer causal reasoners do; but then what would be the point of imitating CDT?

I think it is exactly the other way round. It's all about conditionals. It seems to me that a Bayesian writes down "causal connection" in his/her map after updating on sophisticated sets of correlations. It seems impossible to completely rule out confounding at any point. Since evidential reasoning would suggest not prescribing the drug in the false-positive scenario above, its output is not similar to the one conventional CDT produces. Differences between CDT and the non-naive evidential approach are also described here: http://lesswrong.com/lw/j5j/chocolate_ice_cream_after_all/a6lh

It seems that CDT supporters only do A if there is a causal mechanism connecting it with the desirable outcome B. An evidential reasoner would also do A if he knew that there was no causal mechanism connecting it to B, but had a true (yet purely correlative) prediction stating the logical equivalences A <--> B and ~A <--> ~B.

Comment by pallas on Chocolate Ice Cream After All? · 2013-12-11T09:39:45.474Z · LW · GW

I agree that it is challenging to assign forecasting power to a study, as we're uncertain about lots of background conditions. There is forecasting power to the degree that the set A of all variables involved with previous subjects allows for predictions about the set A' of variables involved in our case. But when we deal with Omega, who is defined to make true predictions, then we have to take this forecasting power into account no matter what the underlying mechanism is. I mean, what if Omega in Newcomb's Problem were defined to make true predictions and you didn't know anything about the underlying mechanism? Wouldn't you one-box after all? Let's call Omega's prediction P and the future event F. Once Omega's predictions are defined to be true, we can write down the following logical equivalences: P(1-boxing) <--> F(1-boxing) and P(2-boxing) <--> F(2-boxing). Given these conditions, it is impossible to 2-box when box B is filled with a million dollars (you could also formulate this in terms of probabilities, where such an impossible event would have probability 0). I admit that we have to be cautious when we deal with predictors that are not defined to make true predictions.
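
A minimal sketch of this point (my own illustration, assuming the standard $1,000,000 / $1,000 payoffs): treating Omega's accuracy as a parameter makes explicit that at accuracy 1, "two-boxing while box B is filled" has probability 0 and drops out of the expected-value calculation.

```python
# Expected payoffs in Newcomb's Problem as a function of predictor accuracy.
def expected_payoff(action, accuracy):
    if action == "one-box":
        # Box B holds $1,000,000 iff Omega predicted one-boxing.
        return accuracy * 1_000_000
    else:  # "two-box": $1,000 for sure, plus box B only if Omega mispredicted.
        return 1_000 + (1 - accuracy) * 1_000_000

for acc in (1.0, 0.99, 0.5):
    print(acc, expected_payoff("one-box", acc), expected_payoff("two-box", acc))
# At accuracy 1.0: one-boxing yields 1,000,000 vs. 1,000, since
# "two-box AND box B filled" is simply not among the possible outcomes.
```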

Suppose it is well-known that the wealthy in your country are more likely to adopt a certain distinctive manner of speaking due to the mysterious HavingRichParents gene. If you desire money, could you choose to have this gene by training yourself to speak in this way?

My answer depends on the specific set-up. What exactly do we mean by "it is well-known"? It doesn't seem to be a study describing the set A of all factors involved, which we could then use to derive the set A' that applies to our own case. Unless we define "it is well-known" as an instance that allows for predictions in the direction A --> A', I see little reason to assume forecasting power. Without forecasting power, screening off applies and it would be foolish to train the distinctive manner of speaking. If we specified the game in a way that there is forecasting power at work (or at least we had reason to believe so), then, depending on your definition of choice (I prefer one that is devoid of free will), you can or cannot choose the gene. These kinds of thoughts are listed here and in the section "Newcomb's Problem's Problem of Free Will" in the post.

Comment by pallas on Chocolate Ice Cream After All? · 2013-12-11T00:56:32.109Z · LW · GW

If lots of subjects were using CDT or EDT, they would all be choosing ice cream independently of their soda, and we wouldn't see that correlation (except maybe by coincidence). So it doesn't have to be stated in the problem that other subjects aren't using evidential reasoning--it can be seen plainly from the axioms! To assume that they are reasoning as you are is to assume a contradiction.

If lots of subjects were using CDT or EDT, they would be choosing ice cream independently of their soda iff the soda has no influence on whether they argue according to CDT or EDT. It is no logical contradiction to say that the sodas might affect which decision-theoretic intuitions a subject is going to have. As long as we don't specify what this subconscious desire for ice cream exactly means, it is conceivable that the sodas imperceptibly affect our decision algorithm. In such a case, most of the V-I people (the fraction originating from V-S) would be attracted to causal reasoning, whereas most of the Ch-I people (the fraction originating from Ch-S) would find the evidential approach compelling. One can now say that the sodas "obviously" do not affect one's decision theory, but this clearly had to be pointed out when introducing a "subconscious desire".
I agree that once it is specified that we are the only agents using decision theory, screening off applies. But the game is defined in a way that we are subjects of a study where all the subjects are rewarded with money:

(an excerpt of the definition in Yudkowsky (2010))

It so happens that all participants in the study who test the Chocolate Soda are rewarded with a million dollars after the study is over, while participants in the study who test the Vanilla Soda receive nothing. But subjects who actually eat vanilla ice cream receive an additional thousand dollars, while subjects who actually eat chocolate ice cream receive no additional payment.

After reading this, it is not a priori clear to me that I would be the only subject who knows about the money at stake. On the contrary, as one of many subjects I assume that I know as much about the setting as the other subjects do. Once other subjects know about the money, they probably also think about whether choosing Ch-I or V-I produces the better outcome. It seems to me that all the agents base their decision on some sort of intuition about which is the correct decision algorithm.

To sum up, I tend to assume that the other agents play a decision-theoretic game as well and that the soda might affect their decision-theoretic intuitions. Even if we assigned a low prior to the event that the sodas affect the subjects' decision algorithms, the derived reasoning would not be invalid; its power would merely shrink in proportion to the prior. Finally, it is definitely not a contradictory statement to say that the soda affects how the subjects decide and that the subjects use CDT or EDT.
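
For illustration, a toy simulation (entirely my own construction; the influence probability is a free parameter, not part of Yudkowsky's definition) of the scenario in which the soda nudges which decision procedure a subject finds compelling:

```python
import random

def run_subject(p_influence):
    """One study subject. With probability p_influence the soda nudges
    which decision procedure the subject ends up finding compelling."""
    soda = random.choice(["chocolate", "vanilla"])
    if random.random() < p_influence:
        # Hypothesized effect: chocolate soda inclines toward evidential
        # reasoning, vanilla soda toward causal reasoning.
        theory = "EDT" if soda == "chocolate" else "CDT"
    else:
        theory = random.choice(["EDT", "CDT"])
    ice = "chocolate" if theory == "EDT" else "vanilla"
    payoff = (1_000_000 if soda == "chocolate" else 0) \
           + (1_000 if ice == "vanilla" else 0)
    return theory, payoff

def mean_payoff(theory, p_influence, n=100_000):
    results = [p for t, p in (run_subject(p_influence) for _ in range(n))
               if t == theory]
    return sum(results) / len(results)

# With a strong soda -> theory influence, evidential reasoners end up richer;
# with no influence, the ice cream choice is screened off from the soda.
for p in (0.9, 0.0):
    print(p, mean_payoff("EDT", p), mean_payoff("CDT", p))
```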

Comment by pallas on Chocolate Ice Cream After All? · 2013-12-10T20:42:55.341Z · LW · GW

Presumably, if you use E to decide in Newcomb's soda, the decisions of agents not using E are screened off, so you should only calculate the relevant probabilities using data from agents using E.

Can you show where the screening off would apply (like A screens off B from C)?
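
To fix notation, this is the kind of structure I have in mind (a toy joint distribution of my own choosing): in a chain B -> A -> C, conditioning on A renders C independent of B.

```python
from itertools import product

# Toy Bayes net B -> A -> C: B influences C only through A.
p_b = {True: 0.3, False: 0.7}
p_a_given_b = {True: 0.9, False: 0.2}   # P(A=true | B)
p_c_given_a = {True: 0.8, False: 0.1}   # P(C=true | A)

def joint(b, a, c):
    pa = p_a_given_b[b] if a else 1 - p_a_given_b[b]
    pc = p_c_given_a[a] if c else 1 - p_c_given_a[a]
    return p_b[b] * pa * pc

def p_c_given(a, b=None):
    num = sum(joint(bb, a, True) for bb in (True, False)
              if b is None or bb == b)
    den = sum(joint(bb, a, c) for bb, c in product((True, False), repeat=2)
              if b is None or bb == b)
    return num / den

# Once we condition on A, learning B changes nothing:
print(p_c_given(a=True, b=True), p_c_given(a=True, b=False), p_c_given(a=True))
# all three equal 0.8; A screens off B from C
```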

Comment by pallas on Chocolate Ice Cream After All? · 2013-12-10T18:42:21.017Z · LW · GW

I claim EDT is irreparably broken on far less exotic problems than Parfit's hitchhiker. Problems like "should I give drugs to patients based on the results of this observational study?"

This seems to be a matter of screening off. Once we decide not to prescribe drugs because of evidential reasoning, we don't learn anything new about the health of the patient. I would only withhold the drug if a credible instance with forecasting power (for instance Omega) showed me that generally healthy patients (who show suspicious symptoms) go to doctors who endorse evidential reasoning while unhealthy patients go to conventional causal doctors. This sounds counterintuitive, but structurally it is equivalent to Newcomb's Problem: the patient corresponds to the box; we know it already "has" a specific value, but we don't know that value yet. Choosing only box B (or not giving the drug) is the option that is only compatible with the more desirable past, in which Omega has put the million into the box (or in which the patient has been healthy all along).

Comment by pallas on Chocolate Ice Cream After All? · 2013-12-10T18:12:50.482Z · LW · GW

My comment above strongly called into question whether CDT gives the right answers. Therefore I wouldn't try to reinvent CDT in a different language. For instance, in the post I suggest that we should care about "all" the outcomes, not only the ones happening in the future. I first read about this idea in Paul Almond's paper on decision theory. An excerpt that might be of interest:

Suppose the universe is deterministic, so that the state of the universe at any time completely determines its state at some later time. Suppose at the present time, just before t_now, you have a choice to make: there is a cup of coffee on a table in front of you and you have to decide whether to drink it. Before you decide, let us consider the state of the universe at some time, t_sooner, which is earlier than the present. The state of the universe at t_sooner should have been one from which your later decision, whatever it is going to be, can be determined: if you end up drinking the coffee at t_now, this should be implied by the state of the universe at t_sooner.

Assume we do not know whether you are going to drink the coffee, and hence do not know whether the state of the universe at t_sooner was one that led to you drinking it. Suppose that there were a number of conceivable states of the universe at t_sooner, each consistent with what you know in the present, which implied futures in which you drink the coffee at t_now; call these states D1, D2, D3, ... Dn. Suppose also that there were a number of conceivable states at t_sooner, each consistent with what you know in the present, which implied futures in which you do not drink the coffee at t_now; call these states N1, N2, N3, ... Nn. If you have just drunk the coffee at t_now, you now know that the state of the universe at t_sooner was one of the states D1, D2, D3, ... Dn. If you did not drink the coffee at t_now, you now know that it was one of the states N1, N2, N3, ... Nn.

Consider now the situation in the present, just before t_now, when you are faced with deciding whether to drink the coffee. If you choose to drink the coffee, then at t_sooner the universe will have been in one of the states D1, D2, D3, ... Dn, and if you choose not to drink it, then at t_sooner the universe will have been in one of the states N1, N2, N3, ... Nn. From your perspective, your choice is determining the previous state of the universe, as if backward causality were operating: when you are faced with choosing whether or not to drink the coffee, you are able to choose whether you want to live in a universe which was in one of the states D1, D2, D3, ... Dn or in one of the states N1, N2, N3, ... Nn in the past. Of course, there is no magical backward-causality effect operating here: the reality is that it is your decision which is being determined by the earlier state of the universe. However, this does nothing to change how things appear from your perspective.

Why is it that Newcomb's paradox worries people so much, while the same issue arising with everyday decisions does not seem to cause the same concern? The main reason is probably that the issue is less obvious outside the scope of contrived situations like that in Newcomb's paradox. With the example discussed here, you get to choose the state of the universe in the past, but only in very general terms: you know that you can choose to live in a universe that, in the past, was in one of the states D1, D2, D3, ... Dn, but you are not confronted with specific details about one of these states, such as knowing that the universe had a specific state in which some money was placed in a certain box (which is how the backward causality seems to operate in Newcomb's paradox). It may make it seem more like an abstract, philosophical issue than a real problem.

In reality, the lack of specific knowledge should not make us feel any better: in both situations you seem to be choosing the past as well as the future. You might say that you do not really get to choose the previous state of the universe, because it was in fact your decision that was determined by the previous state; but you could say the same about your decision to drink or not drink the coffee: you could say that whether you drink the coffee was determined by some earlier state of the universe, so you have only the appearance of a choice. When making choices we act as if we can decide, and this issue of the past being apparently dependent on our choices is no different from the normal consequences of our future being apparently dependent on our choices, even though our choices are themselves dependent on other things: we can act as if we choose it.
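
A tiny sketch of Almond's point, under the stated determinism assumption (the state labels are of course just placeholders): conditioning on your action merely selects the set of earlier states compatible with it.

```python
# A deterministic toy universe: each possible earlier state fully
# determines whether you end up drinking the coffee at t_now.
outcome_of = {
    "D1": "drink", "D2": "drink", "D3": "drink",
    "N1": "abstain", "N2": "abstain",
}

def compatible_pasts(action):
    """Earlier states consistent with observing this action at t_now."""
    return [s for s, outcome in outcome_of.items() if outcome == action]

# "Choosing" to drink is equivalent to learning that the universe was in
# one of the D-states all along; no backward causation is involved.
print(compatible_pasts("drink"))    # ['D1', 'D2', 'D3']
print(compatible_pasts("abstain"))  # ['N1', 'N2']
```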

Comment by pallas on Chocolate Ice Cream After All? · 2013-12-10T17:20:47.157Z · LW · GW

In the post you can read that I am not endorsing plain EDT, as it seems to lose in problems like Parfit's hitchhiker or counterfactual mugging. But in other games, for instance in Newcomblike problems, the fundamental trait of evidential reasoning seems to give the right answers (as long as one knows how to apply conditional independencies!). It sounds like a straw man to me that EDT should give foolish answers (passing over crucial issues like screening off, so that, for instance, we should waste money to make it more likely that we are rich when we don't know our bank balance) or otherwise be isomorphic to CDT. I think that a proper use of evidential reasoning leads to one-boxing in Newcomb's Problem, chewing gum in the medical version of Solomon's Problem (assuming that previous subjects knew nothing about any interdependence of chewing gum and throat abscesses; otherwise it might change), and picking chocolate ice cream in Newcomb's Soda (unless it is specified that we are the only study subjects who know about the interdependence of sodas and ice creams). CDT, on the other hand, seems to contain a buggy fatalistic component: it doesn't suggest investing resources to make a more desirable past more likely. (I admit that this is counterintuitive, but give it a chance and read the section "Newcomb's Problem's Problem of Free Will", as it might be game-changing if it turns out to be true.)

Comment by pallas on Chocolate Ice Cream After All? · 2013-12-10T15:14:49.682Z · LW · GW

Assuming i), I would rather say that when Omega tells me that if I choose carrots I'll have a heart attack, then almost certainly I'm not in a freak world but in a "normal" world where there is a causal mechanism (as common sense would call it). But the point stands that no causal mechanism is necessary for c) to be true and for the game to be coherent. (Again, this point only stands as long as one's definition of causal mechanism excludes the freak case.)

Comment by pallas on Chocolate Ice Cream After All? · 2013-12-10T14:30:31.631Z · LW · GW

I think I agree. But I would formulate it differently:

i) Omega's predictions are true. ii) Omega predicts that carrot-choosers have heart attacks.

c) Therefore, carrot-choosers have heart attacks.

As soon as you accept i), c) follows once we add ii). I don't know how you define "causal mechanism", but I can imagine a possible world where no biological mechanism connects carrot-choosing with heart attacks, yet where "accidentally" all the carrot-choosers have heart attacks (imagine running worlds on a computer countless times; one day we might observe such a freak world). Then c) would be true without there being any "causal mechanism" (as you would define it, I suppose). If you say that in such a freak world carrot-choosing and heart attacks are causally connected, then I agree that c) can only be true if there is an underlying causal mechanism.

Comment by pallas on Chocolate Ice Cream After All? · 2013-12-10T13:54:00.852Z · LW · GW

Those who pick a carrot after hearing Omega's prediction, or without hearing the prediction? Those are two very different situations, and I am not sure which one you meant.

That's a good point. I agree with you that it is crucial to keep those two situations apart. This is exactly what I was trying to address in considering Newcomb's Problem and Newcomb's Soda: what do the agents (the previous study subjects) know? It seems to me that the games aren't defined precisely enough.
Once we specify a game in a way that all the agents hear Omega's prediction (as in Newcomb's Problem), the prediction provides actionable advice, since all the agents belong to the same reference class. If we, and we alone, know about a prediction (whereas the other agents don't), the situation is different and the actionable advice is no longer provided, at least not to the same extent.
When I propose a game in which Omega predicts whether people pick carrots, and I don't specify that this applies only to those who don't know about the prediction, then I would not assume prima facie that the prediction applies only to those who don't know about it. Without further specification, I would assume that it applies to "people", which is a superset of "people who know of the prediction".

Comment by pallas on Chocolate Ice Cream After All? · 2013-12-10T12:11:21.386Z · LW · GW

I think it is more an attempt to show that a proper use of updating results in evidential reasoners giving the right answers in Newcomblike problems. Furthermore, it is an attempt to show that the medical version of Solomon's Problem and Newcomb's Soda aren't defined precisely enough, since it is not clear what the study subjects were aware of. Another part tries to show that people get confused when thinking about Newcomb's Problem because they use a dated perception of time as well as a problematic notion of free will.

Comment by pallas on Chocolate Ice Cream After All? · 2013-12-10T12:02:45.674Z · LW · GW

Thanks for the comment!

However, in the A,B-Game we assume that a specific gene makes people presented with two options choose the worse one -- please note that I have not mentioned Omega in this sentence yet! So the claim is not that Omega is able to predict something, but that the gene can determine something, even in the absence of Omega. It's no longer about Omega's superior human-predicting powers; Omega is there merely to explain the powers of the gene.

I think there might be a misunderstanding. Although I don't believe it to be impossible that a gene causes you to think in specific ways, such a mechanism is not required in the setting of the game. You can also imagine a game where Omega predicts that those who pick a carrot out of a basket of vegetables are the ones who will die shortly of a heart attack. As long as we believe in Omega's forecasting power, its statements are relevant even if we cannot point to any underlying causal mechanism. As long as the predicted situation is logically possible (here, that all agents who pick the carrot die), we don't need to reject Omega's prediction just because such a compilation of events would be unlikely. We might, of course, call Omega's predictions into question; still, as long as we believe in its forecasting power (after such an update), we have to take the prediction into account. Hence, the A,B-Game holds even if you don't know of any causal connection between the genes and the behaviour; we only need a credible Omega.

Comment by pallas on Be Skeptical of Correlational Studies · 2013-11-28T12:00:39.539Z · LW · GW

Correlation by itself without known connecting mechanisms or relationships does not imply causation.

The Bayesian approach would suggest that we assign a causation credence to every correlation we observe. Of course, detecting confounders is very important, since it provides you with updates. However, a correlation without known connecting mechanisms does imply causation, namely probabilistically. A Bayesian updater would prefer to talk about credences in causation, which can be shifted up and down. It would be a (sometimes dangerous) simplification to deal, in our map, with discrete values like "just correlation" and "real causation". Such a simplification may be useful as a heuristic in everyday life, but I'd suggest not overgeneralizing it.
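
A minimal sketch of what I mean (priors and likelihoods invented purely for illustration): observing a correlation shifts a credence in causation upward without settling the question.

```python
# Hypotheses: the observed correlation is produced by causation
# or by confounding. All numbers are invented for illustration.
prior = {"causation": 0.2, "confounding": 0.8}

# Invented likelihoods of observing a correlation this strong
# under each hypothesis:
likelihood = {"causation": 0.9, "confounding": 0.3}

evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}
print(posterior)
# {'causation': 0.4286..., 'confounding': 0.5714...}
# The credence in causation moved from 0.2 to ~0.43: shifted, not settled.
```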

Comment by pallas on Weak repugnant conclusion need not be so repugnant given fixed resources · 2013-11-23T12:26:55.891Z · LW · GW

There is no contradiction to rejecting total utilitarianism and choosing torture.

For one thing, I compared choosing torture with the repugnant conclusion, not with total utilitarianism. For another, I didn't suspect there to be any contradiction. However, agents with intransitive dispositions are exploitable.

You can also descriptively say that, structurally, refusing total utilitarianism because of the repugnant conclusion is equal to refusing deontology because we've realised that two deontological absolutes can contradict each other. Or, more simply, refusing X because of A is structurally the same as refusing X' because of A'.

My fault, I should have been more precise. I wanted to say that the two repugnant conclusions (one based on dust specks, the other based on "17") are similar because quite a few people would, upon reflection, reject any kind of scope neglect that renders one intransitive.

Just because one can reject total utilitarianism (or anything) for erroneous reasons, does not mean that every reason for rejecting total utilitarianism must be an error.

I agree. Again, I didn't claim the contrary. I didn't argue against the rejection of total utilitarianism. However, I argued against the repugnant conclusion, since it simply restates that evolution brought about limbic systems that make human brains choose in intransitive ways. If we considered this a bias in the dust speck example, the same would apply to the repugnant conclusion.

Comment by pallas on Weak repugnant conclusion need not be so repugnant given fixed resources · 2013-11-19T00:18:41.481Z · LW · GW

I generally don't see why the conclusion is considered repugnant not only as a gut reaction but also upon reflection, since we are simply dealing with another case of "dust specks vs. torture", an example that illustrates how our limbic system is not built to scale up emotions linearly and to prevent intransitive dispositions.

We can imagine a world in which evolutionary mechanisms brought forth human brains that, by some limbic limitation, simply cannot imagine the integer "17", whereas all the other numbers from 1 to 20 can be imagined just as we would expect. In such a world a repugnant conclusion against total utilitarianism could sound somewhat like this: "Following total utilitarianism, you would have to prefer a world A where 5 people are being tortured to a world B where 'only' 17 people are being tortured. This seems absurd." In both cases we deal with intransitive dispositions. In the first case, people tend to adjust downward the disutility of a single dust speck, so that when we incrementally examine possible trades between dust specks and torture, people find that A<B<C<D<E and yet E<A. The same goes for the second case: people think that 5 people being tortured is less bad than 10 people, and 10 people less bad than 15 people, but they judge 17 to be less bad than 15, as the last outcome cannot be imagined as vividly as the others.
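
To make the thought experiment concrete, here is a toy model (my own, and obviously a caricature of a limbic system): a perceived-badness function that cannot represent 17 reverses the judged ordering at exactly one step.

```python
def perceived_badness(n_tortured):
    """Toy limbic system: scales linearly except that the integer 17
    cannot be represented and is perceived as barely anything."""
    return 1 if n_tortured == 17 else n_tortured

cases = [5, 10, 15, 17]
for a, b in zip(cases, cases[1:]):
    worse = a if perceived_badness(a) > perceived_badness(b) else b
    print(f"{a} vs {b}: {worse} tortured feels worse")
# 5 vs 10: 10 feels worse; 10 vs 15: 15 feels worse;
# 15 vs 17: 15 feels worse, so the judged ordering no longer
# tracks the actual number of people tortured.
```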

I don't want to make the case that some moral theory is "true"; I don't even know what that could mean. But I think one can descriptively say that, structurally, refusing total utilitarianism because of the repugnant conclusion is equal to refusing total utilitarianism in another world where we are bad at imagining "17" and where we find it absurd that 17 people being tortured could be considered worse than 5 people being tortured.

Comment by pallas on Welcome to Less Wrong! (6th thread, July 2013) · 2013-11-18T00:50:17.285Z · LW · GW

Hi! I've been lurking around on the blog for a while and look forward to engaging actively from now on. Generally, I'm strongly interested in AI research, rationality in general, Bayesian statistics, and decision problems. I hope that I will keep on learning a lot and will also contribute useful insights to this community, since what people here are trying to do is very valuable. So, see you on the "battlefield". Hi to everyone!