What is causality to an evidential decision theorist?
post by paulfchristiano · 2022-04-17T16:00:01.535Z · LW · GW · 26 comments
(Subsumed by: Timeless Decision Theory, EDT=CDT)
People sometimes object to evidential decision theory by saying: “It seems like the distinction between correlation and causation is really important to making good decisions in practice. So how can a theory like EDT, with no role for causality, possibly be right?”
Long-time readers probably know my answer, but I want to articulate it in a little bit more detail. This is essentially identical to the treatment of causality in Eliezer Yudkowsky’s manuscript Timeless Decision Theory, but much shorter and probably less clear.
Causality and conditional independence
If a system is well-described by a causal diagram, then it satisfies a complex set of statistical relationships. For example:
- In the causal graph A ⟶ B ⟶ C, the variables A and C are independent given B.
- In the graph A ⟶ B ⟵ C, the variables A and C are independent, but are dependent given B.
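Both patterns are easy to verify numerically. Here is a minimal simulation sketch (Python with numpy; my own illustration, not part of the original post):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Chain A -> B -> C: A and C are correlated, but independent given B.
A = rng.integers(0, 2, n)
B = A ^ (rng.random(n) < 0.1)        # B is a noisy copy of A
C = B ^ (rng.random(n) < 0.1)        # C is a noisy copy of B
print(np.corrcoef(A, C)[0, 1])                   # clearly nonzero
print(np.corrcoef(A[B == 0], C[B == 0])[0, 1])   # ~0: A, C independent given B

# Collider A -> B <- C: A and C are independent, but dependent given B.
A = rng.integers(0, 2, n)
C = rng.integers(0, 2, n)
B = A ^ C                            # B is determined by both causes
print(np.corrcoef(A, C)[0, 1])                   # ~0: marginally independent
print(np.corrcoef(A[B == 0], C[B == 0])[0, 1])   # 1.0: conditioning on B links them
```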
To an evidential decision theorist, these kinds of statistical relationships are the whole story about causality, or at least about its relevance to decisions. We could still ask why such relationships exist, but the answer wouldn’t matter to what we should do.
EDT = CDT
Now suppose that I’m making a decision X, trying to optimize Y.
And suppose further that there is a complicated causal diagram containing X and Y, such that my beliefs satisfy all of the statistical relationships implied by that causal diagram.
Note that this diagram will necessarily contain me and all of the computation that goes into my decision, and so it will be (much) too large for me to reason about explicitly.
Then I claim that an evidential decision theorist will endorse the recommendations of CDT (using that causal diagram):
- EDT recommends maximizing the conditional expectation of Y, conditioned on all the inputs to X. Write Z for all of these inputs.
- It might be challenging to condition on all of Z, given limits on our introspective ability, but we’d recommend doing it if possible. (At least for the rationalist’s interpretation of EDT, which evaluates expected utility conditioned on a fact of the form “I decided X given inputs Z.”)
- So if we can describe a heuristic that gives us the same answer as conditioning on all of Z, then an EDT agent will want to use it.
- I’ll argue that CDT is such a heuristic.
- In a causal diagram, there is an easy graphical condition (d-connectedness) to see whether (and how) X and Y are related given Z:
- We need to have a path from X to Y that satisfies certain properties:
- That path can start out moving upstream (i.e. against the causal arrows); it may switch from moving upstream to downstream at any time (including at the start); it must switch direction whenever it hits a node in Z; and it may only switch from moving downstream to upstream when it hits a node in Z.
- If Z includes exactly the causal parents of X, then it’s easy to check that the only way for X and Y to be d-connected is by a direct downstream path from X to Y.
- Under these conditions, it’s easy to see that intervening on X is the same as conditioning on X. (Indeed you could check this more directly from the definition of a causal intervention, which is structurally identical to conditioning in cases where we are already conditioning on all parents.)
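As a concrete check of that last point, here is a toy three-node model (my own sketch, with invented numbers) in which a confounder U is X's only causal parent, and conditioning on X together with its parent agrees exactly with intervening on X:

```python
import itertools

# U -> X, U -> Y, X -> Y, all binary; U is X's only causal parent.
p_u = [0.5, 0.5]                      # P(U)
p_x_u = [[0.8, 0.2], [0.3, 0.7]]      # P(X | U)
p_y_ux = [[[0.9, 0.1], [0.4, 0.6]],   # P(Y | U=0, X=0..1)
          [[0.7, 0.3], [0.2, 0.8]]]   # P(Y | U=1, X=0..1)

def joint(u, x, y):
    return p_u[u] * p_x_u[u][x] * p_y_ux[u][x][y]

def cond_y(u, x):
    """P(Y=1 | U=u, X=x): ordinary conditioning on X and its parent U."""
    return joint(u, x, 1) / sum(joint(u, x, y) for y in (0, 1))

def do_y(u, x):
    """P(Y=1 | U=u, do(X=x)): intervening cuts the U -> X edge."""
    return p_y_ux[u][x][1]

for u, x in itertools.product((0, 1), repeat=2):
    assert abs(cond_y(u, x) - do_y(u, x)) < 1e-12
print("conditioning on X given its parents matches intervening on X")
```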
Moreover, once the evidential decision-theorist’s problem is expressed this way, they can remove all of the causal nodes upstream of X, since they have no effect on the decision. This is particularly valuable because that contains all of the complexity of their own decision-making process (which they had no hope of modeling anyway).
So if the EDT agent can find a causal structure that reflects their (statistical) beliefs about the world, then they will end up making the same decision as a CDT agent who believes in the same causal structure.
Whence subjective causality?
You might think: causal diagrams encode a very specific kind of conditional independence structure. Why would we see that structure in the world so often? Is this just some weird linguistic game we are playing, where you can rig up some weird statistical structure that happens to give the same conclusions as more straightforward reasoning from causality?
Indeed, one easy way to have statistical relationships is to have “metaphysically fundamental” causality: if a world contains many variables, each of which is an independent stochastic function of its parents in some causal diagram, then those variables will satisfy all the conditional independencies implied by that causal diagram.
If this were the only way that we got subjective causality, then there’d be no difference between EDT and CDT, and no one would care about whether we treated causality as subjective or metaphysically fundamental.
But it’s not. There are other sources for similar statistical relationships. And moreover, the “metaphysically fundamental causality” isn’t actually consistent with the subjective beliefs of a logically bounded agent.
We can illustrate both points with the calculator example from Yudkowsky’s manuscript:
- Suppose there are two calculators, one in Mongolia and one on Neptune, each computing the same function (whose value we don’t know) at the same instant.
- Our beliefs about the two calculators are correlated, since we know they compute the same function. This remains true after conditioning on all the physical facts about the two calculators.
- But in the “metaphysically fundamental” causal diagram, the results of the two calculators should be d-separated once we know the physical facts about them (since there isn’t even enough time for causal influences to propagate between them).
- We can recover the correct conditional independencies by adding a common cause of the two calculators, representing “what is the correct output of the calculation?” We might describe this as “logical” causality.
This kind of “logical” causality can lead to major deviations from the CDT recommendation in cases where the EDT agent’s decision is highly correlated with other facts about the environment through non-physically-causal channels. For example: if there are two identical agents, or if someone else is reasoning about the agent’s decision sufficiently accurately, then the EDT agent would be inclined to say that the logical facts about their decision “cause” physical facts about the world (and hence induce correlations), whereas a CDT agent would say that those correlations should be ignored.
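A toy version of the identical-agents case (my own construction, assuming a prisoner's-dilemma-like payoff matrix) makes the deviation concrete: two copies of the same deterministic algorithm play a one-shot game, so conditioning on my output also pins down my twin's output, while intervening does not.

```python
# Payoffs to me for (my_action, twin_action); 1 = cooperate, 0 = defect.
payoff = {(1, 1): 3, (1, 0): 0, (0, 1): 4, (0, 0): 1}

prior_twin_coop = 0.5   # CDT's marginal belief about the twin, held fixed under do()

def edt_value(a):
    # Conditioning on "my algorithm outputs a" also pins down the twin's output.
    return payoff[(a, a)]

def cdt_value(a):
    # Intervening severs the logical correlation; the twin keeps its prior.
    return prior_twin_coop * payoff[(a, 1)] + (1 - prior_twin_coop) * payoff[(a, 0)]

print("EDT picks:", max((1, 0), key=edt_value))   # cooperates: 3 > 1
print("CDT picks:", max((1, 0), key=cdt_value))   # defects: 2.5 > 1.5
```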
Punchline
EDT and CDT agree under two conditions: (i) we require that our causal model of the world and our beliefs agree in the usual statistical sense, i.e. that our beliefs satisfy the conditional independencies implied by our causal model, and (ii) we evaluate utility conditioned on “I make decision X after receiving inputs Z” rather than conditioning on “I make decision X in the current situation” without including relevant facts about the current situation.
In practice, I think the main way CDT and EDT differ is that CDT ends up in a complicated philosophical discussion about “what really is causality?” (and so splinters into a host of theories) while EDT picks a particular answer: for EDT, causality is completely characterized by condition (i), that our beliefs and our causal model agree. That makes it obvious how to generalize causality to logical facts (or to arbitrary universes with very different laws), while recovering the usual behavior of causality in typical cases.
I believe the notion of causality that is relevant to EDT is the “right” one, because causality seems like a concept developed to make and understand decisions (both over evolutionary time and more importantly over cultural evolution) rather than something ontologically fundamental that is needed to even define a correct decision.
If we take this perspective, it doesn’t matter whether we use EDT or CDT. I think this perspective basically accounts for intuitions about the importance of causality to decision-making, as well as the empirical importance of causality, while removing most of the philosophical ambiguity about causality. And it’s a big part of why I don’t feel particularly confused about decision theory.
26 comments
comment by paulfchristiano · 2022-04-18T08:12:49.570Z · LW(p) · GW(p)
So if we can describe a heuristic that gives us the same answer as conditioning on all of Z, then an EDT agent will want to use it.
This is wrong or at least badly incomplete. I don't think it matters to the main point of this post (that EDT does "normal-looking causal inference" in normal cases). But it's pretty central for the actual live philosophical debates about EDT v CDT v TDT.
In particular, it's true that we'd like to condition on all of Z, but if we lack introspective access to parts of Z then this procedure won't do that. It ignores effects via Z but doesn't actually know the values in Z, so there's no real reason to ignore those effects. Actually handling this issue is very subtle and has been discussed a lot. I think it's fine if you use any algorithm A that conditions on A() = X, but in general it's very messy to talk about algorithms that take facts as inputs without knowing those facts.
comment by IlyaShpitser · 2022-04-17T19:27:05.784Z · LW(p) · GW(p)
For the benefit of other readers: this post is confused.
Specifically on this (although possibly also on other stuff): (a) causal and statistical DAGs are fundamentally not the same kind of object, and (b) no practical decision theory used by anyone includes the agent inside the DAG in the way this post describes.
---
"So if the EDT agent can find a causal structure that reflects their (statistical) beliefs about the world, then they will end up making the same decision as a CDT agent who believes in the same causal structure."
A -> B -> C and A <- B <- C reflect the same statistical beliefs about the world.
↑ comment by paulfchristiano · 2022-04-18T06:20:38.278Z · LW(p) · GW(p)
I can't tell if this is a terminological or substantive disagreement (it sounds terminological, but I don't think I yet understand it).
causal and statistical DAGs are fundamentally not the same kind of object
Could you say something about the difference and how it is relevant to this post? Like, which claim made in the post is this contradicting?
Is this an objection to "If a system is well-described by a causal diagram, then it satisfies a complex set of statistical relationships"? Or maybe "To an evidential decision theorist, these kinds of statistical relationships are the whole story about causality, or at least about its relevance to decisions."?
no practical decision theory used by anyone includes the agent inside the DAG in the way this post describes.
What is EDT if you don't include the agent inside the model of the world? Doesn't almost all philosophical discussion of EDT vs CDT involve inferences about the process generating the decision, and hence presume that we have beliefs about this process? Are you saying that "practical" communities use this language in a different way from the philosophical community? Or that "beliefs about the process generating decisions" aren't captured in the DAG?
A -> B -> C and A <- B <- C reflect the same statistical beliefs about the world.
That's true but I don't understand its relevance. I think this is probably related to the prior point about the agent including itself in the causal diagram. (Since e.g. decision --> A --> B --> C and decision --> A <-- B <-- C correspond to very different beliefs about the world.)
↑ comment by tailcalled · 2022-04-17T19:38:43.174Z · LW(p) · GW(p)
I disagree, it doesn't look confused to me.
Specifically on this (although possibly also on other stuff): (a) causal and statistical DAGs are fundamentally not the same kind of object,
The post explicitly discusses the different views of causality.
(b) no practical decision theory used by anyone includes the agent inside the DAG in the way this post describes.
That seems in line with what the post describes: "and so it will be (much) too large for me to reason about explicitly".
comment by Lukas Finnveden (Lanrian) · 2022-06-11T17:07:10.112Z · LW(p) · GW(p)
In a causal diagram, there is an easy graphical condition (d-connectedness) to see whether (and how) X and Y are related given Z:
We need to have a path from X to Y that satisfies certain properties:
That path can start out moving upstream (i.e. against the causal arrows); it may switch from moving upstream to downstream at any time (including at the start); it must switch direction whenever it hits a node in Z; and it may only switch from moving downstream to upstream when it hits a node in Z.
I think this is wrong, because it would imply that X and Y are d-connected in [X <- Z -> Y]. It should say:
That path can start out moving upstream (i.e. against the causal arrows); it may switch from moving upstream to downstream at any time (including at the start); it can only connect to a node in Z if it's currently moving downstream; and it must (and can only) switch from moving downstream to upstream when it hits a node in Z.
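One way to sanity-check this correction is with a small d-connection test. The sketch below (mine, not the commenter's) uses the standard moralized-ancestral-graph criterion rather than the path rules, and confirms that X and Y are blocked given Z in X <- Z -> Y but connected given B in X -> B <- Y:

```python
from collections import defaultdict

def ancestors(nodes, parents):
    seen, stack = set(), list(nodes)
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(parents.get(n, ()))
    return seen

def d_connected(x, y, z, parents):
    """True iff x and y are d-connected given set z, for a DAG expressed
    as a child -> list-of-parents mapping."""
    relevant = ancestors({x, y} | z, parents)
    adj = defaultdict(set)
    for child, ps in parents.items():
        if child not in relevant:
            continue
        for p in ps:                   # undirected parent-child edges
            adj[child].add(p)
            adj[p].add(child)
        for a in ps:                   # moralize: marry co-parents
            for b in ps:
                if a != b:
                    adj[a].add(b)
    stack, seen = [x], set()           # undirected search avoiding z
    while stack:
        n = stack.pop()
        if n == y:
            return True
        if n in seen or n in z:
            continue
        seen.add(n)
        stack.extend(adj[n])
    return False

fork = {"X": ["Z"], "Y": ["Z"]}                 # X <- Z -> Y
print(d_connected("X", "Y", {"Z"}, fork))       # False: blocked given Z

collider = {"B": ["X", "Y"]}                    # X -> B <- Y
print(d_connected("X", "Y", {"B"}, collider))   # True: conditioning on B connects
```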
comment by Vanessa Kosoy (vanessa-kosoy) · 2022-04-18T06:01:24.731Z · LW(p) · GW(p)
The real problem is how to construct natural rich classes of hypotheses-for-agents which can be learned statistically efficiently. All the classical answers to this go through the cybernetic framework, which imposes a particular causal structure. At the same time, examples like repeated XOR blackmail or repeated counterfactual mugging are inconsistent with any Bayesian realization of that causal structure. Because of this, classical agents (e.g. Bayesian RL with some class of communicating POMDPs as its prior) fail to converge to optimal behavior in those examples. On the other hand, infra-Bayesian agents do converge to optimal behavior. So, there is something to be confused about in decision theory, and arguably IB is the solution for that.
comment by tailcalled · 2022-04-29T09:05:38.502Z · LW(p) · GW(p)
One place where evidential decision theory gets used is with prediction markets. For instance, here people decide whether to support or oppose a policy based on what happens when you condition on that policy being implemented. But this decision-making procedure is potentially flawed, for reasons that CDT can explain.
Arguably, the key problem is that the prediction market conditions on a different decision question (what do politicians choose?) than they are used to control (what policy should I support?). But there seems to be a subagent relationship between the two decisions, so this suggests something subtle happens when comparing CDT and EDT for subagent systems.
I haven't seen this interaction explored by anyone, and I confused myself a bunch when I tried to reason it out. But this feels like one of the biggest obstacles I have to accepting EDT=CDT.
comment by RyanCarey · 2022-04-18T20:56:33.933Z · LW(p) · GW(p)
You say two things that seem in conflict with one another.
[Excerpt 1] If a system is well-described by a causal diagram, then it satisfies a complex set of statistical relationships. For example ... To an evidential decision theorist, these kinds of statistical relationships are the whole story about causality, or at least about its relevance to decisions.
[Excerpt 2] [Suppose] that there is a complicated causal diagram containing X and Y, such that my beliefs satisfy all of the statistical relationships implied by that causal diagram. EDT recommends maximizing the conditional expectation of Y, conditioned on all the inputs to X. [emphasis added]
In [1], you say that the EDT agent only cares about the statistical relationships between variables, i.e. P(V) over the set of variables V in a Bayes net - a BN that apparently need not even be causal - nothing more.
In [2], you say that the EDT agent needs to know the parents of X. This indicates that the agent needs to know something that is not entailed by P(V), and something that is apparently causal.
Maybe you want the agent to know some causal relationships, i.e. the relationships with decision-parents, but not others?
Under these conditions, it’s easy to see that intervening on X is the same as conditioning on X.
This is true for decisions that are in the support, given the assignment to the parents, but not otherwise. CDT can form an opinion about actions that "never happen", whereas EDT cannot.
↑ comment by tailcalled · 2022-04-18T21:32:02.514Z · LW(p) · GW(p)
In [2], you say that the EDT agent needs to know the parents of X. This indicates that the agent needs to know something that is not entailed by P(V).
"The parents of X" is stuff like the observations that agent has made, as well as the policy the agent uses. It is bog-standard for EDT to use this in its decisions, and because of the special nature of those variables, it does not require knowing an entire causal model.
↑ comment by Ege Erdil (ege-erdil) · 2022-04-18T21:10:20.294Z · LW(p) · GW(p)
The EDT agent needs to know the inputs to its own decision process to even make decisions at all, so I don't think there's a causal implication there. Obviously no decision theory can get off the ground if it's not permitted to have any inputs. It's just that in a causal model the inputs to a decision process would have to be represented by causal arrows going from the inputs to the decision-maker.
If by "coinciding for decisions that are in the support" you mean what I think that means, then that's true re: actions that never happen, but it's not clear why actions that never happen should influence your assessment of how a decision theory works. Implicitly when you do anything probabilistic you assume that sets of null measure can be thrown away without changing anything.
↑ comment by tailcalled · 2022-04-18T21:33:43.930Z · LW(p) · GW(p)
If by "coinciding for decisions that are in the support" you mean what I think that means, then that's true re: actions that never happen, but it's not clear why actions that never happen should influence your assessment of how a decision theory works. Implicitly when you do anything probabilistic you assume that sets of null measure can be thrown away without changing anything.
The issue is that you need to condition on the actions that never happen in order to compute their expected utility, which is necessary for deciding not to take them.
↑ comment by Ege Erdil (ege-erdil) · 2022-04-18T21:45:11.572Z · LW(p) · GW(p)
I don't think this is a real-world problem, because you can just do some kind of relaxation by adding random noise to your actions and then letting the standard deviation go to zero. In practice there aren't perfectly deterministic systems anyway.
It's likely that some strategy like that also works in theory & has already been worked out by someone, but in any event it doesn't seem like a serious obstacle unless the "renormalization" ends up being dependent on which procedure you pick, which seems unlikely.
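A sketch of what that relaxation might look like (my own toy illustration of the "dividing by zero" issue, not a worked-out theory): give every action probability at least ε, so E[U | A=a] is defined for all actions, then shrink ε.

```python
import random

rng = random.Random(0)
utility = {"five": 5.0, "ten": 10.0}

def estimate_conditional_utilities(policy, eps, n=100_000):
    # Run the epsilon-perturbed policy and estimate E[U | A=a] for each action.
    totals, counts = {"five": 0.0, "ten": 0.0}, {"five": 0, "ten": 0}
    for _ in range(n):
        a = rng.choice(["five", "ten"]) if rng.random() < eps else policy
        totals[a] += utility[a]
        counts[a] += 1
    return {a: totals[a] / counts[a] for a in totals if counts[a] > 0}

# A deterministic "take five" agent never observes "ten", so E[U | A=ten]
# is undefined (0/0). With any eps > 0 it becomes estimable, and it stays
# stable as eps -> 0:
for eps in (0.2, 0.02, 0.002):
    print(eps, estimate_conditional_utilities("five", eps))
```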
↑ comment by tailcalled · 2022-04-18T21:57:21.323Z · LW(p) · GW(p)
This is called epsilon-exploration in RL.
↑ comment by Ege Erdil (ege-erdil) · 2022-04-18T22:05:01.256Z · LW(p) · GW(p)
I think epsilon-exploration is done for different reasons, but there are a bunch of cases in which "add some noise and then let the noise go to zero" is a viable strategy to solve problems. Here it's done mainly to sidestep an issue of "dividing by zero", which makes me think that there's some kind of argument which sidesteps it by using limits or something like that. It feels similar to what happens when you try to divide by zero when differentiating a function.
The RL case is different and is more reminiscent of e.g. simulated annealing, where adding noise to an optimization procedure and letting the noise tend to zero over time improves performance compared to a more greedy approach. I don't think these are quite the same thing as what's happening with the EDT situation here, it seems to me like an application of the same technique for quite different purposes.
↑ comment by jessicata (jessica.liu.taylor) · 2022-04-19T00:20:40.713Z · LW(p) · GW(p)
Here it’s done mainly to sidestep an issue of “dividing by zero”, which makes me think that there’s some kind of argument which sidesteps it by using limits or something like that.
Here's my attempt at sidestepping: EDT solves 5 and 10 with conditional oracles.
comment by tailcalled · 2022-04-17T20:10:28.476Z · LW(p) · GW(p)
I believe the notion of causality that is relevant to EDT is the “right” one, because causality seems like a concept developed to make and understand decisions (both over evolutionary time and more importantly over cultural evolution) rather than something ontologically fundamental that is needed to even define a correct decision.
🤔
One distinction I would like to draw is between the causality related to your decisions, and the causality related to everything else; "decision counterfactuals vs theory counterfactuals". Your argument seems to show that decision counterfactuals are well handled by EDT.
But one type of decision we need to make is about what sort of information to go out and gather. And one type of information we might want to go out and gather would be causal information, not necessarily about the effects of our decisions, but instead about the effects of variables in the world on each other.
This seems like a place where EDT and CDT could end up differing, though it seems quite hard to model, so it's hard to know.
comment by Richard_Kennaway · 2022-04-17T18:13:13.158Z · LW(p) · GW(p)
Where does the EDT-ist get these causal diagrams? This looks like changing the definition of EDT to include causation, the absence of which was what the term "EDT" was coined for.
↑ comment by Ege Erdil (ege-erdil) · 2022-04-17T18:50:58.366Z · LW(p) · GW(p)
My reading of the post is that causal diagrams to an EDT-ist just represent the conjunction of certain kinds of conditional independence statements between different variables. I don't see how one needs to be a CDT-ist to believe that some random variables can be independent while others aren't.
↑ comment by Richard_Kennaway · 2022-04-17T19:51:48.648Z · LW(p) · GW(p)
My reading of the post is that causal diagrams to an EDT-ist just represent the conjunction of certain kinds of conditional independence statements between different variables.
Causal diagrams do not represent the conjunction of certain kinds of conditional independence statements between different variables. See Ilya's comment.
↑ comment by Ege Erdil (ege-erdil) · 2022-04-17T20:04:13.730Z · LW(p) · GW(p)
They do. I didn't say the correspondence is bijective, and I don't think the post says this either.
↑ comment by Richard_Kennaway · 2022-04-17T20:23:23.433Z · LW(p) · GW(p)
They do not. Causal diagrams represent causal relationships between variables. Given certain assumptions (like the Markov and Faithfulness properties), a given causal diagram may be consistent or not with various structures of dependence among the variables. Those structures may be representable by DAGs (which in this role are called Bayesian networks), but without the causal interpretation, which is something separate from the statistics, a Bayesian network is not a causal DAG. Neither type of DAG is a representation of the other.
↑ comment by tailcalled · 2022-04-17T20:30:10.667Z · LW(p) · GW(p)
The faithfulness property is necessary for the causal graph to also capture the dependence relationships, but not for it to capture independence relationships.
I'm confused about what you mean by the Markov property, I was under the impression that this property is generally required for causal diagrams to be considered true. Looking it up, perhaps you're talking about correlated error terms? Or is it something else?
↑ comment by Richard_Kennaway · 2022-04-18T12:48:25.519Z · LW(p) · GW(p)
I meant the same Markov property as the one you refer to. Yes, it is generally assumed. You can't do much with causal diagrams without it. Faithfulness is assumed less often than Markov, but both, when made, are explicit assumptions/requirements/axioms/hypotheses/whatever.
↑ comment by Ege Erdil (ege-erdil) · 2022-04-17T20:35:14.229Z · LW(p) · GW(p)
I think the post is using "causal graph" to refer to Bayesian networks, since the comments about conditional independence etc. don't make sense otherwise. Your original point about "where does the EDT-ist get these causal diagrams" is beside the point. Paul's point is that the EDT-ist has no causal diagrams, but if you imagine that there's a CDT-ist armed with a Bayes net model which includes the determinants of his own decision, then the EDT-ist will make the same decisions as him using only the conditional dependence structure implied by the Bayes net.
If you're using a different terminology that's fine, but the point made in the post is still valid with the Bayesian network interpretation and doesn't depend on this terminological objection.
↑ comment by Richard_Kennaway · 2022-04-18T12:41:49.520Z · LW(p) · GW(p)
I think the post is using "causal graph" to refer to Bayesian networks
That is, the post displays exactly the confusion that Ilya mentioned.
a CDT-ist armed with a Bayes net model which includes the determinants of his own decision
That is stepping outside of what CDT is.
There is a deplorable tendency in these discussions for people to redefine basic terms to mean things different from what the terms were originally coined to mean. "EDT" and "CDT" already mean things. EDT knows nothing of causation and CDT knows nothing of including the deciding agent in the causal graph. This is why EDT fails on the Smoking Lesion and CDT fails on Newcomb. Redefining the two terms to mean the same thing does not change the fact that the decision theories they originally named are not the same thing, any more than writing "London" and "Paris" next to Berlin on a map will make London and Paris the same city in Germany. All it does is degrade the usefulness of the map.
Instead of redefining the terms to mean whatever improved decision theory one comes up with, it would be better to come up with that improved theory and give it a new name. See, for example, TDT, UDT, etc.
↑ comment by Ege Erdil (ege-erdil) · 2022-04-18T18:58:31.226Z · LW(p) · GW(p)
That is, the post displays exactly the confusion that Ilya mentioned.
As Paul has pointed out in the comments, the "confusion" in the post amounts to nothing more than a terminological dispute as far as I can see. It's not a dispute over what CDT or EDT mean; it's a dispute over what "causal network" means, and as far as I can see it's irrelevant to the thrust of Paul's argument.
That is stepping outside of what CDT is.
How? CDT is totally consistent with a situation in which you include yourself in your model. I can have a model (which I can't compute explicitly) in which my actions are all caused by some inputs, but the algorithm I use to make decisions is "which action gets me the highest expected utility if I condition on that action's do operator?"
This means I effectively ignore the causal determinants of my own decision when making the decision, but that doesn't mean my model of the world must be ignorant of them.
EDT knows nothing of causation...
This is Paul's whole point.
...and CDT knows nothing of including the deciding agent in the causal graph.
I've already responded to this above.
This is why EDT fails on the Smoking Lesion...
Paul's point is that EDT fails on the smoking lesion problem if the EDT-ist neglects to condition on all the facts that they know about the situation. If the EDT-ist correctly conditions on their utility function, they'll notice that there's actually no correlation between smoking and lesions among people with that utility function, so they'll correctly decide to smoke if they think it's positive expected utility.
Since Paul's argument re: equivalence between CDT and EDT under his conditions is sound, it really has to be like this. The apparent failure of EDT has to go away once the problem is sufficiently formalized such that the EDT-ist can condition on all inputs to their decision process. However, Paul also says that CDT fails more gracefully than EDT, in the sense that if the EDT-ist neglects to condition on some relevant facts then they can fall into the trap of not smoking in the smoking lesion problem. CDT is more robust to this kind of failure.
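A toy population (my own invented numbers, not from the comment) illustrates that claim: once you condition on the desire to smoke, standing in for the agent's utility function, smoking carries no further evidence about the lesion.

```python
import random

rng = random.Random(1)
people = []
for _ in range(200_000):
    lesion = rng.random() < 0.2
    desire = rng.random() < (0.9 if lesion else 0.1)    # lesion causes the desire
    smokes = desire if rng.random() < 0.95 else not desire
    cancer = lesion                                     # cancer comes from the lesion only
    people.append((lesion, desire, smokes, cancer))

def p_cancer(cond):
    group = [p for p in people if cond(p)]
    return sum(p[3] for p in group) / len(group)

# Naive EDT: smoking looks terrible, because it is evidence of the lesion.
print(p_cancer(lambda p: p[2]), p_cancer(lambda p: not p[2]))          # big gap
# Conditioning on the desire: among desirers, smoking is uninformative.
print(p_cancer(lambda p: p[1] and p[2]),
      p_cancer(lambda p: p[1] and not p[2]))                           # ~equal
```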
Redefining the two terms to mean the same thing does not change the fact that the decision theories they originally named are not the same thing, any more than writing "London" and "Paris" next to Berlin on a map will make London and Paris the same city in Germany. All it does is degrade the usefulness of the map.
Paul doesn't redefine either EDT or CDT, so I don't know what you're talking about here.
Instead of redefining the terms to mean whatever improved decision theory one comes up with, it would be better to come up with that improved theory and give it a new name. See, for example, TDT, UDT, etc.
I agree, but Paul hasn't come up with an improved decision theory, so I don't see why he should invent a new label for a new theory that doesn't exist.