Can Counterfactuals Be True?
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-07-24T04:40:49.000Z · LW · GW
Followup to: Probability is Subjectively Objective
The classic explanation of counterfactuals begins with this distinction:
- If Lee Harvey Oswald didn't shoot John F. Kennedy, then someone else did.
- If Lee Harvey Oswald hadn't shot John F. Kennedy, someone else would have.
In ordinary usage we would agree with the first statement, but not the second (I hope).
If, somehow, we learn the definite fact that Oswald did not shoot Kennedy, then someone else must have done so, since Kennedy was in fact shot.
But if we went back in time and removed Oswald, while leaving everything else the same, then—unless you believe there was a conspiracy—there's no particular reason to believe Kennedy would be shot:
We start by imagining the same historical situation that existed in 1963—by a further act of imagination, we remove Oswald from our vision—we run forward the laws that we think govern the world—visualize Kennedy parading through in his limousine—and find that, in our imagination, no one shoots Kennedy.
It's an interesting question whether counterfactuals can be true or false. We never get to experience them directly.
If we disagree on what would have happened if Oswald hadn't been there, what experiment could we perform to find out which of us is right?
And if the counterfactual is something unphysical—like, "If gravity had stopped working three days ago, the Sun would have exploded"—then there aren't even any alternate histories out there to provide a truth-value.
It's not as simple as saying that if the bucket contains three pebbles, and the pasture contains three sheep, the bucket is true.
Since the counterfactual event only exists in your imagination, how can it be true or false?
So... is it just as fair to say that "If Oswald hadn't shot Kennedy, the Sun would have exploded"?
After all, the event only exists in our imaginations—surely that means it's subjective, so we can say anything we like?
But so long as we have a lawful specification of how counterfactuals are constructed—a lawful computational procedure—then the counterfactual result of removing Oswald, depends entirely on the empirical state of the world.
If there was no conspiracy, then any reasonable computational procedure that simulates removing Oswald's bullet from the course of history, ought to return an answer of Kennedy not getting shot.
"Reasonable!" you say. "Ought!" you say.
But that's not the point; the point is that if you do pick some fixed computational procedure, whether it is reasonable or not, then either it will say that Kennedy gets shot, or not, and what it says will depend on the empirical state of the world. So that, if you tell me, "I believe that this-and-such counterfactual construal, run over Oswald's removal, preserves Kennedy's life", then I can deduce that you don't believe in the conspiracy.
Indeed, so long as we take this computational procedure as fixed, then the actual state of the world (which either does include a conspiracy, or does not) presents a ready truth-value for the output of the counterfactual.
In general, if you give me a fixed computational procedure, like "multiply by 7 and add 5", and then you point to a 6-sided die underneath a cup, and say, "The result-of-procedure is 26!" then it's not hard at all to assign a truth value to this statement. Even if the actual die under the cup only ever takes on the values between 1 and 6, so that "26" is not found anywhere under the cup. The statement is still true if and only if the die is showing 3; that is its empirical truth-condition.
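In code, with the hidden die as the only empirical input, the truth-condition looks like this (a minimal sketch using the post's own numbers):

```python
# A fixed procedure: multiply by 7 and add 5, applied to whatever the hidden die shows.
def procedure(die_face: int) -> int:
    return die_face * 7 + 5

# The claim "the result-of-procedure is 26" is true or false about the world:
# it is true exactly when the die under the cup is showing 3.
def claim_is_true(die_face: int) -> bool:
    return procedure(die_face) == 26

print([face for face in range(1, 7) if claim_is_true(face)])  # [3]
```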
And what about the statement ((3 * 7) + 5) = 26? Where is the truth-condition for that statement located? This I don't know; but I am nonetheless quite confident that it is true. Even though I am not confident that this 'true' means exactly the same thing as the 'true' in "the bucket is 'true' when it contains the same number of pebbles as sheep in the pasture".
So if someone I trust—presumably someone I really trust—tells me, "If Oswald hadn't shot Kennedy, someone else would have", and I believe this statement, then I believe the empirical reality is such as to make the counterfactual computation come out this way. Which would seem to imply the conspiracy. And I will anticipate accordingly.
Or if I find out that there was a conspiracy, then this will confirm the truth-condition of the counterfactual—which might make a bit more sense than saying, "Confirm that the counterfactual is true."
But how do you actually compute a counterfactual? For this you must consult Judea Pearl. Roughly speaking, you perform surgery on graphical models of causal processes; you sever some variables from their ordinary parents and surgically set them to new values, and then recalculate the probability distribution.
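As a rough sketch of the surgery idea (the variable names and numbers below are illustrative assumptions, not anything taken from Pearl's book), a toy version of the Oswald case might look like this:

```python
import itertools

P_CONSPIRACY = 0.01  # assumed prior probability that a conspiracy existed

def p_backup_shooter(conspiracy: bool) -> float:
    # Assumption: a conspiracy would have fielded a backup shooter; a lone gunman has none.
    return 0.9 if conspiracy else 0.0

def prob_kennedy_shot(intervene_oswald=None) -> float:
    """Enumerate the joint distribution over (conspiracy, oswald_shoots, backup_shoots).
    If intervene_oswald is given, sever 'oswald_shoots' from its ordinary causes and
    clamp it to that value (the do-operator), then recompute P(Kennedy shot)."""
    total = 0.0
    for conspiracy, oswald, backup in itertools.product([False, True], repeat=3):
        p = P_CONSPIRACY if conspiracy else 1 - P_CONSPIRACY
        if intervene_oswald is None:
            p *= 0.99 if oswald else 0.01        # in the actual world, Oswald (almost surely) shot
        elif oswald != intervene_oswald:
            continue                             # this world is ruled out by the surgery
        pb = p_backup_shooter(conspiracy)
        p *= pb if backup else 1 - pb
        if oswald or backup:                     # Kennedy is shot if anyone shoots
            total += p
    return total

print(prob_kennedy_shot())                        # about 0.99: Kennedy gets shot
print(prob_kennedy_shot(intervene_oswald=False))  # about 0.009: low, unless you believe the conspiracy
```

If the prior on a conspiracy were high, the same surgery would return a high probability of Kennedy being shot, which is the sense in which the counterfactual's output depends entirely on the empirical state of the world.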
There are other ways of defining counterfactuals, but I confess they all strike me as entirely odd. Even worse, you have philosophers arguing over what the value of a counterfactual really is or really means, as if there were some counterfactual world actually floating out there in the philosophical void. If you think I'm attacking a strawperson here, I invite you to consult the philosophical literature on Newcomb's Problem.
A lot of philosophy seems to me to suffer from "naive philosophical realism"—the belief that philosophical debates are about things that automatically and directly exist as propertied objects floating out there in the void.
You can talk about an ideal computation, or an ideal process, that would ideally be applied to the empirical world. You can talk about your uncertain beliefs about the output of this ideal computation, or the result of the ideal process.
So long as the computation is fixed, and so long as the computation itself is only over actually existent things. Or the results of other computations previously defined—you should not have your computation be over "nearby possible worlds" unless you can tell me how to compute those, as well.
A chief sign of naive philosophical realism is that it does not tell you how to write a computer program that computes the objects of its discussion.
I have yet to see a camera that peers into "nearby possible worlds"—so even after you've analyzed counterfactuals in terms of "nearby possible worlds", I still can't write an AI that computes counterfactuals.
But Judea Pearl tells me just how to compute a counterfactual, given only my beliefs about the actual world.
I strongly privilege the real world that actually exists, and to a slightly lesser degree, logical truths about mathematical objects (preferably finite ones). Anything else you want to talk about, I need to figure out how to describe in terms of the first two—for example, as the output of an ideal computation run over the empirical state of the real universe.
The absence of this requirement as a condition, or at least a goal, of modern philosophy, is one of the primary reasons why modern philosophy is often surprisingly useless in my AI work. I've read whole books about decision theory that take counterfactual distributions as givens, and never tell you how to compute the counterfactuals.
Oh, and to talk about "the probability that John F. Kennedy was shot, given that Lee Harvey Oswald didn't shoot him", we write:
P(Kennedy_shot|Oswald_not)
And to talk about "the probability that John F. Kennedy would have been shot, if Lee Harvey Oswald hadn't shot him", we write:
P(Oswald_not []-> Kennedy_shot)
That little symbol there is supposed to be a box with an arrow coming out of it, but I don't think Unicode has it.
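With made-up numbers, purely for illustration, the two formulas come apart like this:

```python
p_conspiracy = 0.01     # assumed prior on a conspiracy
p_backup_fires = 0.9    # assumed chance a conspiracy's backup shooter still fires

# P(Kennedy_shot | Oswald_not): ordinary conditioning keeps everything else we know,
# in particular that Kennedy was in fact shot, so if Oswald didn't shoot him,
# someone else did, and the probability is essentially 1.
p_indicative = 1.0

# P(Oswald_not []-> Kennedy_shot): the counterfactual discards the downstream fact
# of the shooting, clamps "Oswald shoots" to False, and runs the causal model
# forward; only a conspiracy's backup shooter could still get Kennedy shot.
p_counterfactual = p_conspiracy * p_backup_fires

print(p_indicative, p_counterfactual)   # 1.0 vs 0.009
```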
Part of The Metaethics Sequence
Next post: "Math is Subjunctively Objective"
Previous post: "Existential Angst Factory"
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by Xianhang · 2008-07-24T09:22:42.000Z · LW(p) · GW(p)
But hang on, the foundation of Bayesianism is the counterfactual. P(A|B) = 0.6 means that "If B were true, then P(A) = 0.6 would be true". Where does the truth value of P(A) = 0.6 come from then if we are to accept Bayesianism as correct?
comment by Allan_Crossman · 2008-07-24T11:53:13.000Z · LW(p) · GW(p)
Oh, and to talk about "the probability that John F. Kennedy was shot, given that Lee Harvey Oswald didn't shoot him", we write:
P(Kennedy_shot|Oswald_not)
If I've understood you, this is supposed to be a high value near 1. I'm just a noob at Bayesian analysis or Bayesian anything, so this was confusing me until I realised I also had to include all the other information I know: i.e. all the reports I've heard that Kennedy actually was shot, that someone else became president, and so on.
It seems like this would be a case where it's genuinely helpful to include that background information:
P(Kennedy_shot | Oswald_not & Reports_of_Kennedy_shot) = 1 or thereabouts
And to talk about "the probability that John F. Kennedy would have been shot, if Lee Harvey Oswald hadn't shot him", we write:
P(Oswald_not []-> Kennedy_shot)
Presumably this is the case where we pretend that all that background knowledge has been discarded?
P(Kennedy_shot | Oswald_not & no_knowledge_of_anything_after_October_1963) = 0.05 or something?
comment by Allan_Crossman · 2008-07-24T12:01:31.000Z · LW(p) · GW(p)
Hmm, the second bit I just wrote isn't going to work, I suppose, since your knowledge of what came after the event will affect whether you believe in a conspiracy or not...
comment by Chris_Hibbert · 2008-07-24T13:29:30.000Z · LW(p) · GW(p)
Contrary to your usual practice of including voluminous relevant links, you didn't point to anything specific for Judea Pearl. Let's give this link for his book Causality, which is where people will find the graphical calculus you rely on.
You've mentioned Pearl before, but haven't blogged the details. Do you expect to digest Pearl's graphical approach into something OB-readers will be able to understand in one sitting at some point? That would be a real service, imho.
↑ comment by buybuydandavis · 2011-10-18T09:53:23.809Z · LW(p) · GW(p)
There is a second edition of the book now, with substantial updates, that came out in 2009.
comment by RobinHanson · 2008-07-24T13:55:33.000Z · LW(p) · GW(p)
But Judea Pearl tells me just how to compute a counterfactual, given only my beliefs about the actual world.
With Xianhang, I don't understand how you can talk about probabilities without talking about several possible worlds.
comment by Constant2 · 2008-07-24T14:16:03.000Z · LW(p) · GW(p)
how you can talk about probabilities without talking about several possible worlds
But if probability is in the mind, and the mind in question is in this world, why are other worlds needed? Moreover (from wikipedia):
In Bayesian theory, the assessment of probability can be approached in several ways. One is based on betting: the degree of belief in a proposition is reflected in the odds that the assessor is willing to bet on the success of a trial of its truth.
Disposition to bet surely does not require a commitment to possible worlds.
comment by Laura B (Lara_Foster) · 2008-07-24T16:06:24.000Z · LW(p) · GW(p)
Eliezer: "A lot of philosophy seems to me to suffer from "naive philosophical realism" - the belief that philosophical debates are about things that automatically and directly exist as propertied objects floating out there in the void."
Well, we do this since we can't fully apprehend reality as a whole, and so must break it down into more manageable components.
I can tell my husband, "If you had not fed the cat, then I would have fed her," because I had such an intention from before the cat was fed, and I remember having said intention.
However, can I really ever know what would have happened if my husband had not fed the cat? Maybe the sun would have exploded, because reality is deterministic, and there was nothing else that could have happened, and such a blatant violation of causality would have blown open the fabric of time...
But this is not an at all useful way of thinking about the world. If we submit to fatalism then there isn't much point in trying to determine anything ourselves, including how to create an FAI and live forever... If we do it, we do it.
Likewise, counterfactuals are important for considering how different variables in our actions shape their outcomes.
"If you had hit me, I would have left you and not come back." Could be true... could be false... What's important? Don't hit her next time!
"If Oswald had not shot Kennedy, then someone else would have." What's important? We need to be very careful in protecting the safety of progressive politicians in the future, like Obama, so they don't go the way of the Kennedy... After all, it happened to Bobby too.
comment by michael_vassar3 · 2008-07-24T16:35:45.000Z · LW(p) · GW(p)
"But this is not an at all useful way of thinking about the world. If we submit to fatalism then there isn't much point in trying to determine anything ourselves, including how to create an fAI and live forever... If we do it, we do it."
If we do it we do it BECAUSE we try to do it, BECAUSE we try to determine it, etc. The archives here in 2008 are largely about how to deal with living in a lawful universe.
comment by Silas · 2008-07-24T16:39:24.000Z · LW(p) · GW(p)
"If the federal government hadn't bought so much stuff from GM, GM would be a lot smaller today." "If the federal government hadn't bought so much stuff from GM, GM would have instead been tooling up to produce stuff other buyers did want and thus could very well have become successful that way."
???
comment by HalFinney · 2008-07-24T18:42:49.000Z · LW(p) · GW(p)
The problem with imagining Oswald not killing Kennedy is that we have to figure why he didn't do so. He did it for a reason. Was that reason absent? Was Oswald different in some way?
There is a chain of events before the one that we imagine changed, just as there is a chain of events after. And when we snip out that one event, it would have implications backwards in time as well as forwards. I guess this is where this Pearl model is supposed to tell us how to think about snipping out that event, with minimal backwards-in-time changes - but I can't help wondering if that isn't an arbitrary measure. Why not snip out the event with minimal forwards-in-time changes? Or why not minimize the sum of forwards and backwards changes? That last metric would surely lead to Kennedy being killed in much the same way by another person, since Kennedy living would likely have led to enormous future changes.
In more detail, suppose there was in fact no conspiracy and Oswald was a lone, self-motivated individual. It might still turn out that the simplest way to imagine what would have happened if Oswald had not killed Kennedy, would be to imagine that there was in fact a conspiracy, and they found someone else, who did the job in the same way. That would arguably be the change which would minimize total forward and backward alterations to the timeline.
comment by Laura B (Lara_Foster) · 2008-07-24T18:44:28.000Z · LW(p) · GW(p)
Michael- my main point was that having ways of thinking about the world that are not the world itself are still useful, while trying to only see the world as it is and nothing else is not.
Yes- it's useful BECAUSE it makes us try to do it, which if we do it, we do it BECAUSE we try to determine it, which we would NOT do if we didn't consider the world as it could be, but only as it has been and as it will be.
comment by Allan_Crossman · 2008-07-24T21:04:23.000Z · LW(p) · GW(p)
In more detail, suppose there was in fact no conspiracy and Oswald was a lone, self-motivated individual. It might still turn out that the simplest way to imagine what would have happened if Oswald had not killed Kennedy, would be to imagine that there was in fact a conspiracy, and they found someone else, who did the job in the same way. That would arguably be the change which would minimize total forward and backward alterations to the timeline.
Hal: what you describe is called "backtracking" in the philosophical literature. It's not usually seen as legitimate, I think mostly because it doesn't correspond to what a sentence like "if X had occurred, then Y would have occurred" actually means in normal usage.
I mean, it's a really weird analysis that says "there really was no conspiracy, so if Oswald hadn't shot Kennedy, there would have been a conspiracy, and Kennedy would have been shot." :)
comment by Everett (Stephen_Weeks) · 2008-07-24T21:29:39.000Z · LW(p) · GW(p)
You could always just juxtapose a box and an arrow: □→
comment by Caledonian2 · 2008-07-24T23:19:57.000Z · LW(p) · GW(p)
The archives here in 2008 are largely about how to deal with living in a lawful universe.
But isn't that much of the problem? We don't live in a lawful universe - or rather, 'lawful' doesn't mean what Eliezer thinks it does.
comment by TGGP4 · 2008-07-25T02:24:09.000Z · LW(p) · GW(p)
Lara Foster, consider the possibility that Obama's assassination would be a boon to progressivism, as JFK's death was.
Caledonian, Eliezer has discussed the lawful nature of the universe many times before. If you have some specific disagreement, I find it odd you did not express it then. Could you elaborate?
comment by Kenny_Easwaran2 · 2008-07-25T07:33:42.000Z · LW(p) · GW(p)
In this sort of case the motivation for the philosophical realism is that there is a real grammatical construction that's actually used in the world, and we're wondering what its actual truth-conditions are. Maybe the best model for the semantics of this construction will involve graphical models a la Pearl. Or maybe the best semantics will require that we postulate things called possible worlds that have certain ordering properties. In either case, there will then still be a question whether the sentences ever come out true on the best semantics, or whether we're implicitly committed to certain falsehoods every time we utter the sentence. But it's far from obvious that the proper account of counterfactuals in ordinary language is in terms of some sort of computational procedure.
And actually, indicative conditionals seem to be even more problematic than counterfactuals. There's lots of explaining you need to do to just assimilate them to the material conditional, and if you try to connect them to conditional probabilities then you have to say what truth-values between 0 and 1 mean, and get around all the triviality proofs.
Conditionals in general are quite hard to deal with. But this doesn't mean you can just get away with dismissing them and say that they're never true, because of some naive commitment to everything mentioned in your semantics having to be observable in some nice direct way. If they're not true, then you have to do lots of work to say just what it is that we're doing when we go around uttering them.
comment by Caledonian2 · 2008-07-25T10:57:08.000Z · LW(p) · GW(p)
If you have some specific disagreement, I find it odd you did not express it then.
I have expressed it, previously.
And when the points have come up before, I've criticized them. Eliezer seems to have a very deep need for known absolutes. But the 'absolutes' he references are contingent and uncertain. For example, he frequently conflates the nature of the universe and our ideas about what the nature of the universe is. The first is consistent and universal, while the second is not, but he persists in speaking as though we had access to eternal truth. All we have is our experiences and our attempts to account for them.
Or look at this case, in which he confuses ultimate reality and our attempts to model it. The question he's asking is absurd, because the concept he's using is meaningless outside of the context it's defined in. "If X, then Y" statements are dependent upon our models - they can be said to be true to the degree that they express the output of our understanding. "If I drop an egg on the floor, then it will break" is an essentially-accurate claim even if I don't get around to dropping an egg.
comment by Laura B (Lara_Foster) · 2008-07-25T15:37:19.000Z · LW(p) · GW(p)
TGGP- While JFK's assassination may or may not (LBJ???) have been good for progressivism, RFK's was certainly NOT. Nixon won, and then we had drug schedules, and watergate, and all that bullshit...
Here's a counterfactual to consider: What would the world have been like if Bobby Kennedy had been president instead of Nixon?
Still think it would be a good thing for progressivism if Obama is shot and McCain becomes prez?
comment by J_Thomas2 · 2008-07-26T15:28:59.000Z · LW(p) · GW(p)
Lots of counterfactuals are statements about how we think the world is connected together. We tell stories. We make analogies and metaphors.
Do you think Oswald killed Kennedy because he had free will and he chose to independent of anything else? If Kennedy died because somebody powerful and important wanted to have him killed, then if Oswald hadn't taken the job somebody else surely would have. Maybe somebody else did and we picked up Oswald by mistake.
If Oswald did it for reasons that might influence other people too, then somebody else might very well have been influenced similarly to Oswald. Were there 5 independent lone nuts ready to kill Kennedy? 50? 500? We don't know because the first one to do it, succeeded. If we could re-run the experiment a hundred times and it was Oswald 20% of the time versus Oswald 100% of the time, that would tell us something. If it was Oswald 30% of the time and nobody 70%, that would tell us something else. But we can't rerun the experiment even once.
We can say what we think might have happened, based on our experience with other things applied as similes and metaphors to that one. It isn't reliable. But it's central to the way we think all the time.
comment by J_Thomas2 · 2008-07-26T15:33:47.000Z · LW(p) · GW(p)
Lara Foster, consider the possibility that Obama's assassination would be a boon to progressivism, as JFK's death was.
Where is your evidence that JFK's death helped "progressivism"?
How do you know what would have happened to progressivism if JFK had died considerably later?
Isn't this an example of the fallacy we're talking about?
↑ comment by waveman · 2016-06-29T22:58:33.235Z · LW(p) · GW(p)
I think the idea here is likely that JFK was not very effective in getting his legislation through congress. When Johnson took over, he ran with the same agenda more or less, but he was much more effective in getting laws through congress.
comment by Ilya_Shpitser · 2008-07-27T01:54:55.000Z · LW(p) · GW(p)
"But Judea Pearl tells me just how to compute a counterfactual, given only my beliefs about the actual world."
This is actually a subtle issue. The procedure given in the book assumes (a) full knowledge of the precise causal mechanisms (which you never know in practice) and (b) the distribution over all unobserved variables (which you don't know by definition). Surprisingly, it is possible to compute certain counterfactuals using ONLY the distribution over observable variables (which is typically what you get). You can check my thesis for details if you wish.
comment by TheOtherDave · 2010-11-08T17:12:22.602Z · LW(p) · GW(p)
Completely tangential: many, many years ago, shortly after reading Hofstadter's discussion of counterfactuals in GEB that starts with him driving through a cloud of bees, I accidentally dropped a bowl on the ground.
The bowl shattered into fragments, and I said "Well, it's a good thing I'm me instead of being that bowl!"
It was quite some time before I lived that down.
comment by FrankAdamek · 2010-12-01T02:52:37.299Z · LW(p) · GW(p)
I was somewhat confused by this post, or rather by its language. I resolved my confusion by applying more quantitative reasoning to it. Yet my revised view is the largest deviation from (the appearance of) Eliezer's message I've yet had that has stuck around; past such differences revealed themselves to be flawed under more reflection. (If this view had been internalized it could generate the wording in the post, but the wording of the post doesn't seem to strongly imply it.) I know that going forward I'm going to hold my revised interpretation unless some other evidence presents itself, so I'm going to stick my neck out here in case I am mistaken and someone can correct me.
But so long as we have a lawful specification of how counterfactuals are constructed - a lawful computational procedure - then the counterfactual result of removing Oswald, depends entirely on the empirical state of the world.
A counterfactual does not depend (directly) on the actual state of the world, but on one's model of the world. Given a model of the world and a method of calculating counterfactuals we can say whether a counterfactual is mathematically or logically correct. But like the phrase "'snow is white' is true," or "the bucket is true," we can also put forward the proposition that the actual world corresponds to such a state, that our models match up to reality, and ask how likely it is that this proposition is true, with probabilistic beliefs. So we can assign probabilities to the proposition "'If Oswald hadn't shot Kennedy, Kennedy would not have died' is true."
From the post Qualitatively Confused:
To make a long story short, it turns out that there's a very natural way of scoring the accuracy of a probability assignment, as compared to reality: just take the logarithm of the probability assigned to the real state of affairs.
(emphasis added)
We never actually receive "confirmation" that "snow is white," at least in the sense of obtaining a probability of exactly 1 that "'snow is white' is true". Likewise, we never receive confirmation that a counterfactual is true; we just increase the probabilities we assign to it being true.
(I worked out some simple math that allows you to score a probabilistic expectation even if you can't gain full confirmation of what happened, which does nice things like having the same expected score of the basic model, but I won't go into it here, assuming that such math already exists and wasn't the point. I don't disagree with presenting the simpler model, as it works just fine.)
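For concreteness, the quoted scoring rule, with assumed example numbers, is just:

```python
import math

def log_score(prob_assigned_to_what_actually_happened: float) -> float:
    # Higher (closer to zero) is better; assigning probability 1 to reality scores 0.
    return math.log(prob_assigned_to_what_actually_happened)

print(log_score(0.9))   # about -0.105: confident and right
print(log_score(0.1))   # about -2.303: confident and wrong
```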
comment by drnickbone · 2012-03-16T17:14:08.196Z · LW(p) · GW(p)
A small comment... Pearl's treatment works fine for "forward-tracking" counterfactuals, where the only allowed changes in the counterfactual world are in the future of the change (i.e. after the point of surgery). However, regular counterfactuals require a bit of "back-tracking" to make the counterfactual scenario plausible in the first place.
Consider these statements:
- "If Gore had been president during 9/11 he'd have reacted differently, and the US wouldn't have invaded Iraq"
- "If Gore had been president during 9/11 then he'd have been sworn in as president back in January 2001"
- "If Gore had been president during 9/11 he'd have been blinking in shock at having suddenly teleported to Air Force One, and wondering why everyone was calling him Mr President"
- "If Gore had been president during 9/11 he'd have been recently sworn in as president, owing to changing party, Dick Cheney dying of a heart attack, Bush appointing him as vice-president, and then himself dying of shock".
Statement 1 is a forward-tracking counterfactual and arguably true. But the usual way of understanding it to be true assumes that 2 is also true, which is a back-tracking counterfactual (it involves re-writing approximately a year of history before 9/11).
Statement 3 is the only statement consistent with no back-tracking at all, and corresponds to Pearl's approach of performing surgery on the causal graph at 9/11. (This is generally physically impossible, or at least physically absurd, since it involves tearing the graph apart and inserting a new state with no causal relation to the previous states.)
Statement 4 is an odd sort of compromise; it's at least not physically impossible or absurd, and involves the bare minimum of back-tracking (to a political crisis a few days before 9/11). But it is clearly not the best way to understand the counterfactual.
↑ comment by [deleted] · 2012-03-16T18:22:32.293Z · LW(p) · GW(p)
My feeling is this is only a problem with expressing counterfactuals in English. If one did have a causal model of American history, 2000-2001, and one wanted to implement counterfactual 1, performing surgery at 9/11 would be unsound, for the reasons you state. The joint probability of the ancestors of 9/11 after such a transformation would all be very small indeed, relative to whatever vastly improbable events were necessary to transition Al Gore circa 9/10 to president during 9/11.
Is this an actual limitation of the calculus, though? Are there counterfactuals that are well-posed, but require an indefinite amount of "back-tracking"?
↑ comment by drnickbone · 2012-03-16T22:19:39.163Z · LW(p) · GW(p)
I'm not sure the problem is with English...
The issue arises whenever we have a causal model with a large number of micro-states, and the antecedent of a counterfactual can only be realised in worlds which change lots of different micro-states. The most "natural" way of thinking about the counterfactual in that case is still to make a minimal change (to one single micro state e.g. a particle decaying somewhere, or an atom shifting an angstrom somewhere) and to make it sufficiently far back in time to make a difference. (In the Gore case, in the brain of whoever thought up the butterfly ballot, or perhaps in the brain of a justice of the Supreme Court.) The problem with Pearl's calculus though is that it doesn't do that.
Here's a toy model to demonstrate (no English). Consider the following set of structural equations (among Boolean micro state variables):
X = 0
Y_1 = X, Y_2 = X, ..., Y_10^30 = X
The model is deterministic so P[X = 0] = 1.
Next we define a "macro-state" variable Z := (Y_1 + Y_2 + ... + Y_10^30) / 10^30. Plainly in the actual outcome Z = 0 and indeed P[Z = 0] = 1.
But what if Z were equal to 1?
My understanding of Pearl's semantics is that to evaluate this we have to intervene i.e. do(Z = 1) and this is equivalent to the multi-point intervention do(Y_1 = 1 & Y_2 = 1 & ... & Y_10^30 = 1). This is achieved by replacing every structural equation between X and Y_i by the static equation Y_i = 1.
Importantly, it is NOT achieved by the single-point intervention X = 1, even though that is probably the most "natural" way to realise the counterfactual. So in Pearl's notation, we must have ~X_(Z = 1), or in probabilistic terms P[X = 0 | do(Z = 1)] = 1. Which, to be frank, seems wrong.
And we can't "fix" this in Pearl's semantics by choosing the alternative surgery (X = 1) because if P[X = 1 | do(Z = 1)] = 1 that would imply in Pearl's semantics that X is caused by the Yi, rather than the other way round, which is clearly wrong since it contradicts the original causal graph. Worse, even if we introduce some ambiguity, saying that X might change under the intervention do(Z = 1), then we will still have P[X = 1 | do(Z = 1)] > 0 = P[X = 1] and this is enough to imply a probabilistic causal link from the Y_i to X which is still contrary to the causal graph.
So I think this is a case where Pearl's analysis gets it wrong.
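Here is a small sketch of the toy model above (scaled down from 10^30 to 5 copies of Y, which doesn't change the logic), showing what the surgery semantics gives:

```python
N = 5  # stand-in for 10^30

def run_model(intervene_all_y=None):
    """Structural equations: X = 0; Y_i = X for all i; Z = average of the Y_i.
    do(Z = 1), i.e. do(Y_i = 1 for all i), severs each Y_i from X and clamps it to 1;
    X's own equation is untouched by the surgery."""
    x = 0
    ys = [x] * N if intervene_all_y is None else [intervene_all_y] * N
    z = sum(ys) / N
    return x, z

print(run_model())                   # (0, 0.0): the actual world
print(run_model(intervene_all_y=1))  # (0, 1.0): under do(Z = 1), X stays 0
```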
↑ comment by [deleted] · 2012-03-17T01:22:03.026Z · LW(p) · GW(p)
Before I analyze this apparent paradox in any depth, I want to be sure I understand your criticism. There are three things about this comment on which I am unclear.
1.) The number of states cannot be relevant to the paradox from a theoretical standpoint, because nothing in Pearl's calculus depends on the number of states. If this does pose a problem, it only poses a problem in so far as it creates an apparent paradox, that is, whatever algorithm humans use to parse the counterfactual "What if Z were 1?" is different from the Pearl's calculus. A priori, this is not a dealbreaker, unless it can also be shown the human algorithm does better.
2.) If Yi = X, then there is a causal link between Yi and X. Indeed, there is a causal link between every X and every Yi. Conditioning on any of the Yi immediately fixes the value of every other variable.
3.) You say the problem isn't with English, but then talk about "the most natural way to realize a counterfactual." I don't know what that means, other than as an artifact of the human causal learning algorithm.
Or am I misunderstanding you completely?
↑ comment by drnickbone · 2012-03-17T16:48:13.739Z · LW(p) · GW(p)
Thanks for taking the time to think/comment. It may help us to fix a reference which describes Pearl's thinking and his calculus. There are several of his papers available online, but this one is pretty comprehensive: ftp://ftp.cs.ucla.edu/pub/stat_ser/r284-reprint.pdf "Bayesianism and Causality, Or, Why I am only a Half-Bayesian".
Now onto your points:
1) You are correct that nothing in Pearl's calculus varies depending on the number of variables Yi which causally depend on X. For any number of Yi, the intervention do(Z = 1) breaks all the links between X and the Yi and doesn't change the value of X at all. Also, there is no "paradox" within Pearl's calculus here: it is internally consistent.
The real difficulty is that the calculus just doesn't work as a full conceptual analysis of counterfactuals, and this becomes increasingly clear the more Yi variables we add. It is a bit unfortunate, because while the calculus is elegant in its own terms, it does appear that conceptual analysis is what Pearl was attempting. He really did intend his "do" calculus to reflect how we usually understand counterfactuals, only stated more precisely. Pearl was not consciously proposing a "revisionist" account to the effect: "This is how I'm going to define counterfactuals for the sake of getting some math to work. If your existing definition or intuition about counterfactuals doesn't match that definition, then sorry, but it still won't affect my definition." Accordingly, it doesn't help to say "Regular intuitions say one thing, Pearl's calculus says another, but the calculus is better, therefore the calculus is right and intuitions are wrong". You can get away with that in revisionist accounts/definitions but not in regular conceptual analysis.
2) The structural equations do indeed imply there is a causal link from the X to the Yi. But there is NO causal link in the opposite direction from the Yi to the X, or from any Yi to any Yj. The causal graph is directed, and the structural equations are asymmetric. Note that in Pearl's models, the structural equation Yi = X is different from the reverse structural equation X = Yi, even though in regular logic and probability theory these are equivalent. This point is really quite essential to Pearl's treatment, and is made clear by the referenced document.
3) See point 1. Pearl's calculus is trying to analyse counterfactuals (and causal relations) as we usually understand them, not to propose a revisionist account. So evidence about how we (naturally) interpret counterfactuals (in both the Gore case and the X, Y case) is entirely relevant here.
Incidentally, if you want my one sentence view, I'd say that Pearl is correctly analysing a certain sort of counterfactual but not the general sort he thinks he is analysing. Consider these two counterfactuals:
If A were to happen, then B would happen.
If A were to be made to happen (by outside intervention) then B would happen.
I believe that these are different counterfactuals, with different antecedents, and so they can have different truth values. It looks to me like Pearl's "do" calculus correctly analyses the second sort of counterfactual, but not the first.
(Edited this comment to fix typos and a broken reference.)
↑ comment by [deleted] · 2012-03-17T20:30:47.975Z · LW(p) · GW(p)
"Bayesianism and Causality, Or, Why I am only a Half-Bayesian".
As a (mostly irrelevant) side note, this is Pearl_2001, who is a very different person from Pearl_2012.
Also, there is no "paradox" within Pearl's calculus here: it is internally consistent.
I'm using the word paradox in the sense of "puzzling conclusion", not "logical inconsistency." Hence "apparent paradox", which can't make sense in the context of the latter definition.
It is a bit unfortunate, because while the calculus is elegant in its own terms, it does appear that conceptual analysis is what Pearl was attempting. He really did intend his "do" calculus to reflect how we usually understand counterfactuals, only stated more precisely. Pearl was not consciously proposing a "revisionist" account to the effect: "This is how I'm going to define counterfactuals for the sake of getting some math to work. If your existing definition or intuition about counterfactuals doesn't match that definition, then sorry, but it still won't affect my definition."
The human causal algorithm is frequently, horrifically, wrong. A theory that attempts to model it is, I heavily suspect, less accurate than Pearl's theory as it stands, at least because it will frequently prefer to use the post hoc inference when it is more appropriate to infer a mutual cause.
Accordingly, it doesn't help to say "Regular intuitions say one thing, Pearl's calculus says another, but the calculus is better, therefore the calculus is right and intuitions are wrong". You can get away with that in revisionist accounts/definitions but not in regular conceptual analysis.
No, I didn't say that. In my earlier comments I wondered under what conditions the "natural" interpretation of counterfactuals was preferable. If regular intuition disagrees with Pearl, there are at least two possibilities: intuition is wrong (i.e., a bias exists) or Pearl's calculus does worse than intuition, which means the calculus needs to be updated. In a sense, the calculus is already a "revisionist" account of the human causal learning algorithm, though I disapprove of the connotations of "revisionist" and believe they don't apply here.
But there is NO causal link in the opposite direction from the Yi to the X, or from any Yi to any Yj. The causal graph is directed, and the structural equations are asymmetric.
Yes, but my question here was whether or not the graph model was accurate. Purely deterministic graph models are weird in that they are observationally equivalent not just with other graphs with the same v-structure, but with any graph with the same skeleton, and even worse, one can always add an arrow connecting the ends of any path. I understand better now that the only purpose behind a deterministic graph model is to fix one out of this vast set of observationally equivalent models. I was confused by the plethora of observationally equivalent deterministic graph models.
Incidentally, if you want my one sentence view, I'd say that Pearl is correctly analysing a certain sort of counterfactual but not the general sort he thinks he is analysing. Consider these two counterfactuals:
If A were to happen, then B would happen.
If A were to be made to happen (by outside intervention) then B would happen.
As far as I can tell, the first is given by P(B | A), and the second is P(B_A). Am I missing something really fundamental here?
I've done the calculations for your model, but I'm going to put them in a different comment to separate out mathematical issues from philosophical ones. This comment is already too long.
↑ comment by drnickbone · 2012-03-17T21:35:39.444Z · LW(p) · GW(p)
Couple of points. You say that "the human causal algorithm is frequently, horrifically, wrong".
But remember here that we are discussing the human counterfactual algorithm, and my understanding of the experimental evidence re counterfactual reasoning (e.g. on cases like Kennedy or Gore) is that it is pretty consistent across human subjects, and between "naive" subjects (taken straight off the street) vs "expert" subjects (who have been thinking seriously about the matter). There is also quite a lot of consistency on what constitutes a "plausible" versus a "far out" counterfactual, and a much stronger sense about what happens in the cases with plausible antecedents than in cases with weird antecedents (such as what Caesar would have done if fighting in Korea). It's also interesting that there are rather a lot of formal analyses which almost match the human algorithm, but not quite, and that there is quite a lot of consensus on the counter examples (that they genuinely are counter examples, and that the formal analysis gets it wrong).
What pretty much everyone agrees is that counterfactuals involving macro-variable antecedents assume some back-tracking before the time of the antecedent, and that a small micro-state change to set up the antecedent is more plausible than a sudden macro-change which involves breaks across multiple micro-states.
And on your other point, simple conditioning P(B | A) gives results more like the indicative conditional ("If Oswald did not shoot Kennedy, then someone else did") rather than the counterfactual conditional ("If Oswald had not shot Kennedy, then no-one else would have") .
↑ comment by [deleted] · 2012-03-17T20:47:56.606Z · LW(p) · GW(p)
Okay. So according to Causality (first edition, cause I'm poor), Theorem 7.1.7, the algorithm for calculating the counterfactual P((Y = y)_(X = x) | e) -- which represents the statement "If X were x, then Y would be y, given evidence e" -- has three stages:
- Abduction: use the probability distribution P(x, y | E = e).
- Action: perform do(X = x).
- Prediction: calculate p(Y = y) relative to the new graph model and its updated joint probability distribution.
In our specific case, we want to calculate P (X = 0_(Z = 1)). There's no evidence to condition on, so abduction does nothing.
To perform do(Z = 1), we delete every arrow pointing from the Yi's to Z. The new probability distribution, p(x, yi | do(Z = 1)) is given by p(x, yi, 1) when z = 1 and zero otherwise. Since the original probability distribution assigned probability one only to the state (x = 0, yi = 0, z = 0), the new probability distribution is uniformly zero.
I now no longer follow your calculation of P(X=0_(Z=1)). In particular:
My understanding of Pearl's semantics is that to evaluate this we have to intervene i.e. do(Z = 1) and this is equivalent to the multi-point intervention do(Y1 = 1 & Y2 = 1 & ... & Y10^30 = 1). This is achieved by replacing every structural equation between X and Yi by the static equation Y_i = 1.
The intervention do(Z = 1) does not manipulate the Yi. The formula I used to calculate p(X = 0 | do(Z = 1)) is the truncated factorization formula given in section 3.2.3.
I suddenly wish I had sat down and calculated this out first, rather than argue from principles. I hear my mother's voice in the background telling me to "do the math," as is her habit.
↑ comment by drnickbone · 2012-03-17T21:02:29.648Z · LW(p) · GW(p)
You missed the point here that Z is a "macro-state" variable, which is defined to be the average of the Yi variables.
It is not actually a separate variable on the causal graph, and it is not caused by the Yi variables. This means that the intervention do(Z = 1) can only be realised on the causal graph by do(Y_1 = 1, Y_2 = 1, ..., Y_10^30 = 1) which was what I stated a few posts ago. You are correct that the abduction step is not needed as this is a deterministic example.
↑ comment by [deleted] · 2012-03-17T21:18:31.014Z · LW(p) · GW(p)
Then why is P( X = 1 | do(Yi = 1) ) = 1? If I delete from the graph every arrow entering each Yi, I'm left with a graph empty of edges; the new joint pdf is still uniformly zero.
↑ comment by drnickbone · 2012-03-17T23:02:32.047Z · LW(p) · GW(p)
In Pearl's calculus, it isn't!
If you look back at my above posts, I deduce that in Pearl's calculus we will get P[X = 0 | do(Z = 1)] = P[X = 0 | do(Yi = 1 for all i)] = 1. We agree here with what Pearl's calculus says.
The problem is that the counterfactual interpretation of this is "If the average value of the Yi were 1, then X would have been 0". And that seems plain implausible as a counterfactual. The much more plausible counterfactual backtracks to change X, allowing all the Yi to change to 1 through a single change in the causal graph, namely "If the average value of the Yi were 1, then X would have been 1".
Notice the analogy to the Gore counterfactual. If Gore were president on 9/11, he wouldn't suddenly have become president (the equivalent of a mass deletion of all the causal links to the Yi). No, he would have been president since January, because of a micro-change the previous Fall (equivalent to a backtracked change to the X). I believe you agreed that the Gore counterfactual needs to backtrack to make sense, so you agree with backtracking in principle? In that case, you should disagree with the Pearl treatment of counterfactuals, since they never backtrack (they can't).
↑ comment by [deleted] · 2012-03-17T23:56:50.138Z · LW(p) · GW(p)
If you look back at my above posts, I deduce that in Pearl's calculus we will get P[X = 0 | do (Z = 1)] = P[X = 0 | do(Yi = 1 for all i)] = 1. We agree here with what Pearl's calculus says.
No, we disagree. My calculations suggest that P[X = 0 | do(Yi = 1 for all i)] = P[X = 1 | do(Yi = 1 for all i)] = 0. The intervention falls outside the region where the original joint pdf has positive mass. The intervention do(X = 1) also annihilates the original joint pdf, because there is no region of positive mass in which X = 1.
I still don't understand why you don't think the problem is a language problem. Pearl's counterfactuals have a specific meaning, so of course they don't mean something else from what they mean, even if the other meaning is a more plausible interpretation of the counterfactual (again, whatever that means -- I'm still not sure what "more plausible" is supposed to mean theoretically).
The problem is that the counterfactual interpretation of this is "If the average value of the Yi were 1, then X would have been 0". And that seems plain implausible as a counterfactual. The much more plausible counterfactual backtracks to change X, allowing all the Yi to change to 1 through a single change in the causal graph, namely "If the average value of the Yi were 1, then X would have been 1".
I think the problem is that when you intervene to make something impossible happen, the resulting system no longer makes sense.
I believe you agreed that the Gore counterfactual needs to backtrack to make sense, so you agree with backtracking in principle?
Yes. (I assume you mean "If Gore was president during 9/11, he wouldn't have invaded Iraq.")
In that case, you should disagree with the Pearl treatment of counterfactuals, since they never backtrack (they can't).
Why should I disagree with Pearl's treatment of counterfactuals that don't backtrack?
Isn't the decision of whether or not a given counterfactual backtracks in its most "natural" interpretation largely a linguistic problem?
↑ comment by drnickbone · 2012-03-19T23:36:36.433Z · LW(p) · GW(p)
No, we disagree. My calculations suggest that P[X = 0 | do(Yi = 1 for all i)] = P[X = 1 | do(Yi = 1 for all i)] = 0. The intervention falls outside the region where the original joint pdf has positive mass. The intervention do(X = 1) also annihilates the original joint pdf, because there is no region of positive mass in which X = 1.
I don't think that's correct. My understanding of the intervention do(Yi = 1 for all i) is that it creates a disconnected graph, in which the Yi all have the value 1 (as stipulated by the intervention) but X retains its original mass function P[X = 0] = 1. The causal links from X to the Yi are severed by the intervention, so it doesn't matter that the intervention has zero probability in the original graph, since the intervention creates a new graph. (Interventions into deterministic systems often will have zero probability in the original system, though not in the intervened one.) On the other hand, you claim to be following Pearl_2012 whereas I've been reading Pearl_2001 and there might have been some differences in his treatment of impossible interventions... I'll check this out.
For now, just suppose the original distribution over X was P[X = 0] = 1 - epsilon and P[X = 1] = epsilon for a very small epsilon. Would you agree that the intervention do(Yi = 1 for all i) now is in the area of positive mass function, but still doesn't change the distribution over X so we still have P[X = 0 | do(Yi = 1 for all i)] = 1 - epsilon and P[X = 1 | do(Yi = 1 for all i)] = epsilon?
Isn't the decision of whether or not a given counterfactual backtracks in its most "natural" interpretation largely a linguistic problem?
I still think it's a conceptual analysis problem rather than a linguistic problem. However perhaps we should play the taboo game on "linguistic" and "conceptual" since it seems we mean different things by them (and possibly what you mean by "linguistic" is close to what I mean by "conceptual" at least where we are talking about concepts expressed in English).
Thanks anyway.
↑ comment by [deleted] · 2012-03-20T00:23:44.805Z · LW(p) · GW(p)
You seem to be done, so I won't belabor things further; I just want to point out that I didn't claim to have a more updated copy of Pearl (in fact, I said the opposite). I doubt there's been any change to his algorithm.
All this ASCII math is confusing the heck out of me, anyway.
EDIT: Oh, dear. I see how horribly wrong I was now. The version of the formula I was looking at said "(formula) for (un-intervened variables) consistent with (intervention), and zero otherwise" and because it was a deterministic system my mind conflated the two kinds of consistency. I'm really sorry to have blown a lot of your free time on my own incompetence.
↑ comment by drnickbone · 2012-03-20T17:53:45.362Z · LW(p) · GW(p)
Thanks for that.... You just saved me a few hours additional research on Pearl to find out whether I'd got it wrong (and misapplied the calculus for interventions that are impossible in the original system)!
Incidentally, I'm quite a fan of Pearl's work, and think there should be ways to adjust the calculus to allow reasonable backtracking counterfactuals as well as forward-tracking ones (i.e. ways to find a minimal intervention further back in the graph, one which then makes the antecedent come out true..) But that's probably worth a separate post, and I'm not ready for it yet.
comment by aspera · 2013-10-03T06:42:45.948Z · LW(p) · GW(p)
As usual, I'm late to the discussion.
The probability that a counterfactual is true should be handled with the same probabilistic machinery we always use. Once the set of prior information is defined, it can be computed as usual with Bayes. The confusing point seems to be that the prior information is contrary to what actually occurred, but there's no reason this should be different than any other case with limited prior information.
For example, suppose I drop a glass above a marble floor. Define:
sh = “my glass shattered”
f = “the glass fell to the floor under the influence of gravity”
and define sh_0 and f_0 as the negations of these statements. We wish to compute
P(sh_0|f_0,Q) = P(sh_0|Q)P(f_0|sh_0,Q)/P(f_0|Q),
where Q is all other prior information, including my understanding of physics. As long as these terms exist, we have no problem. The confusion seems to stem from the assumption that P(f_0|sh_0,Q) = P(f_0|Q) = 0, since f_0 is contrary to our observations, and in this case seemingly mutually exclusive with Q.
But probability is in the mind. From the perspective of an observer at the moment the glass is dropped, P(f_0|Q) at least includes cases in which she is living in the Matrix, or aliens have harnessed the glass in a tractor beam. Both of these cases hold finite probability consistent with Q. From the perspective of someone remembering the observed event, P(f_0|Q) might include cases in which her memory is not trustworthy.
In the usual colloquial case, we’re taking the perspective of someone running a thought experiment on a historical event with limited information about history and physics. The glass-dropping case limits the possible cases covered by P(f_0|Q) considerably, but the Kennedy-assassination case leaves a good many of them open. All terms are well defined in Bayes’ rule above, and I see no problem with computing in principle the probability of the counterfactual being true.
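With some assumed numbers, purely illustrative, the computation goes through:

```python
# sh_0 = "my glass did not shatter", f_0 = "the glass did not fall"
p_sh0 = 0.02            # assumed: small chance a dropped glass survives anyway
p_f0 = 1e-6             # assumed: tiny chance it never fell (Matrix, tractor beams, ...)
p_f0_given_sh0 = p_f0 / p_sh0   # assumed: not falling (almost) guarantees not shattering,
                                # so essentially all of f_0's probability mass lies inside sh_0

p_sh0_given_f0 = p_sh0 * p_f0_given_sh0 / p_f0   # Bayes' rule, as written above
print(p_sh0_given_f0)   # ~1: had the glass not fallen, it (almost certainly) wouldn't have shattered
```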
comment by [deleted] · 2014-06-13T08:15:10.271Z · LW(p) · GW(p)
Who cares if they are true, unless by "true" you mean useful, i.e. having predictive validity. In that case, the answer to whether there are examples of counterfactual thinking being useful is yes.
"We continue to use counterfactual thoughts to change our future behavior in a way that is more positive, or behavior intention. This can involve immediately making a change in our behavior immediately after the negative event occurred. By actively making a behavioral change, we are completely avoiding the problem again in the future. An example, is forgetting about Mother’s Day, and immediately writing the date on the calendar for the following year, as to definitely avoid the problem.[15] Goal-directed activity
In the same sense as behavior intention, people tend to use counterfactual thinking in goal-directed activity. Past studies have shown that counterfactuals serve a preparative function on both individual and group level. When people fail to achieve their goals, counterfactual thinking will be activated (e.g., studying more after a disappointing grade;[14]). When they engage in upward counterfactual thinking, people are able to imagine alternatives with better positive outcomes. The outcome seems worse when compared to positive alternative outcomes. This realization motivates them to take positive action in order to meet their goal in the future.[16][17]
Markman et al. (1993) identified the repeatability of an event as an important factor in determining what function will be used. For events that happen repeatedly (e.g., sport games) there is an increased motivation to imagine alternative antecedents in order to prepare for a better future outcome. For one-time events, however, the opportunity to improve future performance does not exist, so it is more likely that the person will try to alleviate disappointment by imagining how things could have been worse. The direction of the counterfactual statement is also indicative of which function may be used. Upward counterfactuals have a greater preparative function and focus on future improvement, while downward counterfactuals are used as a coping mechanism in an affective function (1993). Furthermore, additive counterfactuals have shown greater potential to induce behavioral intentions of improving performance.[14] Hence, counterfactual thinking motivates individuals into making goal-oriented actions in order to attain their (failed) goal in the future.
Collective action
On the other hand, at a group level, counterfactual thinking can lead to collective action. According to Milesi and Catellani (2011), political activists exhibit group commitment and are more likely to re-engage in collective action following a collective defeat when they engage in counterfactual thinking. Unlike the cognitive processes involved at individual level, abstract counterfactuals lead to an increase in group identification, which is positively correlated with collective action intention. The increase in group identification impacts on people’s affect. Abstract counterfactuals also lead to an increase in group efficacy. Increase in group efficacy translates to belief that the group has the ability to change outcomes in situations. This in turn motivates group members to make group-based actions to attain their goal in the future.[16][18]
Benefits and consequences
When thinking of downward counterfactual thinking, or ways that the situation could have turned out worse, people tend to feel a sense of relief. For example, if after getting into a car accident somebody thinks “At least I wasn’t speeding, then my car would have been totaled.” This allows for the positives of the situation to be accounted for, rather than the negatives. In the case of upward counterfactual thinking, people tend to feel more guilty or negatively about the situation. When thinking in this manner, people are focusing on ways that the situation could have turned out more positively. For example, “If only I had studied more, then I wouldn’t have failed my test.” This kind of thinking results in feeling guilty and having a lower sense of self-esteem.[14]"