Circular Counterfactuals: "Only that which Happens is Possible"
post by JohnBuridan · 2022-03-23T14:40:36.342Z · LW · GW · 15 comments
“Only that which happens is possible.” -The Megarian School
You make a prediction, it doesn’t happen. Could it have?
Let’s ignore the cognitive bias literature here, and Tetlock’s wonderful data about how many forecasters claim that they were very, very close to being correct, if it weren’t for that one little thing that got in their way. Instead, let’s focus on the logic and metaphysics. Could it have happened? Immediately we are thrust into the world of counterfactual reasoning. What is the status of counterfactuals? Do they exist as ontological primitives, or are they something we discover or invent as a heuristic for thinking? The ramifications are far-reaching, for counterfactuals are a central tool for the way we think about causality. In other words, because many scientific fields employ counterfactual reasoning and search for the causes of phenomena, an account of counterfactuals which supplies a firm foundation should allow us to build clearer and cleaner models for interpreting the world, and in the case of AI, models which interpret the world.
Tetlock is interested in what habits of thought are required to be skilled at making true counterfactual statements, and others on LW are interested in formalisms and in engineering those formalisms to see what comes out. In this paper, I am interested in what metaphysical presuppositions are possible. The purpose here is to see how counterfactuals fit into epistemology, and how different metaphysical assumptions change our understanding of what counterfactuals are. (By metaphysical assumptions, I mean assumptions about the nature of causality and our capacity to identify causal connections.)
I started down investigations of this sort in 2017, because I was worried about the ontological status of two common heuristic tools in behavioral science and LW: the distinction between the inside/outside view and the formation of priors. The inside/outside view distinction seemed to me subjective, and priors were (at the time) assumed to be almost arbitrary, as in it did not matter what priors one started out with so long as they were subject to updating on new evidence. This also seemed wrong to me. I was interested in where good priors come from, the developmental process that leads to “priors hungry for updating”. In 2019 I worked on an Adversarial Collaboration on the nature of counterfactuals which never came to fruition. We wrote about 44 pages of preliminary text before my coauthor left the project.
Since then (at the behest of a bounty) I have refined a few more thoughts on counterfactuals, but I do not think I have found the perfect coherent account. One reason for this is that while reasoning about counterfactuals, I feel forced to switch between different systems for thinking about causality. This creates two systems for thinking about counterfactuals, neither of which I think is exhaustive, nor do they perfectly lead into one another. Perhaps someone else will be able to fully work out my two sets into a complete system. But for now, I offer my best account.
Set 1: Counterfactuals as a Causal Claim
Causality is the central pin around which counterfactuals rotate. So perhaps a short investigation of counterfactuals and causality will demonstrate whether the relationship between them is circular. And that’s where my first set of propositions comes from. They are essentially equivalent to Judea Pearl's views.
The argument works as follows. Within any model we may assume that:
- Counterfactuals are a specific type of conditional prediction.
- Conditional predictions are a specific type of causal claim, that is, one in which two items are linked by a correlation.
- Causal claims are conditional predictions.
- Thus, any counterfactual claim can be restated as a conditional prediction.
- And finally, there is a feedback loop in acts of observation that allows us to create more conditional predictions.
Consider the prediction inherent in this sentence from Ned Hall: “If I had not ducked, the boulder would have knocked my skull.” The prediction is that the boulder would have hit the hiker’s head. The prediction, right or wrong, contains a causal claim – the claim that the forces acting on the falling boulder were such that it would collide with the hiker’s skull. But here is the tricky part! The causal claim contains a counterfactual.
The boulder will hit location X, if nothing changes. [Conditional Prediction]
The boulder would have hit X, had the hiker not moved. [Counterfactual]
I think this demonstrates a circular, i.e. tautological, relationship between some types of causal claims and counterfactual claims.
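To make the equivalence concrete, here is a minimal sketch in Python (all names hypothetical): the conditional prediction and the counterfactual query the same causal function; only the input we hold fixed changes.

```python
# Minimal sketch: the causal claim as a function. The conditional
# prediction and the counterfactual query the same function; they
# differ only in which input value we feed it.

def boulder_hits_x(hiker_moves):
    """Causal claim: the boulder strikes location X unless the hiker moves."""
    return not hiker_moves

# Conditional prediction: "The boulder will hit X, if nothing changes."
assert boulder_hits_x(hiker_moves=False) is True

# What actually happened: the hiker ducked, so no collision.
assert boulder_hits_x(hiker_moves=True) is False

# Counterfactual: "The boulder would have hit X, had the hiker not moved."
# We re-run the same model with the antecedent forced to a non-actual value.
assert boulder_hits_x(hiker_moves=False) is True
```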
But wait a second! I said above, the “prediction, right or wrong, contains a causal claim.” Could the prediction have been right? If the prediction were right, something would have been different. But what would have been different? A dizzying number of things could have been different in the causal history of the universe which would have resulted in the hiker being hit by a rock. But that’s only if some causal relationships could have been different.
And so, notice what I smuggled in: the idea of the hiker not moving. But my counterfactual could have been based upon any number of counterfactual claims of different kinds. “If the hiker had not seen it” puts the counterfactual on the hiker’s perception and agency. “If the boulder had not hit that ridge” places the counterfactual on the physical structure of the mountain. “If the hiker had not been trained in Brazilian Jiu-Jitsu” places the counterfactual on some historical event in the hiker’s past. But there is no principled limit.
Like Clockwork… Metaphysics without Chaos
Let’s stick with a simple metaphysics. A standard 19th century one is that everything has a cause; probabilities are in the observation of phenomena, not in the phenomena themselves. Therefore, if we could account for all phenomena at time t, then we could account for all phenomena at time t − 1 and t + 1.
Even if we take the 19th century physicalist metaphysics as strictly true, counterfactuals still have no independent existence outside of the causal account. The counterfactual claim is useful because it allows us to bracket off the universe. The causal claim and the counterfactual claim are not made simultaneously.
But notice we are talking about causal and counterfactual claims in language.
We can go deeper into this line of inquiry and make a stronger claim about counterfactuals. Counterfactuals are more than the mere result of observations made in an attempt to formulate more accurate causal claims. The formulation of a causal claim implies a deeper type of counterfactual. Consider the following example, taken more-or-less from Kant.
I am sitting on the beach on a bright day and my exoskeleton warms up. This perception registers in my brain as a relationship between “the sun” and warming up. And as stated earlier, such a perception implies a counterfactual, in this case a testable one, that if I were to remove myself from the “sun” I would cease heating up. But perhaps at some point I am able to make a causal claim. “The sun causes heat through its light.” An understanding of the term “sun” will imply a counterfactual statement that is different in type from the first one. That statement is one of logical necessity: If it doesn’t emit light, it’s not the sun.
This formulation of counterfactual claims and causal claims as logically contemporaneous implies that understanding causality is equivalent to understanding counterfactuals.
So, what we have here is the idea that counterfactuals can be taken as a mode of conditional prediction, and also as a logically equivalent transformation of a causal claim.
I wanted to try to apply this reasoning to Newcomb’s Problem in two ways. In the first way, I remove all agents to demonstrate counterfactuals and causality. In the second instance, I apply the logic to the boring old Newcomb’s Problem. This, I think, will demonstrate that counterfactual formation determines our understanding of Newcomb’s Problem.
Application to Newcomb’s Atmospheric Molecules:
A sexless oxygen atom might bond with either O2 or CO. If it bonds with O2, it gets a thousand dollars. If it bonds with CO, it gets a million dollars. If placed equidistant from the two compounds, it bonds with CO every time. It never bonds with both for the biggest possible reward. But oxygen doesn’t care about money; it just bonds with whatever allows it to fill its orbital shells most easily, which happens to be CO. There is no tricking out the laws of chemistry to make the atom form CO3.
Either O + CO -> CO2, or O + O2 -> O3 (ozone)
This is our prediction about the two possible outcome states.
Our observation is that O + CO -> CO2. Yay! 1 million dollars for the Oxygen atom!
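As a toy sketch of why no counterfactual branch ever obtains here (the bonding rule below is a deliberate simplification, not real chemistry):

```python
# Toy sketch of the agentless Newcomb setup: a deterministic rule fixes
# the outcome, so the "losing" branch never obtains. (Deliberately
# simplified chemistry, for illustration only.)
def oxygen_bonds_with(available):
    # Assumed rule: filling orbital shells is easiest with CO, so CO
    # wins whenever it is on offer.
    return "CO" if "CO" in available else "O2"

print(oxygen_bonds_with({"CO", "O2"}))  # "CO" every time: the million dollars
```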
So then we can start making counterfactual claims, of which there is no limit. “It did not create O3 because reasons.” “It did not create CO3, because reasons.” This rather contrived and silly example of a counterfactual is to point out that nothing within the notion of a counterfactual relies upon intelligence. Counterfactuals do not merely reflect causal claims, but rather are an endless opportunity for new causal conjectures, most of which are nonsense. If a counterfactual is about the world, then it must be circular with reference to predictions and observations.
The general form of principal-agent games begins with a small causal claim about a causal chain and builds observations into predictions and counterfactuals for a more robust model of behavior.
Could your prediction have gone right? Yes, but only if the predictions and observations of the agents had been different. This is not about any decision algorithm per se, but the results of an algorithm must have the possibility of being different in order for the prediction to have gone right. In the case of molecules in a vacuum, there is no difference possible.
When we define counterfactuals with reference to causal claims, counterfactuals are circular, in that an account of counterfactuals requires using counterfactual reasoning, because all accounts require counterfactual reasoning. In the same way, all accounts of causality make use of causality.
But what if counterfactuals are not about causal claims? Many of them don’t seem to be, do they? In fact, many counterfactual claims are made without a second thought as to whether the counterfactual is possible or whether the event in question was “overdetermined,” which brings us out beyond the original hypothesis to a new set of questions.
+++
Set 2: Counterfactuals as Claims About Unobserved Causal Chains
Counterfactuals require making a claim about causal chains. So a counterfactual is still a prediction, but a strong version of one: an algorithm-dependent prediction. This type of counterfactual is a claim about a causal chain that didn’t occur or, perhaps even, can’t be observed.
So, the argument for the truth of the unobserved causal chain must be based on an analogy to some observed causal chain. The analogy will only work if the observed causal chain is isomorphic to the unobserved causal chain. Therefore, counterfactuals are claims about the isomorphism of an unobserved causal chain.
Consider the Viking explorations of the Americas. Could the Vikings have established a medieval empire in the Americas? In all historical What If scenarios, the argument will rest on a presumed geometric similarity between the causal forces in the scenario that did occur and the one that didn’t.
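One minimal sketch of what checking such a similarity claim could look like, assuming we represent causal chains as directed graphs (networkx assumed available; all node labels hypothetical):

```python
# Sketch: a counterfactual as a claim that an unobserved causal chain
# has the same structure as an observed one. networkx's isomorphism
# check compares only the shape of the graphs, which is exactly the
# (strong) assumption historical What Ifs lean on.
import networkx as nx

# An observed colonization chain (hypothetical labels).
observed = nx.DiGraph([
    ("colonists", "supply_lines"),
    ("supply_lines", "settlements"),
    ("settlements", "empire"),
])

# The hypothesized, unobserved Viking chain, built by analogy.
hypothesized = nx.DiGraph([
    ("vikings", "supply_lines"),
    ("supply_lines", "settlements"),
    ("settlements", "empire"),
])

print(nx.is_isomorphic(observed, hypothesized))  # True: same chain shape
```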
However, sometimes we can control the environment and test counterfactual claims. In those cases, we can check the isomorphism. When this occurs a counterfactual is ultimately not a claim about what is possible, but what happens. Only that which happens is possible, the rest is inferential conjecture.
General form: Counterfactuals are claims about unobserved causal chains, meaning that they imply an entire network of causal-relations claims. However, since everything has a cause, a mistaken counterfactual is not ‘almost’ correct.
In this second account, I did not need to make a claim about unobserved causal chains in order to explain counterfactuals. That is because in this case a counterfactual is a special type of causal claim, and not a type I need to make in order to explain what a counterfactual is.
The application is thus: Could your prediction have gone right? No. The prediction was always going to be wrong because the predictor’s implicit claim about causal chains was incorrect.
So here in this paper I have pointed out a distinction between two different types of counterfactuals. The first type is essentially a related form of any causal claim, and the second type is a claim about a set of unobserved causal chains.
Here are two examples of each.
- If not for its heat, the sun would be cold.
- If not for Thomas Schelling, there would be no book The Strategy of Conflict (1960).
- If the Athenians had stuck with Pericles’ strategy, they would have won the Peloponnesian War.
- If the Federal Reserve doesn’t raise interest rates, there will be endemic inflation.
I intentionally chose statements which look very similar but in fact are quite different.
The first two statements are based on a causal observation of the terms’ uses.
The second two statements are based on a causal conjecture of unstated relationships between the terms, that is, an implicit understanding of a possible causal connection.
One could respond that the distinction is one of degree and not of kind, in two directions. The Humean objection is that the terms are conjoined by the mind, not by anything outside of it. But that would leave us in the odd situation in which we commit ourselves to the idea that no relationships are causal. I think most people here would reject that.
Alternatively, we could say that all causal claims are probabilistic, and that the more steps there are between the terms, the more difficult the causal chain becomes to prove. However, this conflates certain types of causality. To soften the dilemma, I just offer the very normal caution to pay attention to the class types one is dealing with.
Stars and heat are logically necessarily connected. Thomas Schelling and “author of The Strategy of Conflict” is an identity relationship.
A particular strategy and an outcome of a war is a claim about the relationship among a complex set of interacting factors. And the same goes for the Federal Reserve, inflation, and interest rates. (Complexity introduces dynamic and nonlinear relationships and stepwise functions, and thus the possibility for different types of causality than a mere monist view of causes allows.)
Final Question and Aside for Future Consideration:
I haven’t had the time to flesh this one out, but let me give one motivating example and leave it at that. Within one domain of knowledge, Orion knows K and Perpetua knows L. Lorien knows K ∪ L, and nothing more. Since Orion doesn’t know L, how can Orion figure out what he needs to learn in order to know K ∪ L and nothing more? Perpetua can’t help him because she doesn’t know K. Only Lorien is capable of taking the Knowledge Space and constructing a Learning Space, which goes from K to K ∪ L with no gaps and no excess. To teach Orion step y, Lorien is going to have to create a causal diagram that goes from the original state to the desired state. The causal diagram will be reverse engineered back to Orion’s current knowledge state. Each step will require a correct counterfactual claim. But how does Lorien check that her claims about the learning space are correct without testing them on Orion? Is it possible? How close can she get?
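Here is one minimal sketch of Lorien's construction, with the items and prerequisite structure entirely made up: order the items missing from Orion's state so that each step's prerequisites are already known.

```python
# Hypothetical sketch of Lorien's learning-space construction: order the
# items in (K | L) - K so that every step's prerequisites are already
# known -- no gaps, no excess. Items and prerequisites are made up.

K = {"a", "b"}                                       # Orion's state
L = {"c", "d", "e"}                                  # Perpetua's state
prereqs = {"c": {"a"}, "d": {"c"}, "e": {"b", "c"}}  # assumed structure

def learning_path(known, target, prereqs):
    known, path = set(known), []
    missing = set(target) - known
    while missing:
        # An item is teachable once all of its prerequisites are known.
        ready = sorted(x for x in missing if prereqs.get(x, set()) <= known)
        if not ready:
            # A counterfactual claim in Lorien's model was wrong: gap found.
            raise ValueError("no teachable item; the learning space has a gap")
        step = ready[0]
        path.append(step)
        known.add(step)
        missing.remove(step)
    return path

print(learning_path(K, K | L, prereqs))  # ['c', 'd', 'e']
```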
Conclusion
TL;DR I have pointed to two types of counterfactual claims: ones that are based upon an explicit causal model, and ones based upon an implicit causal model. Counterfactuals based upon an explicit causal model are circular with respect to causality because they are merely alternate forms of a conditional prediction based on the model.
On the other hand, many counterfactual claims are based upon implicit causal models and serve as a guide for framing the problem. I have been calling them causal conjectures of unstated relationships. But one could also call them implicit causal claims, or claims of possible isomorphism, or “all else being equal” or mutatis mutandis counterfactuals. The basic idea is that in these counterfactuals one is making a claim that x could change and all else remain unchanged, creating an isomorphism between our world and the counterfactual world. These are the interesting type of counterfactuals, but they are also the ones in which mathematical chaos is more likely, and thus the counterfactual claim itself is more likely to be nonsense.
15 comments
Comments sorted by top scores.
comment by Shmi (shminux) · 2022-03-24T01:10:31.450Z · LW(p) · GW(p)
Since counterfactuals are in the map, not in the territory, the only way to evaluate their accuracy is by explicitly constructing a probability distribution of possible outcomes in your model, checking the accuracy of the model and pointing out the probability of a given counterfactual in it. This approach exposes trivial and nonsensical statements as well, since they fail to map to a probability distribution of possible outcomes. Not everything that sounds meaningful as an English sentence is.
For example
"If not for its heat, the sun would be cold" -- how do you unpack this even? Is it a tautology of the type "all objects that are not hot are cold"? Maybe we could steelman it a bit: "if we didn't feel the heat of the sun, it would be cold" -- this is less trivial, since there are stellar objects that are hot but don't emit much heat, like white dwarfs, or so hot they emit mostly in the X-ray or gamma spectrum, which aren't really considered "heat". The latter can be modeled: looked into the distribution of stars and their luminosities and spectra, see which one would count as a "sun", then evaluate the fraction of stellar objects that do not emit heat and are also cold.
"If not for Thomas Schelling, there would be no book The Strategy of Conflict 1960." What does it mean? How do you steelman it so it is not a trivial claim "Thomas Schelling wrote the book The Strategy of Conflict 1960"? In a distribution of possible worlds in your model of the relevant parts of reality, what is the probability distribution of a book like that, but possibly with a different title, written by someone like Schelling, but with a different name?
"If the Athenians had stuck with Pericles’ strategy, they would have won the Peloponnesian War." -- this one is a more straightforward counterfactual, construct a probability distribution of possible outcomes of the war in case the Pericles' strategy was followed, given what you know about the war, the strategy, the unknowns, etc.
"If the Federal Reserve doesn’t raise interest rates, there will be endemic inflation." -- this is not a counterfactual at all, but a straightforward prediction, though the rules are the same: construct the probability distribution of possible outcomes first.
↑ comment by JohnBuridan · 2022-03-24T20:02:32.560Z · LW(p) · GW(p)
Since counterfactuals are in the map, not in the territory, the only way to evaluate their accuracy is by explicitly constructing a probability distribution of possible outcomes in your model...
"The only way", clearly not all counterfactuals rely on an explicit probability distribution. The probability distribution is usually non-existent in the mind. We rarely make them explicitly. Implicitly, they are probably not represented in the mind as probability distributions either. (Rule 4: neuroscience claims are false.) I agree that it is a neat approach in that may expose trivial or nonsensical counterfactuals. But that approach only works the consequent is trivial or nonsensical. If the antecedent is trivial or nonsensical, the approach requires a regress.
"If not for Thomas Schelling, there would be no book The Strategy of Conflict 1960." What does it mean? How do you steelman it so it is not a trivial claim "Thomas Schelling wrote the book The Strategy of Conflict 1960"?
My point is that higher level categories are necessary, and yet have to come from somewhere. I am not taking the author-work relationship as self-evident.
↑ comment by Shmi (shminux) · 2022-03-25T05:35:17.329Z · LW(p) · GW(p)
Right, definitely not "the only way". Still I think most counterfactuals are implicit, not explicit, probability distributions. Sort of like when you shoot a hoop, your mind solves rather complicated differential equations implicitly, not explicitly.
The probability distribution is usually non-existent in the mind.
I don't know if they are represented in the mind somewhere implicitly, but my guess would be that yes, somewhere in your brain there is a collection of experiences that get converted into "priors", for example. If 90% of your relevant experiences say that "this proposition is true" and 10% say that "this proposition is false", you end up with a prior credence of 90% seemingly pulled out of thin air.
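(In code, that guess might look like a simple tally; the Laplace smoothing here is just one assumed way to turn counts into a credence:)

```python
# Sketch of a "prior pulled out of thin air" as an implicit tally of
# experiences. Laplace smoothing is one assumed way to turn raw counts
# into a credence.
supporting, opposing = 90, 10
prior = (supporting + 1) / (supporting + opposing + 2)
print(f"{prior:.2f}")  # ~0.89: close to the naive 90%
```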
comment by tailcalled · 2022-03-23T17:32:57.867Z · LW(p) · GW(p)
Tetlock is interested in what habits of thought are required to be skilled at making true counterfactual statements.
That doesn't seem true to me? He's interested in what is required to be skilled at making accurate predictions, but predictions are rung 1 (non-causal/statistical) while counterfactuals are rung 3 on Pearl's causal ladder.
The argument works as follows. Within any model we may assume that:
- Counterfactuals are a specific type of conditional prediction.
- Conditional predictions are a specific type of causal claim, that is, one in which two items are linked by a correlation.
This seems incorrect; due to confounders, you can have nontrivial conditionals despite trivial counterfactuals, and due to suppressors (and confounders too for that matter), you can have nontrivial counterfactuals despite trivial conditionals.
↑ comment by JohnBuridan · 2022-03-24T20:17:14.665Z · LW(p) · GW(p)
Some of Tetlock's most recent research has used videogames for studying counterfactuals. I participated in both the Civ 5 and Recon Chess rounds. He is interested in prediction, but sees it as a form of counterfactual too. I don't quite see the tension with Pearl's framework, which is merely a manner of categorization in terms of logical priority, no?
This seems incorrect; due to confounders, you can have nontrivial conditionals despite trivial counterfactuals, and due to suppressors (and confounders too for that matter), you can have nontrivial counterfactuals despite trivial conditionals.
I think you might be right? Let me make sure I am understanding you correctly.
In A -> B, if A is trivial, B can still be nontrivial, and vice versa? This makes sense to me.
↑ comment by tailcalled · 2022-03-24T20:37:15.395Z · LW(p) · GW(p)
I think you might be right? Let me make sure I am understanding you correctly.
In A -> B, if A is trivial, B can still be nontrivial, and vice versa? This makes sense to me.
No, I mean stuff like:
Whether you have long hair does not causally influence whether you can give birth; if someone grew their hair out, they would not become able to give birth. So the counterfactuals for hair length on ability to give birth are trivial, because they always yield the same result.
However, you can use hair length to conditionally predict whether someone can give birth, since sex influences both hair length and ability to give birth. So the conditionals are nontrivial because the predicted ability to give birth depends on hair length.
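(A small simulation with made-up rates shows the split between the two notions:)

```python
# Simulation sketch of the hair-length example: sex confounds hair length
# and ability to give birth. Conditionals come out nontrivial even though
# intervening on hair changes nothing. Rates are made up.
import random

def sample():
    sex = random.choice(["female", "male"])
    long_hair = random.random() < (0.8 if sex == "female" else 0.2)
    can_give_birth = (sex == "female")   # hair has no causal arrow here
    return long_hair, can_give_birth

data = [sample() for _ in range(100_000)]
p_long = sum(b for h, b in data if h) / sum(1 for h, b in data if h)
p_short = sum(b for h, b in data if not h) / sum(1 for h, b in data if not h)
print(f"P(birth | long hair) ~ {p_long:.2f}")    # ~0.80: nontrivial conditional
print(f"P(birth | short hair) ~ {p_short:.2f}")  # ~0.20
# Counterfactual: setting long_hair by intervention leaves can_give_birth
# untouched, because it depends only on sex -- a trivial counterfactual.
```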
comment by Pattern · 2022-03-23T16:05:05.090Z · LW(p) · GW(p)
Tl;dr, read Are there other counterfactuals? and Causal claims that are explicit enough it is clear they don't fit into this taxonomy.
Contents:
Are there other counterfactuals?
Errors?
Unobserved factors (even factors we have no way of observing)
The difficulty of a full specification
Causal claims that are explicit enough it is clear they don't fit into this taxonomy
Are there other counterfactuals?
1. Does quantum physics allow for 'possibilities which do not happen'? (I recognize this is a partially circular question.)
2. Are there other types of counterfactuals (than classical, quantum, model)?
Errors?
sexless oxygen atom
A what?
Counterfactuals do not merely reflect causal claims, but rather are an endless opportunity for new causal conjectures, most of which are nonsnece. If a counterfactual is about the world, then it must be circular with reference to predictions and observations.
emphasis is my own
nonsense
Unobserved factors (even factors we have no way of observing)
Could your prediction have gone right? Yes, but only if the predictions and observations of the agents had been different.
Predictions not made with full info might:
- ignore unobserved factors*
- guess their values/positions/existence (or not)
*Do you think the cancer cells in this Petri dish will survive? Yes. [Pulls out gun, shoots the Petri dish.] Looks like you were wrong.**
**And here we arrive at claims not separate from the world that might be 'always false' - that result can be forced via action. By attaching the action to prediction things can be made 'always right' or 'always wrong' as desired. (Such an attempt can fail - the well treated cells in the Petri dish might still die. But if they probably don't, then this move works.)
The difficulty of a full specification
Only that which happens is possible, the rest is inferential conjecture.
With a simple deterministic environment, then if you fully specified it (variable X has this value...), then if the environment actually takes those values, then the hypothesis is actually 'tested' (or observed) to be true or false.
However, environments aren't simple. What values held such that a Viking empire didn't get established? (Assuming it didn't.)
Consider the Viking explorations of the Americas. Could the Vikings have established a medieval empire in the Americas? In all historical What If scenarios, the argument will rest on a presumed geometric similarity between the causal forces in the scenario that did occur and the one that didn’t.
Causal claims that are explicit enough it is clear they don't fit into this taxonomy
The claims about counterfactuals, or causal claims, only allow for one prediction. 'I flip a coin, it may come up heads or tails after ending up flat.' Here I am a) making two causal claims and saying one is correct, or b) being explicit about my model not being complete, and leaving the outcome open as contingent upon unobserved, or not yet fixed, factors. (This may make more sense to do if you are flipping the coin.)
comment by Chris_Leong · 2022-03-27T06:06:36.432Z · LW(p) · GW(p)
A few thoughts:
- You wrote "Counterfactuals are a specific type of conditional prediction". I'm not really a fan of this phrasing. What happens is that we change the world model then make a prediction. This isn't the same as conditioning on an observation where the world model is left intact (which is how EDT tries to generate counterfactuals)
- You describe the sun producing heat as a logically necessary connection. I disagree, at most it's a linguistic convention, rather than a matter of logical necessity.
- "Conditional predictions are a specific type of causal claim, that is, one in which two items are linked by a correlation" - I'm confused by this statement as correlation != causation
- "Since Orion doesn’t know __L__, how can Orion figure out what he needs to learn in order to know __K __U __L __and nothing more?" - I'm confused. Doesn't Orion already know more than this in that they can narrow it down to the K region which is a subset of the union. Could you clarify what union means here?
↑ comment by JohnBuridan · 2022-04-01T21:45:50.559Z · LW(p) · GW(p)
- I agree that the world model is changing. I should have specified that what makes a counterfactual a specific type is that it conditions upon a change in the world state. Thus later, when I wrote, "The causal claim contains a counterfactual," I could improve the clarity by pointing out that the causal claim contains a 'world state variable' which when altered creates a counterfactual world.
- It is certainly more than a linguistic convention. When one applies the concept of causality to their experience and knowledge about the big bright glowing orb and to their knowledge and experience of heat, eventually the referents of the two terms "sun" and "heat" are shown to have a necessary connection, such that in any stable, univocal use of the terms there is a logical connection. I probably should have stuck to my draft version of the sun-heat paragraph, which, inspired by a section from Kant, was also significantly longer.
- Yes, correlation is not causation, but when two things are correlated, there is either no relation or some relation. But when I make a conditional prediction, I assume that the antecedent and the consequent have some sort of causal connection either directly, or through a backdoor, or through colliders. Now I realize conditional predictions don't actually require the assumption that there is a link. Yet, I am going to assume people are trying to make meaningful speech acts, and in this context, a conditional prediction implies that there is a correlation that is in some perhaps unknown way indicative of causal forces.
- It makes sense to say that if Orion knows that he only knows K, then he actually knows more than K by virtue of knowing K at the limit. I disagree with this. He knows K only. He does not know that Lorien knows the union of K and L. Nor does he know that Lorien knows only K and L and no more. Nor does he know that K and L exist as a union. I think Lorien should be able to identify that Orion knows K and only K, and create a path for him to learn K ∪ L. But I am skeptical there is any way for Orion to figure out the path from his knowledge state of K to her knowledge state of K ∪ L.
comment by TAG · 2022-03-24T22:29:46.496Z · LW(p) · GW(p)
If placed equidistant from the two compounds, it bonds with CO every time.
You are portraying a molecule-level interaction as strictly deterministic. That's misleading, in a way, although it also makes the point that everything about counterfactuals is ultimately about physics, not decision theory... what decision you actually make does not float free of physics.
This is not about any decision algorithm per se, but the results of an algorithm must have the possibility of being different in order for the prediction to have gone right. In the case of molecules in a vacuum, there is no difference possible
Now you make it even more explicit that you are assuming physical determinism, and at the molecular level too.
Determinism straightforwardly implies that there are no real, not-in-the-mind counterfactuals, because it implies that everything happens with an objective probability of 1.000, so the things that didn't happen had no probability of happening.
Indeterminism can easily be expressed in terms of probability: it's the claim that an event B, conditional on an event A, can have an objective probability of less than 1.0, with the corollary that further alternatives C and D can also happen, since they have probability greater than 0.0. Given the premise, such alternatives have to exist, since the overall probability has to add up to one. The existence of such unactualised possibilities is equivalent to the existence of real counterfactuals: so indeterminism and counterfactual realism are equivalent to each other.
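In symbols, one way to paraphrase the claim:

```latex
% Outcomes conditional on A exhaust the probability, so a sub-unity
% conditional forces real alternatives.
P(B \mid A) + P(C \mid A) + P(D \mid A) + \dots = 1
\qquad\Longrightarrow\qquad
P(B \mid A) < 1 \;\implies\; \exists X \neq B : P(X \mid A) > 0
```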
So the Megarian claim ...
“Only that which happens is possible.”
... is conditional on the truth of determinism. It's an implication of determinism; it's not true per se.
You make a prediction, it doesn’t happen. Could it have?
If you want to know whether it really could, you need to know whether there is real indeterminism, and whether the prediction was one of the allowable possibilities. You can't figure it out from decision theory.
↑ comment by JohnBuridan · 2022-03-25T02:59:14.513Z · LW(p) · GW(p)
Exactly right. As I said above, "let's presume a simple metaphysics," i.e. determinism. It's not only that if physical determinism were true, counterfactuals would only exist in the mind, but also that those counterfactuals made in the mind can only work as a heuristic for gleaning new information from the environment if you assume determinism with regard to some things but not others.
This model is interesting because we must construct a probabilistic universe in our minds even if we in fact inhabit a deterministic one.
comment by TAG · 2022-03-23T18:51:23.875Z · LW(p) · GW(p)
So, the argument for the truth of the unobserved causal chain must be based on an analogy to some observed causal chain. The analogy will only work if the observed causal chain is isomorphic to the unobserved causal chain. Therefore, counterfactuals are claims about the isomorphism of an unobserved causal chain.
Exact isomorphism isn't useful. The only thing that's exactly isomorphic to Viking colonisation of the US is Viking colonisation of the US.
Usual disclaimer: there is no problem of counterfactuals in mainstream philosophy (only a problem of why rationalists see a problem). The following is a mainstream account, not a novel theory:
Any causal interaction can be understood as a token of a type. For instance, an apple falling from a tree is a token of the type "massive objects are subject to gravitational forces". The type of a causal interaction defines a set of conditionals, and (this is the important part) not all of the conditionals can occur in a given token. The apple hits the ground first, or it hits Isaac Newton's head first -- it can't do both. So a real counterfactual is, very straightforwardly, a conditional outcome that didn't occur in that particular token. But it's still possible to make inferences about the non-actual conditionals, the counterfactuals of a causal token, because they are inferred from the structure of the type.
↑ comment by JohnBuridan · 2022-03-24T20:29:52.400Z · LW(p) · GW(p)
Are types also tokens of types? And can we not and do we not have counterfactuals of types?
I'm not a type-theory expert, but I was under the impression that adopting it as an explanation for counterfactuals precommits one to a variety of other notions in the philosophy of mathematics?
↑ comment by TAG · 2022-03-24T22:19:59.491Z · LW(p) · GW(p)
Are types also tokens of types?
Maybe. But what implications does that have? What does it prove or disprove?
Edit:
We tend to think of things as evolving from a starting state, or "input", according to a set of rules or laws. Both need to be specified to determine the end state or output, as much as it can be determined. When considering counterfactuals, we tend to imagine variations in the starting state, not the rules of evolution (physical laws), though if you want to take it to a meta level, you could consider counterfactuals based on the laws being different.
But why?
I’m not a type-theory expert, but I was under the impression that adopting it as an explanation for counterfactuals precommits one to a variety of other notions in the philosophy of mathematics?
-
I wasn't referring to the type/token distinction in a specifically mathematical sense...it's much broader than that.
-
Everyone's committed to some sort of type/token distinction anyway. It's not like you suddenly have to buy into some weird occult idea that only a few people take seriously. In particular, it's difficult to give an account of causal interactions without physical laws... and it's difficult to give an account of physical laws without a type/token distinction. (Nonetheless, rationalists don't seem to have an account of physical laws.)