What is it like to be a compatibilist?
post by tslarm · 2023-05-05T02:56:45.084Z · LW · GW · 9 comments
This is a question post.
Contents
Answers: 9 JBlack · 6 clone of saturn · 5 Richard_Kennaway · 4 Seth Herd · 4 Ape in the coat · 3 pjeby · 3 Viliam · 1 the gears to ascension · -2 TAG · 9 comments
I'd like to better understand how compatibilists conceive of free will.[1] LW is a known hotbed of compatibilism, so here's my question:
Suppose that determinism is true. When I face a binary choice,[2] there are two relevantly-different states of the world I could be in:[3]
State A: Past events HA have happened, current state of the world is A, I will choose CA, future FA will happen.
State B: Past events HB have happened, current state of the world is B, I will choose CB, future FB will happen.
When I make my choice (CA or CB), I'm choosing/revealing which of those two states of the world is (my) reality. They're package deals: CA follows from HA just as surely as it leads to FA, and the same holds for state B.
Which seems to give me just as much control[4] over the past as I have over the future. In whatever sense I 'exercise free will' to make CA real and bring about FA, I also make it the case that HA is the true history.
My question is: Does this bother you at all, and if not, why not?[5]
- ^
Yes, I've done my own reading, though admittedly it's been a while. I never found a satisfying (to me) answer to this question, and to the best of my recollection I rarely saw it clearly addressed in a form I recognised. If you want to link me to a pre-existing answer, please do, but please be specific: less 'read Dennett' and more 'read this passage of this work'.
- ^
Maybe no real choice is truly binary, but for the sake of simplicity let's say this one is. I don't think that changes anything important.
- ^
For simplicity I'm taking the physical laws as a given. I don't think that matters unless free will involves in some sense choosing which set of physical laws holds in reality.
- ^
Not necessarily in every sense in which you might want to use the word 'control'; you might define that word such that it only applies to causal influence forward in time. But yes in the sense that whatever I can do to make my world the one with FA in it, I can do to make my world the one with HA in it.
- ^
Answers
It does not bother me at all, since it doesn't actually address any of the factors that are relevant to my compatibilist position on free will.
The first part to understand is that I see the term "free will" as having a whole range of different shades of meaning. Most of these involve questions of corrigibility, adaptability, predictability, moral responsibility, and so on. Many of these shades of meaning are related to each other. Most of them are compatible with determinism, which is why I would describe my position as mostly compatibilist.
The description given in this post doesn't appear to be related to any of these, but rather concerns mere physical correlation in a toy universe simplified beyond the point of recognizability or relevance. Further questions would need to be answered in order to even begin to consider whether the agent in this post's question has "free will" in any of the relevant senses. For example:
- To what extent does the agent know the relation between the H's, C's and F's?
- Would the deciding agent perceive HA and HB as being identical up to the point of decision?
- Is it the same agent making the decision in universes HA and HB?
- What basis for judgement is used for the preceding answer?
In a fairly "central" example, my expectation would be:
- The agent does not know these relations;
- The agent does perceive HA and HB as being identical;
- In most important respects the agents are considered to be "the same", by some sort of criterion such as:
- They themselves would recognize each other's memories, personalities, and past decisions as being essentially "their own". (They may diverge in the future.)
In this case I would say that this agent (singular, due to the third answer) has free will in most important respects (mostly due to answer 2 but also somewhat due to 1), can be said to choose CA or CB, influences FA or FB but does not choose them, and likewise does not choose HA or HB.
If you have different answers to those questions, my answers and the reasons behind them may change.
↑ comment by tslarm · 2023-05-05T08:58:17.211Z · LW(p) · GW(p)
Thanks. One clarifying question: When you say that the agent "can be said to choose CA or CB, influences FA or FB but does not choose them, and likewise does not choose HA or HB", do you mean that they influence but do not choose HA or HB, or that they neither influence nor choose HA or HB? (My guess is the latter, because you would restrict 'influence' to forward-in-time causation, but I want to make sure I'm not misunderstanding.)
I think the reason my little scenario seems irrelevant to you is related to disagreement over this:
I see the term "free will" as having a whole range of different shades of meaning. Most of these involve questions of corrigibility, adaptability, predictability, moral responsibility, and so on.
I think the pre-theoretic concept of free will has implications for those sorts of questions, but I don't think most of them are part of what it means. I think what most people are trying to point at when they talk about free will is something along the lines of 'ability to do otherwise', in the sense that, when looking at a choice in retrospect, we would say a person 'had the ability to do otherwise' than they actually did. So to me your version seems like a redefinition of the original concept, rather than a meaning-preserving addition of rigor.
Replies from: JBlack↑ comment by JBlack · 2023-05-06T01:22:02.987Z · LW(p) · GW(p)
I would say that they neither choose nor influence HA and HB, assuming that the universe in question follows some sort of temporal-causal model. Non-causal universes or those in which causality does not follow a temporal ordering are much more annoying to deal with and most people don't have them in mind when talking about free will, so I wouldn't include them in exploration of a more 'central' meaning. However, there is some literature in which the concept of free will in universes with other types of determinism is discussed.
I distinguish between "influence" and "choice" since answer 1 posited that the relationship between the various parts of the universe wasn't known to the agent. The agent does not know that future Fx follows choice Cx nor that Cx follows from past Hx, and by answer 2 does not even know the difference between HA and HB. If FA includes some particular outcome OA that causally follows from CA but isn't in FB, and the agent choosing CA does not know that, then I would not say that the agent chose OA. They chose CA, which influenced OA.
There are lots of different ways to address different forms of "ability to do otherwise", each of which is useful and relevant to different questions about free will, and so they all lead to different shades of meaning for "free will", even starting from nothing more than what you've just said. However, different people communicate different explicit and implicit assumptions about what "free will" means in their communication, and so necessarily mean somewhat different things by the term. Each of the aspects I mentioned in my post comes from multiple respected writers on the subject of free will.
So no, it's not a redefinition. It's a recognition that the meaning of the term in practice varies with person and context, and that it doesn't so much have a single meaning as a collection of related meanings. From long experience, proposing a much more specific definition is one of the surest ways to end up squabbling pointlessly over semantics. This is one of the major failure modes of discussions of free will, and where possible I prefer to start from a point of recognizing that it is a broad term, not a narrow one.
It doesn't bother me, because I'm me, with the propensity to make the choices I'm determined to make. If I had chosen otherwise, I would not be me.
Suppose I love chocolate ice cream and hate vanilla ice cream. When I choose to eat chocolate ice cream, it's an expression of the fact that I prefer chocolate ice cream. I have free will in the sense that if I preferred vanilla instead, I could have chosen vanilla, but in fact I prefer chocolate so I won't choose vanilla.
I believe that Eliezer's analysis of "free will" answers your question. Free will (he says) is neither outside of an otherwise lawful universe, nor incompatible with a lawful universe, nor merely compatible with a lawful universe, but requires a lawful universe. He dubs this position "requiredism" [LW · GW].
I find this not merely convincing, but obviously right. What do you think?
↑ comment by Vladimir_Nesov · 2023-05-06T14:09:44.726Z · LW(p) · GW(p)
This is somewhat dated in the sense that LW-style decision theory later converged on treating agents-that-make-decisions as abstract algorithms rather than as their instances embedded in the world, see discussion of "algorithm" axis of classifying decision theories in this post [LW · GW].
Replies from: Richard_Kennaway, TAG↑ comment by Richard_Kennaway · 2023-05-06T14:57:14.611Z · LW(p) · GW(p)
With TAG, I don't see what their decision theory has to do with the matter. Whatever their decision theory, it is impotent to achieve anything unless their physical instances embedded in the world are able to physically act in the world to achieve their aims, which is the thing that incompatibilists deny.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2023-05-06T15:08:50.711Z · LW(p) · GW(p)
The point is about the frame of Yudkowsky's explanation, where "you are physics" instead of "you are an algorithm". The latter seems convergently more useful for decision theory of embedded agents, which can be predicted by other agents or can have multiple copies. So this doesn't concern some prior meaning of "free will", it motivates caring about a notion of free will that has to do with abstract computations of agent's decisions rather than agent instances embedded in the world.
Replies from: Richard_Kennaway, TAG↑ comment by Richard_Kennaway · 2023-05-06T15:55:59.071Z · LW(p) · GW(p)
You are an algorithm embedded in physics. You are not any of the other people executing this algorithm, you are this one. Conducting yourself according to these decision theories still causes the physical actions only of this one, and is only acausally connected to the others of which these theories speak. Deciding as if deciding for all is different from causally deciding for all.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2023-05-06T16:25:45.771Z · LW(p) · GW(p)
You are an algorithm embedded in physics. You are not any of the other people executing this algorithm, you are this one.
There is an algorithm and the person executing the algorithm, different entities. Being the algorithm, you are not the person executing it. The algorithm is channeled by the person instantiating it concretely (in full detail) as well as other people who might be channeling approximations to it, for example only getting to know that the algorithm's computation satisfies some specification instead of knowing everything that goes on in its computation.
Conducting yourself according to these decision theories still causes the physical actions only of this one
The use of "you are the algorithm" frame is noticing that other instances and predictors of the same algorithm have the same claim to consequences of its behaviors, there is no preferred instance. The actions of the other instances and of the predictors, if they take place in the world, are equally "physical" as those of the putative "primary" instance.
only acausally connected to the others of which these theories speak
As an algorithm, you are acausally connected to all instances, including the "primary" instance, in the same sense, by their reasoning about you-the-algorithm.
Deciding as if deciding for all is different from causally deciding for all.
I don't know what "causally deciding" means for algorithms. Deciding as if deciding for all is actually an interesting detail, it's possible to consider its variants where you are only deciding for some, and that stipulation creates different decision problems depending on the collection of instances that are to be controlled by a given decision (a subset of all instances that could be controlled). This can be used to set up coalitions of interventions that the algorithm coordinates. The algorithm instances that are left out of such decision problems are left with no guidance, which is analogous to them failing to compute the specification (predict/compute/prove an action-relevant property of algorithm's behavior), a normal occurrence. It also illustrates that the instances should be ready to pick up the slack when the algorithm becomes unobservable.
↑ comment by TAG · 2023-05-06T14:39:52.232Z · LW(p) · GW(p)
Requiredism holds that determinism is an advantage to free will, because the connection between a decision and the resulting action is deterministic. Randomness, or at least too much randomness in the wrong place, would prevent me from acting reliably on my decisions. Of course, determinism also removes the elbow room, the ability to have decided differently, that is of such concern to libertarians. Determinism is only an overall advantage to free will if elbow room is unimportant or impossible, so requiredism needs compatibilism as a starting point.
I don't think it's obvious that libertarian-style elbow room, or CHDO (the ability to have done otherwise), is unimportant or impossible, so I don't think requiredism is obvious.
It's also pretty clear that 100% determinism doesn't imply 100% reliability. If you reach to pick up a cup and knock it over, that could be a determined event... and of course, such errors are fairly common. Such considerations lead to an argument against reliabilism: reliable action only needs a good enough level of determinism, because we don't put decisions into practice with 100% success, and a good enough level of determinism is compatible with a small degree of indeterminism, which could allow indeterministic free will.
↑ comment by tslarm · 2023-05-06T08:56:54.410Z · LW(p) · GW(p)
If 'lawful' just means 'not completely random' then I agree. But I've never been convinced that there's no conceivable third option beside 'random' and 'deterministic'. Setting aside whether there's a non-negligible chance that it's true, do you think the idea that consciousness plays some mysterious-to-us causal role -- one that isn't accurately captured by our concepts of randomness or determinism -- is definitely incoherent?
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2023-05-06T14:50:44.097Z · LW(p) · GW(p)
Consciousness does play a mysterious-to-us-today causal role. It is mysterious, in that no-one has yet explained how there can possibly be such a thing as subjective experience, yet there it is. Perhaps someone might explain it in the future, but no-one has done so today. And it must be causal, not epiphenomenal, because the doctrine of epiphenomenalism just adds another layer of mysteriousness on top of that one, explaining the obscure by the more obscure. Epiphenomenalism is no more coherent an idea than p-zombies.
Randomness vs. determinism is a red herring. The universe has to be lawful for us to be able to direct it into desired configurations. Randomness of the kind that some current theories of quantum mechanics say is physically irreducible to determinism is an obstacle to doing that, but has no more significance than that. The same goes for chaos, which some put forward as a "third alternative" to randomness and determinism. But none of these matter for this view of what "free will" is.
I recommend that people do click through to the article by Eliezer that I linked before [LW · GW], if they haven't already. It's not very long, and any précis I could write would just be a repetition of it. Epiphenomenalism, btw, is described by the first diagram in that article.
Replies from: tslarm↑ comment by tslarm · 2023-05-06T15:35:32.452Z · LW(p) · GW(p)
And it must be causal, not epiphenomenal, because the doctrine of epiphenomenalism just adds another layer of mysteriousness on top of that one, explaining the obscure by the more obscure.
I don't follow this. Adding another layer of mysteriousness might not make for a satisfying explanation, but why must it be false? (I also think the p-zombie is a perfectly coherent idea, for whatever that's worth.)
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2023-05-06T15:49:13.891Z · LW(p) · GW(p)
When I say "must" I'm rounding to zero probabilities so negligible that they should not even come to my attention. Epiphenomenalism has consciousness be a real thing (that is what it is a theory of) but which has only a one-way connection to the rest of the universe, like a redundant gear in a clock that is not part of the train that drives the hands. Nowhere else do we see such a thing; in fact, by definition, we could not. The hypothesis is doing no work.
And I see p-zombies as another incoherent idea.
Replies from: tslarm↑ comment by tslarm · 2023-05-06T16:04:56.808Z · LW(p) · GW(p)
Nowhere else do we see such a thing; in fact, by definition, we could not.
I think the second clause implies that our not seeing it anywhere else provides no evidence. (Just for the obvious Bayesian reason.)
The hypothesis is doing no work.
I'm not sure why it has to. The 'consciousness is real' part isn't a hypothesis; it's the one thing we can safely take as an axiom. And the 'consciousness doesn't affect anything else' part is as reasonable a candidate for the null hypothesis as any other, as far as I can tell. Where does your prior against redundant gears come from?
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2023-05-06T16:13:17.688Z · LW(p) · GW(p)
What would legitimately draw the hypothesis to our attention? One of the things that we have experience of is being able to act in the world. Epiphenomenalism says that we do not act in the world, we are merely passengers without the power to so much as twitch our little fingers. This is so plainly absurd that only a philosopher could take it seriously, but as Cicero remarked more than two thousand years ago, no statement is too absurd for some philosophers to make.
Replies from: tslarm, tslarm↑ comment by tslarm · 2023-05-06T16:51:20.847Z · LW(p) · GW(p)
What would legitimately draw the hypothesis to our attention?
The fact that subjective experience exists and we haven't been able to figure out any causal role that it plays, other than that which seems to be explicable by ordinary physics (and with reference only to its ordinary physical correlates).
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2023-05-06T17:52:51.153Z · LW(p) · GW(p)
We have also not figured out how the physical brain does the things that we do.
Replies from: tslarm↑ comment by tslarm · 2023-05-06T17:54:48.974Z · LW(p) · GW(p)
Epiphenomenalism says that we do not act in the world, we are merely passengers without the power to so much as twitch our little fingers. This is so plainly absurd that only a philosopher could take it seriously
I've been trying to articulate why I find it hard to reconcile this with your endorsing Eliezer's requiredism, and this is the best I can do:
I don't think I see a meaningful difference between epiphenomenalism (i.e. brain causes qualia, qualia don't cause anything) and a non-eliminative materialism that says 'qualia and brain are not separate things; there's just matter, and sometimes matter has feelings, but the matter that has feelings still obeys ordinary physical laws'. In both cases, qualia are real but physics is causally closed and there's no room for libertarian free will.
If the quoted passage referred to that kind of materialism rather than to epiphenomenalism, it would be an argument for libertarianism. And I know that's not what you intended, but I don't fully understand what you do mean by it, given that it must not conflict with requiredism (which is basically 'compatibilism but more so').
Replies from: TAG, Richard_Kennaway↑ comment by TAG · 2023-05-06T18:20:25.408Z · LW(p) · GW(p)
In both cases, qualia are real but physics is causally closed and there’s no room for libertarian free will.
Libertarian FW isn't ruled out by the causal closure of the physical, it's ruled out by determinism (physical or not). Causal closure would rule out something like interactionist dualism, but that's fairly orthogonal to LFW...it could even be deterministic.
↑ comment by Richard_Kennaway · 2023-05-06T17:57:05.297Z · LW(p) · GW(p)
I don't see a difference between that argument and saying that a jumbo jet doesn't cause anything, only its atoms do.
Replies from: tslarm↑ comment by tslarm · 2023-05-06T18:03:16.323Z · LW(p) · GW(p)
I don't see a difference between that argument and saying that a jumbo jet doesn't cause anything, only its atoms do.
Happy to leave this here if you've had enough, but if you do want a response I'll need more than that to go on. I've been struggling to understand how your position fits together, and that doesn't really help. (I'm not even sure exactly what you're referring to as 'that argument'. Admittedly I am tired; I'll take a break now.)
I think what you may be seeing on LW is a reluctance to use the term "free will". I hope it is, since I think it's a terribly confusing term. I don't think "free will" is a coherent concept in an intuitive definition of the phrase. What would such a thing mean, and would you want what you've defined?
I think what people are usually thinking of as "free will" is better called self-determination; the ability to determine one's own future according to one's preferences. (This might include changing one's preferences, if one prefers to do that when finding certain types of new evidence.) This is the only type of "free will" I've ever thought or heard of that's worth wanting (see Dennett's book of the same name).
If we assume that I know about HA or HB, my choice of FA or FB is self-determination. If HA and HB are histories in which the person I'm dealing with did or didn't steal money in the past, and FA and FB are my choosing to do business with them or not, I want my beliefs about how to treat people to be the determining cause of my actions.
I'd say that I do have control of the future, because my brain, and specifically the parts that implement my beliefs about ethics and game theory, is what links the past HA to the future FA, just as I prefer to see such states linked.
I wouldn't say this is necessarily a compatibilist position; it's more of a position of "Are you sure you know what you mean by free will? You say it like it's something worth wanting, but I can't see how it would be if it's not compatible with determinism".
Like most philosophical questions, it boils down to defining the question. If you say exactly what you mean by free will, you'll have your answer.
Or at least an approximate answer, with details to be filled in by empirical observations. I actually disagree with Dennett that we have "all of the free will worth wanting". I think our cognitive biases prevent us from acting based on our beliefs an awful lot of the time. I'd say we have something like 50% of the self-determination worth wanting.
No, I don't think it bothers me and I'm not sure why it should.
When I'm making a choice CA, I indeed reveal that I'm in a universe where I'm choosing CA, and that the history HA which led to this has happened.
If I lived in a universe with an omniscient God who knew my every choice, then when I make a choice, I determine the knowledge of such God.
Maybe I'm missing something. Could you explain why it bothers you?
↑ comment by tslarm · 2023-05-05T15:51:12.673Z · LW(p) · GW(p)
From the responses I'm getting, I think I failed to communicate anything that doesn't quickly boil down to the usual crux(es) between compatibilists and incompatibilists. But to try to answer your question:
I think 'free will' in its usual sense requires some capacity to influence the future via choice-making. I thought that one of the standard compatibilist positions was that we do influence the future via our choices; both may be fully determined by initial conditions and physical laws, but when the chain of causation between past state X and future state Y runs through my brain in the relevant ways, it's still my choice and it's free in the sense that matters. And I didn't think most compatibilists would assert that we can influence the past.
But given determinism, the only objective difference between my capacity to influence the past and my capacity to influence the future seems to be the direction of the causal chain linking my choice to the other events: in one case it goes backward in time, in the other case forward. If there are no potential branching points in either direction, and my choice merely reveals which fixed causal chain is the real one, this difference in direction doesn't seem to me to bear the weight of distinguishing between my 'ability to influence the future' and my 'inability to influence the past'. So I was interested in how compatibilists thought about this distinction.
Replies from: Vladimir_Nesov, Ape in the coat↑ comment by Vladimir_Nesov · 2023-05-06T14:03:46.258Z · LW(p) · GW(p)
There is no substantial difference between controlling the future and controlling the past [LW · GW], just fewer opportunities for controlling the past, since that requires predictors of your decisions located in the past. If they are not already there, you can't place them there from the future without controlling the past.
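The claim that controlling the past requires a predictor located there can be made concrete with a toy Newcomb-style sketch (my illustration, not Nesov's; all names are hypothetical). The "past" contains a predictor that runs the same decision algorithm the agent later runs; that shared algorithm is the only handle the present decision has on the past:

```python
# Toy sketch: a decision "controls the past" only via a predictor that
# ran the same decision algorithm earlier. Hypothetical example, not a
# definitive formalization.

def my_algorithm():
    # The agent's decision procedure, viewed as an abstract algorithm.
    return "one-box"

# Past: a predictor instantiates the algorithm and fills the boxes accordingly.
prediction = my_algorithm()
opaque_box = 1_000_000 if prediction == "one-box" else 0

# Present: the agent instantiates the very same algorithm.
choice = my_algorithm()
payoff = opaque_box if choice == "one-box" else opaque_box + 1_000

# Because past predictor and present agent run one algorithm, the choice
# "controls" the past box contents. Remove the predictor, and the past
# offers no such handle -- which is the asymmetry Nesov points at.
```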
↑ comment by Ape in the coat · 2023-05-05T16:23:45.202Z · LW(p) · GW(p)
If there are no potential branching points in either direction, and my choice merely reveals which fixed causal chain is the real one, this difference in direction doesn't seem to me to bear the weight of distinguishing between my 'ability to influence the future' and my 'inability to influence the past'.
Why not? For me it seems exactly enough.
When we have a model of only logical connections between events, where HA, CA and FA are connected, we can't distinguish between affecting the past and affecting the future. But if we add the knowledge of the direction of causality, from HA to CA to FA, then we immediately can. Now it's clear that it's HA influencing CA influencing FA, and not vice versa.
Of course, in your mind, you can still feel as if you choose your past to be HA while making choice CA. But this is a map-territory confusion. Making the choice CA reveals to you that you have the past HA, but the past HA has already been there whether you know about it or not. Notice that the same can't be said about the future FA, because it's not there yet.
Replies from: TAG, tslarm↑ comment by TAG · 2023-05-05T19:28:06.323Z · LW(p) · GW(p)
What's the direction of causality? If there is a single inevitable future, then the future is symmetric to the past.
Replies from: Ape in the coat↑ comment by Ape in the coat · 2023-05-07T06:33:03.076Z · LW(p) · GW(p)
If there weren't a one-directional vector of causality, then it would be symmetrical. But as there is, it's not.
Replies from: TAG↑ comment by tslarm · 2023-05-05T16:52:23.205Z · LW(p) · GW(p)
Thanks for explaining. I don't have an answer to 'why not?' that will satisfy you; ultimately it'll just be another way of saying that compatibilist free will doesn't match the concept of free will that I have and that I think people tend to have pre-theoretically. (And it's different enough that I see it as a redefinition rather than a refinement.)
Replies from: Ape in the coat↑ comment by Ape in the coat · 2023-05-05T17:23:49.888Z · LW(p) · GW(p)
Hmm. It seems to fit the requirement you previously wrote.
I think 'free will' in its usual sense requires some capacity to influence the future via choice-making.
But I understand that this may not be enough for some people, even though I struggle to understand libertarian free will as a coherent concept.
Could you explain, then, why branching would feel like enough for you?
Is it because, if there are branches of the future that you can select between, while there are no branches of the past that you can select between, then there is an important difference between the way you interact with the future and with the past while making a choice?
↑ comment by tslarm · 2023-05-06T04:14:24.758Z · LW(p) · GW(p)
Hmm. It seems to fit the requirement you previously wrote.
Yes -- I didn't mean to deny that a causal link between my choice and future events counts as 'some capacity to influence the future via choice-making'. But I also didn't mean to suggest that it was a sufficient condition for (my concept of) free will, in cases where the choice is fully determined.
Could you explain, then, why branching would feel like enough for you?
Is it because, if there are branches of the future that you can select between, while there are no branches of the past that you can select between, then there is an important difference between the way you interact with the future and with the past while making a choice?
To me, free will means something like 'ability to choose between different possible futures'. And if there's no forward-in-time branching, there's only one possible future. (I admit that 'branching' is very under-defined here, and so is 'possible' -- but I think you get what I'm gesturing at, even if you doubt that it could ever be fully specified in a coherent way without ending up as plain old randomness.) Whereas backward-in-time branching seems like it would cash out as 'different possible pasts lead to the same future', or in other words, some information loss as time progresses. So I would say that free will requires forward-in-time branching, but not the absence of backward-in-time branching.
Replies from: Ape in the coat↑ comment by Ape in the coat · 2023-05-07T06:29:40.162Z · LW(p) · GW(p)
Thank you for your answer.
Do you feel that without possible futures it's not actually a choice? Like, imagine a piece of rock. There can be events E1, E2, E3 that happen to it at different moments of time. Being part of the causal universe, the rock partially causes these events. But it doesn't choose anything. Is it similar to how you feel about human choice under determinism?
As an intuition pump, imagine also a rock in a non-deterministic universe where either E3 or E3' happens after E2. And also imagine a human in a deterministic one. Would the indeterministic rock be more free than the deterministic one? Would it be more free than a human in a deterministic universe? Where does this extra freedom come from?
To me, free will means something like 'ability to choose between different possible futures'. And if there's no forward-in-time branching, there's only one possible future. (I admit that 'branching' is very under-defined here, and so is 'possible'
There is this intuitive, vague feeling that freedom of will has something to do with the possibility of alternatives. People feel it, but do not have an actual model of how it all works together. And the thing is, this intuition is true. Just, as it happens, not in the way people initially think it is.
Here is a neat compatibilist model, according to which you (and not a rock) have the ability to select between different outcomes in a deterministic universe, and which explicitly specifies what 'possible' means: possibility is in the mind, and so is the branching of futures. When you are executing your decision-making algorithm, you mark some outcomes as 'possible' and backpropagate from them to the current choice you are making. Thus, your mental map of reality has branches of possible futures between which you are choosing. By design, the algorithm doesn't allow you to choose an outcome you deem impossible. If you already know for certain what you will choose, then you've already chosen. So the initial intuition is kind of true. You do need 'possible futures' to exist so that you can have free will: to exercise your decision-making ability, which separates you from the rock. But the possibility, and the branching futures, do not need to exist separately from you. They can just be part of your mind.
And when you truly understand it, the alternatives seem kind of ridiculous, to be honest. Why would parts of your decision making process exist outside of your mind? What does it even mean for possible futures to exist separately from the mind that is modelling them? The whole point of the future is that it is not here yet, so even the 'actual future' doesn't exist at the moment of decision making. And then there is a whole other layer of non-existence with the concept of 'physical possibility'.
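For concreteness, the model above can be caricatured in a few lines of code. This is only an illustrative sketch of the idea, not anything specified in the thread; `model` and `utility` are invented stand-ins for the decider's world-model and preferences:

```python
def choose(current_state, actions, model, utility):
    """Toy version of the compatibilist picture: 'possible futures'
    exist only inside the decider, as simulated branches, and the
    choice is whichever imagined branch scores best."""
    # The branching happens here, in the mind: one simulated
    # future per action the decider deems possible.
    possible_futures = {a: model(current_state, a) for a in actions}
    # "Backpropagate" from the imagined outcomes to the present
    # choice: pick the action whose simulated future is preferred.
    return max(possible_futures, key=lambda a: utility(possible_futures[a]))

# Hypothetical usage: a one-step world where an action just adds to the state
best = choose(0, [-1, 1], model=lambda s, a: s + a, utility=lambda f: f)
print(best)  # 1
```

Everything here is deterministic, yet the function still ranges over several futures, because those futures are entries in its own map, not features of the territory.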
Replies from: Vladimir_Nesov, tslarm↑ comment by Vladimir_Nesov · 2023-05-07T09:28:01.722Z · LW(p) · GW(p)
What does it even mean for possible futures to exist separately of the mind that is modelling them?
Not physically, but platonic objects that serve as semantics for formal syntax make sense, and only syntax straightforwardly exists in the mind, not semantics it admits. So these are the parts of decision making that exist outside of your mind, in the same sense as mathematical objects exist outside of a mathematician's mind.
Replies from: Ape in the coat↑ comment by Ape in the coat · 2023-05-07T14:15:50.654Z · LW(p) · GW(p)
Good point. I'm equating logical existence with existence in one's mind in this post, but if we don't do that, then indeed we can say that possible futures exist platonically, just as mathematical objects do.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2023-05-07T16:04:45.089Z · LW(p) · GW(p)
I'm equating logical existence with existence in one's mind in this post
But then is the territory in the mind? The distinction is the mind's blindness to most of the details of the platonic objects it reasons about; thus they are a separate existence, only partially observed.
↑ comment by tslarm · 2023-05-07T07:51:50.976Z · LW(p) · GW(p)
I should clarify that I'm not arguing for libertarianism here, just trying to understand the appeal of (and sometimes arguing against) compatibilism.
(FWIW, I don't think libertarian free will is definitely incoherent or impossible, and combined with my incompatibilism that makes me in practice a libertarian-by-default: if I'm free to choose which stance to take, libertarianism is the correct one. Not that that helps much in resolving any of the difficult downstream questions, e.g. about when and to what extent people are morally responsible for their choices.)
Here is a neat compatibilist model, according to which you (and not a rock) have the ability to select between different outcomes in a deterministic universe, and which explicitly specifies what 'possible' means: possibility is in the mind, and so is the branching of futures. When you are executing your decision making algorithm, you mark some outcomes as 'possible' and backpropagate from them to the current choice you are making. Thus, your mental map of reality has branches of possible futures between which you are choosing. By design, the algorithm doesn't allow you to choose an outcome you deem impossible. If you already know for certain what you will choose, then you've already chosen. So the initial intuition is kind of true. You do need 'possible futures' to exist so that you can have free will: to perform the decision making that separates you from the rock. But the possibility and the branching futures do not need to exist separately from you. They can just be part of your mind.
I'm sorry to give a repetitive response to a thoughtful comment, but my reaction to this is the predictable one: I don't think I'm failing to understand you, but what you're describing as free will is what I would describe as the illusion of free will.
Aside from the semantic question, I suspect a crux is that you are confident that libertarian free will is 'not even wrong', i.e. almost meaninglessly vague in its original form and incoherent if specified more precisely? So the only way to rescue the concept is to define free will in such a way that we only need to explain why we feel like we have the thing we vaguely gesture at when we talk about libertarian free will.
If so, I disagree: I admit that I don't have a good model of libertarian free will, but I haven't seen sufficient reason to completely rule it out. So I prefer to keep the phrase 'free will' for something that fits with my (and I think many other people's) instinctive libertarianism, rather than repurpose it for something else.
Replies from: Ape in the coat, None↑ comment by Ape in the coat · 2023-05-07T09:06:04.985Z · LW(p) · GW(p)
I should clarify that I'm not arguing for libertarianism here, just trying to understand the appeal of (and sometimes arguing against) compatibilism.
The major appeal of compatibilism for me is that there is an actual model describing how freedom of will works: how it depends on the notion of possibility, how it allows one to distinguish between entities that have free will and entities that do not, how it corresponds to lay intuitions and usage of the term, and how it adds up to normality while solving practical matters such as questions of personal responsibility [LW · GW].
I've yet to see anything with a similar level of clarity from any other perspective on the matter.
So the only way to rescue the concept is to define free will in such a way that we only need to explain why we feel like we have the thing we vaguely gesture at when we talk about libertarian free will.
I don't think that the explanation I've given you can be said to be just about the feeling of free will. It's part of it. But also it explains the actual decision making algorithm, corresponding to these feelings. This algorithm is executed in reality. And having this algorithm executed on your brain gives new abilities compared to not having one (back to the person and rock example). Nor is this algorithm just about your beliefs. At this point, calling it "an illusion" seems very semantically weird to me, especially when there isn't a proper model of what the non-illusion is supposed to be.
Could you help me understand why your choice of definitions is like that? Do you call it illusion because the outcomes you deem possible are not meta-possible: only one will be the output of your decision making algorithm and so only one can really happen? But isn't it the same with indeterminism? Or is it because the possible futures in your mind do not correspond to something outside of it?
Replies from: tslarm, TAG↑ comment by tslarm · 2023-05-07T16:27:06.701Z · LW(p) · GW(p)
I agree that your model is clearer and probably more useful than any libertarian model I'm aware of (with the possible exception, when it comes to clarity, of some simple models that are technically libertarian but not very interesting).
Do you call it illusion because the outcomes you deem possible are not meta-possible: only one will be the output of your decision making algorithm and so only one can really happen?
Something like that. The SEP says "For most newcomers to the problem of free will, it will seem obvious that an action is up to an agent only if she had the freedom to do otherwise.", and basically I a) have not let go of that naive conception of free will, and b) reject the analyses of 'freedom to do otherwise' that are consistent with complete physical determinism.
I know it seems like the alternatives are worse; I remember getting excited about reading a bunch of Serious Philosophy about free will, only to find that the libertarian models that weren't completely mysterious were all like 'mostly determinism, but maybe some randomness happens inside the brain at a crucial moment, and then everything downstream of that counts as free will for some reason'.
But basically I think there's enough of a crack in our understanding of the world to allow for the possibility that either a) a brilliant theory of libertarian free will will emerge and receive some support from, or at least remain consistent with, developments in physics; or b) libertarian free will is real but just inherently baffling, like consciousness (qualia) or some of the impossible ontological questions.
Replies from: Ape in the coat↑ comment by Ape in the coat · 2023-05-16T08:25:01.808Z · LW(p) · GW(p)
Something like that.
But what's the difference between deterministic and indeterministic universes here? In either case we have a decision making algorithm. In either case there will be only one actual output of it. The only difference I see is something that can be called "unpredictability in principle" or "decision instability". If we run the exact same decision making algorithm again in the exact same context multiple times, in a deterministic universe we get the exact same output every time, while in an indeterministic universe the outputs will differ. So it leads us to this completely unsatisfying perspective:
'mostly determinism, but maybe some randomness happens inside the brain at a crucial moment, and then everything downstream of that counts as free will for some reason'.
Notice also that even if it's impossible to actually run the same decision making algorithm in the same context from inside this deterministic universe, this will still not be satisfying for your intuition. Because what if someone outside of the universe is recreating a whole simulation of our universe in exact detail, and is thus completely able to predict my decisions? It doesn't even matter whether these beings outside of the universe, with their simulation, exist. It's just the principle of the thing.
And the thing is, the intuition of requiring "decision instability" isn't that obvious to a newcomer to the problem of free will. It's a specific and weird bullet to swallow. How do people arrive at this? I suspect it goes something like this: when we imagine multiple exact replications of our decision making algorithm always coming to the same conclusion, it feels that we are not free to come to another conclusion, and thus that our decision making isn't free in the first place. I think this is a very subtle goalpost shift.
Originally we do not demand from the concept of freedom of will the ability to retroactively change our decisions. When you made a choice five minutes ago, you do not claim to lack free will unless you can time-travel back and make a different choice. We cannot change a choice we've already made. But that doesn't mean the choice wasn't free.
The situation with recreating your decision making algorithm in the exact same conditions as before is exactly that. You've already made the choice, and now you can't retroactively make it different. But this doesn't mean that the choice wasn't free in the first place.
But basically I think there's enough of a crack in our understanding of the world to allow for the possibility that either a) a brilliant theory of libertarian free will will emerge and receive some support from, or at least remain consistent with, developments in physics; or b) libertarian free will is real but just inherently baffling, like consciousness (qualia) or some of the impossible ontological questions.
I think there is a case for a "Generalised God of the Gaps" principle to be made here.
Replies from: TAG↑ comment by TAG · 2023-05-16T11:29:45.530Z · LW(p) · GW(p)
The only difference I see is something that can be called "unpredictability in principle" or "decision instability".
Note that there is no fact that decision-making actually is an algorithm: that's just an assumption rationalists favour.
Note that everyone subjectively experiences some amount of "decision instability" -- you might be unable to make a decision, or immediately regret one.
So the territory is much more in favour of decision instability than your favoured map.
a) a brilliant theory of libertarian free will will emerge and receive some support from, or at least remain consistent with, developments in physics;
Some libertarians already have mechanistic (up to indeterminism) theories, eg. Robert Kane.
↑ comment by TAG · 2023-05-07T12:22:53.351Z · LW(p) · GW(p)
The major appeal of compatibilism for me is that there is an actual model, describing how freedom of will works,
ie., it doesn't. Compatibilism has to manage expectations.
But also it explains the actual decision making algorithm, corresponding to these feelings.
Libertarians can say that free agency is the execution of an algorithm, too. It's just that it would be an indeterministic algorithm.
(Incidentally, no one has put forward any reason that any algorithm should feel like anything).
Do you call it illusion because the outcomes you deem possible are not meta-possible: only one will be the output of your decision making algorithm and so only one can really happen? But isn’t it the same with indeterminism?
No. An indeterministic coin-flip has two really possible outcomes.
Replies from: Ape in the coat↑ comment by Ape in the coat · 2023-05-07T14:12:59.293Z · LW(p) · GW(p)
Libertarians can say that free agency is the execution of an algorithm, too. It's just that it would be an indeterministic algorithm.
A libertarian algorithmic explanation has to be quite different from a compatibilist one. At least, it needs to account for the source of the connection between possible futures in your mind and 'real' possible futures, and for the nature of this 'realness'; it needs its own different way to reduce 'couldness' and 'possibility' to being; a model of what happens to all the alternative future branches; an account of how previously undeterminable events become determined by actually happening in the present, and of how a combination of determinable and indeterminable events produces free will. If you think these are answered questions, please make a separate post about it.
(Incidentally, no one has put forward any reason that any algorithm should feel like anything).
Not really relevant, but here is a reason for you. Feeling X is having a representation of X in your model of self. Some things are encoded to have a representation in it and some are not, depending on whether this information is deemed important for the central planning agent by evolution. Global decision making is extremely important, and maybe even the reason why the central planning agent exists in the first place, so the steps of this algorithm are encoded in the model of the self.
No. An indeterministic coin-flip has two really possible outcomes.
Call them real as much as you want; it's still either heads or tails once you've actually flipped the coin, not both.
ie., it doesn't.
Sigh. We've had multiple opportunities to discuss these issues before, and sadly you haven't managed to explain anything about libertarianism to my satisfaction and have kept talking past me. I'm not sure whether it's more your fault or mine, but in any case I'd like to discuss these questions with someone who I have more hope will understand my position and explain theirs. So this is my last reply to you in this thread. I repeat my request that you write your own post on the matter if you think you have something to say. Frankly, I find the fact that you write replies in a thread addressed to compatibilists a bit gauche.
Replies from: TAG↑ comment by TAG · 2023-05-07T15:53:02.040Z · LW(p) · GW(p)
A libertarian algorithmic explanation has to be quite different from a compatibilist one
Of course: they have to explain more.
At least, it needs to account for the source of the connection between possible futures in your mind and 'real' possible futures,
Of course, but that's just a special case of accurate map-making, not some completely unique problem.
the nature of this ‘realness’
Determinism is a special case of indeterminism. Indeterminism is tautologically equivalent to real possibilities. Since determinism is the special case, it is more in need of defense than the general one.
I explained that in my PM of 1st July 2022, which you never replied to.
I previously said that determinism is just the special case of indeterminism where all of the transitions have probabilities of exactly 1.0. Likewise, a causal diagram is a special case of a probabilistic state transition diagram.
http://www.yaldex.com/game-development/1592730043_ch44lev1sec4.html
If a causal diagram is a full explanation of determinism, a probabilistic state transition diagram is a full explanation of indeterminism.
What, in general, is the problem? If you know what a word means, it is usually easy to figure out what the opposite means. "Poor" means not-rich, so if you know what "rich" means, you know what "poor" means without additional information or additional concepts. Why would "indeterminism" be an exception to the rule?
how previously undeterminable events become determined by actually happening in the present, and of how a combination of determinable and indeterminable events produces free will. If you think these are answered questions, please make a separate post about it.
No libertarian makes the claim that undeterminable events become determined. Undetermined future events eventually happen, which does not make them causally determined in retrospect. (Once they have happened, we can determine their values, but that is a different sense of "determine".)
I have already explained that in my July 1st reply, quoting previous explanations I had already given.
It just seems that it’s the way the universe is. And while not a really satisfying answer it at least makes sense. If the universe allows causality—no surprise that we have causality. Compare to this: universe doesn’t allow some things to be determinable but we still somehow are able to determine them. - This seems as an obvious contradiction to me and is the reason I can’t grasp an understanding of indeterminism on a gut level
It’s not a contradiction because the two “determines” mean different things.
That's conflating two meanings of "determined". There's an epistemic meaning, by which you "determine" that something has happened: you gain positive or "determinate" knowledge of it. And there's causal determinism, the idea that a situation can only turn out or evolve in one particular way. They are related, but not in such a way that you can infer causal determinism from epistemic determinism. You can have determinate knowledge of an indeterminate coin flip.
Feeling X is having a representation of X in your model of self
No one has put forward a reason why having a representation of X should feel like anything.
Call them real as much as you want; it's still either heads or tails once you've actually flipped the coin, not both.
You are saying what...? That there cannot have been two possibilities, because there is only one actuality? But that there can be is the whole point of the word "possibility", even for in-the-mind possibilities.
We've had multiple opportunities to discuss these issues before, and sadly you haven't managed to explain anything about libertarianism to my satisfaction
You ignored my long message of July 1st. It's not that I am not trying to communicate.
↑ comment by [deleted] · 2023-05-07T13:34:16.847Z · LW(p) · GW(p)
Why do you think LFW is real? The only naturalistic frameworks that I've seen that support LFW are the ones that are like Penrose's Orch-OR, that postulate that 'decisions' are quantum (any process that is caused by the collapse of the quantum states of the brain). But it seems unlikely that the brain behaves as a coherent quantum state. If the brain is classical, decisions are macroscopic and they are determined, even in Copenhagen.
And in this sense, what you have is some inherent randomness within the decision-making algorithms of the brain, there's no special capability of the self to 'freely' choose while at the same time not being determined by their circumstances, there's just a truly-random factor in the decision-making process.
Replies from: tslarm↑ comment by tslarm · 2023-05-07T16:02:47.001Z · LW(p) · GW(p)
Why do you think LFW is real?
I'm not saying it's real -- just that I'm not convinced it's incoherent or impossible.
And in this sense, what you have is some inherent randomness within the decision-making algorithms of the brain
This might get me thrown into LW jail for posting under the influence of mysterianism, but:
I'm not convinced that there can't be a third option alongside ordinary physical determinism and mere randomness. There's a gaping hole in our (otherwise amazingly successful and seemingly on the way to being comprehensive) physical picture of reality: what the heck is subjective experience? From the objective, physical perspective there's no reason anything should be accompanied by feelings; but each of us knows from direct experience that at least some things are. To me, the Hard Problem is real but probably completely intractable. Likewise, there are some metaphysical questions that I think are irresolvably mysterious -- Why is there anything? Why this in particular? -- and they point to the fact that our existing concepts, and I suspect our brains, are inadequate to the full description or explanation of reality. This is of course not a good excuse for an anything-goes embrace of baseless speculation or wishful thinking; but the link between free will and consciousness, combined with the baffling mystery of consciousness (in the qualia sense), leaves me open to the possibility that free will is something weird and different from anything we currently understand and maybe even inexplicable.
It seems to me that your confusion is contending there are two past/present states (HA+A / HB+B) when in fact reality is simply H -> S -> C. There is one history, one state, and one choice that you will end up making. The idea that there is a HA and HB and so on is wrong, since that history H has already happened and produced state S.
Further, C is simply the output of your decision algorithm, which result we don't know until the algorithm is run. Your choice could perhaps be said to reveal something previously not known about H and S, but it doesn't distinguish between two histories or states, only your state of information about the single history/state that already existed. (It also doesn't determine anything about H and S that isn't "this decision algorithm outputs C under condition S".)
Indeed, even presenting it as if there is actually a CA and CB from which you will choose is itself inaccurate: you're already going to choose whatever you're going to choose, and that output is already determined even if you have yet to run the algorithm that will let you find out what that choice is. The future states CA and CB never actually exist either -- they are simulations you create in your mind as part of the decision algorithm.
Or to put it another way, since the future state C is a complex mix of your choice and other events taking place in the world, it will not actually match whatever simulated option you thought about. So the entire A/B disjunction throughout is about distinctions that only exist in your mental map, not in the territory outside your head.
So, the real world is H->S->C, and in your mind, you consider simulated or hypothetical A's and B's. Your decision process resolves which of A and B you feel accurately reflects H/S/C, but cannot affect anything but C. (And even there, the output was already determinable-in-principle before you started -- you just didn't know what the output was going to be.)
↑ comment by tslarm · 2023-05-07T07:25:17.579Z · LW(p) · GW(p)
It seems to me that your confusion is contending there are two past/present states (HA+A / HB+B) when in fact reality is simply H -> S -> C. There is one history, one state, and one choice that you will end up making. The idea that there is a HA and HB and so on is wrong, since that history H has already happened and produced state S.
I guess I invited this interpretation with the phrasing "there are two relevantly-different states of the world I could be in". But what I meant could be rephrased as "either the propositions 'HA happened, A is the current state, I will choose CA, FA will happen' are all true, or the propositions 'HB happened, B is the current state, I will choose CB, FB will happen' are all true; the ones that aren't all true are all false".
I'm not sure how much that rephrasing would change the rest of your answer, so I won't spend too much time trying to engage with it until you tell me, but broadly I'm not sure whether you are defending compatibilism or hard determinism. (From context I was expecting the former, but from the text itself I'm not so sure.)
Replies from: pjeby↑ comment by pjeby · 2023-05-07T10:24:31.501Z · LW(p) · GW(p)
I'm not sure how much that rephrasing would change the rest of your answer
Well, it makes the confusion more obvious, because now it's clearer that HA/A and HB/B are complete balderdash. This will be apparent if you try to unpack exactly what the difference between them is, other than your choice. (Specifically, the algorithm used to compute your choice.)
Let's say I give you a read-only SD card containing some data. You will insert this card into a device that will run some algorithm and output "A" or "B". The data on the card will not change as a result of the device's output, nor will the device's output retroactively cause different data to have been entered on the card! All that will be revealed is the device's interpretation of that data. To the extent there is any uncertainty about the entire process, it's simply that the device is a black box - we don't know what algorithm it uses to make the decision.
So, tl;dr: the choice you make does not reveal anything about the state or history of the world (SD card), except for the part that is your decision algorithm's implementation. If we draw a box around "the parts of your brain that are involved in this decision", then you could say that the output choice tells you something about the state and history of those parts of your brain. But even there, there's no backward causality -- it's again simply resolving your uncertainty about the box, not doing anything to the actual contents, except to the extent that running the decision procedure makes changes to the device's state.
broadly I'm not sure whether you are defending compatibilism or hard determinism
As other people have mentioned, rationalists don't typically think in those terms. There isn't actually any difference between those two ideas, and there's really nothing to "defend". As with a myriad other philosophical questions, the question itself is just map-territory confusion or a problem with word definitions.
Human brains have lots of places where it's easy to slip on logical levels and end up with things that feel like questions or paradoxes when in fact what's going on is really simple once you put back in the missing terms or expand the definitions properly. (This is often because brains don't tend to include themselves as part of reality, so this is where the missing definitions can usually be found!)
In the particular case you've presented, that tendency manifests in the part where no part of your problem specification explicitly calls out the brain or its decision procedures as components of the process. Once you include those missing pieces, it's straightforward to see that the only place where hypothetical alternative choices exist is in the decider's brain, and that no retrocausality is involved.
In the parts of reality that do not include your brain, they are already in some state and already have some history. When you make a decision, you already know what state and history exist for those parts of reality, at least to the extent that state and history is decision-relevant. What you don't know is which choice you will make.
You then can imagine CA and CB -- i.e., picture them in your brain -- as part of running your decision algorithm. Running this algorithm then makes changes to the history and state of your brain -- but not to any of the inputs that your brain took in.
Suppose I follow the following decision procedure:
- Make a list of alternatives
- Give them a score from 1-10 and sort the list
- Flip a coin
- If it comes up heads, choose the first item
- If it comes up tails, cross off that item and go back to step 3
None of these steps is retrocausal, in the sense of "revealing" or "choosing" anything about the past. As I perform these steps, I am altering H and S of my brain (and workspace) until a decision is arrived at. At no point is there an "A" or "B" here, except in the contents of the list.
Since there is a random element I don't even know what choice I will make, but the only thing that was "revealed" is my scoring and which way the coin flips went -- all of which happened as I went through the process. When I get to the "choice" part, it's the result of the steps that went before, not something that determines the steps.
This is just an example, of course, but it literally doesn't matter what your decision procedure is, because it's still not changing the original inputs of the process. Nothing is retroactively chosen or revealed. Instead, the world-state is being changed by the process of making the decision, in normal forward causality.
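The five-step procedure above can be written out directly. This is a hedged sketch: the scoring function is supplied by the caller as a stand-in, and one guard is added that the original list of steps leaves implicit (take the last remaining item rather than loop forever once everything else is crossed off):

```python
import random

def decide(alternatives, score):
    """The comment's coin-flip procedure: score and sort the
    alternatives, then flip a coin down the ranked list."""
    # Steps 1-2: score each alternative (1-10) and sort, best first.
    ranked = sorted(alternatives, key=score, reverse=True)
    # Steps 3-5: heads takes the top item; tails crosses it off
    # and flips again. Guard: if only one item is left, take it.
    while len(ranked) > 1:
        if random.random() < 0.5:  # heads
            break
        ranked.pop(0)              # tails: cross off the top item
    return ranked[0]

# Hypothetical usage: options scored in advance
print(decide(["thai", "pizza", "sushi"],
             {"thai": 9, "pizza": 6, "sushi": 4}.get))
```

Each coin flip alters the state of the list as it happens; nothing downstream reaches back and rewrites the flips, which is exactly the point being made.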
As soon as you fully expand your terms to any specific decision procedure, and include your brain as part of the definition of "history" and "state", the illusion of retrocausality vanishes.
A pair of timelines, showing two possible outcomes, with the decision procedure parenthesized:
- H -> S -> (HA -> SA) -> CA
- H -> S -> (HB -> SB) -> CB
The decision procedure operates on history H, state S as its initial input. During the process it will produce a new history and final state, following some path that will result in CA or CB. But CA and CB do not reveal or "choose" anything about the H or S that existed prior to beginning the decision procedure. Instead, the steps go forward in time creating HA or HB as they go along.
It's as if you said, "isn't it weird, how if I flip a coin and then go down street A or B accordingly, coming to whichever restaurant is on that street, that the cuisine of the restaurant I arrive at reveals which way my coin flip went?"
No. No. It's not weird at all! That's what you should expect to happen! The restaurant you arrived at does not determine the coin flip, the coin flip determines the restaurant.
As soon as you make the decision procedure a concrete procedure -- be it flipping a coin or otherwise -- it should hopefully become clear that the choice is the output of the steps taken; the steps taken are not retroactively caused by the output of the process.
The confusion in your original post is that you're not treating "choice" as a process with steps that produce an output, but rather as something mysterious that happens instantaneously while somehow being outside of reality. If you properly place "choice" as a series of events in normal spacetime, there is no paradox or retrocausality to be had. It's just normal things happening in the normal order.
LW compatibilism isn't believing that choice magically happens outside of spacetime while everything else happens deterministically, but rather including your decision procedure as part of "things happening deterministically".
Replies from: tslarm↑ comment by tslarm · 2023-05-07T15:42:38.276Z · LW(p) · GW(p)
This is hard to respond to, in part because I don't recognise my views in your descriptions of them, and most of what you wrote doesn't have a very obvious-to-me connection to what I wrote. I suspect you'll take this as further evidence of my confusion, but I think you must have misunderstood me.
The confusion in your original post is that you're not treating "choice" as a process with steps that produce an output, but rather as something mysterious that happens instantaneously while somehow being outside of reality.
No I'm not. But I don't know how to clarify this, because I don't understand why you think I am. I do think we can narrow down a 'moment of decision' if we want to, meaning e.g. the point in time where the agent becomes conscious of which action they will take, or when something that looks to us like a point of no return is reached. But obviously the decision process is a process, and I don't get why you think I don't understand or have failed to account for this.
LW compatibilism isn't believing that choice magically happens outside of spacetime while everything else happens deterministically, but rather including your decision procedure as part of "things happening deterministically".
I'm fully aware of that; as far as I know it's an accurate description of every version of compatibilism, not just 'LW compatibilism'.
retrocausal, in the sense of "revealing" or "choosing" anything about the past
How is 'revealing something about the past' retrocausal?
As other people have mentioned, rationalists don't typically think in those terms. There isn't actually any difference between those two ideas, and there's really nothing to "defend".
There is a difference: the meaning of the words 'free will', or in other words the content of the concept 'free will'. From one angle it's pure semantics, sure -- but it's not completely boring and pointless, because we're not in a situation where we all have the exact same set of concepts and are just arguing about which labels to apply to them.
the only place where hypothetical alternative choices exist is in the decider's brain
This and other passages make me think you're still interpreting me as saying that the two possible choices 'exist' in reality somewhere, as something other than ideas in brains. But I'm not. They exist in a) my description of two versions of reality that hypothetically (and mutually exclusively) could exist, and b) the thoughts of the chooser, to whom they feel like open possibilities until the choice process is complete. At the beginning of my scenario description I stipulated determinism, so what else could I mean?
Well, it makes the confusion more obvious, because now it's clearer that HA/A and HB/B are complete balderdash.
Even with the context of the rest of your comment, I don't understand what you mean by 'HA/A and HB/B are complete balderdash'. If there's something incoherent or contradictory about "either the propositions 'HA happened, A is the current state, I will choose CA, FA will happen' are all true, or the propositions 'HB happened, B is the current state, I will choose CB, FB will happen' are all true; the ones that aren't all true are all false", can you be specific about what it is? Or if the error is somewhere else in my little hypothetical, can you identify it with direct quotes?
Replies from: pjeby↑ comment by pjeby · 2023-05-07T23:37:15.594Z · LW(p) · GW(p)
Direct quotes:
Which seems to give me just as much control[4] over the past as I have over the future.
And the footnote:
whatever I can do to make my world the one with FA in it, I can do to make my world the one with HA in it.
This is only trivially true in the sense of saying "whatever I can do to arrive at McDonalds, I can do to make my world the one where I walked in the direction of McDonalds". This is ordinary reality and nothing to be "bothered" by -- which obviates the original question's apparent presupposition that something weird is going on.
If there's something incoherent or contradictory about "either the propositions 'HA happened, A is the current state, I will choose CA, FA will happen' are all true, or the propositions 'HB happened, B is the current state, I will choose CB, FB will happen' are all true; the ones that aren't all true are all false", can you be specific about what it is?
It's fine so long as HA/A and HB/B are understood to be the events and states during the actual decision-making process, and not referencing anything before that point, i.e.:
- H -> S -> (HA -> A) -> CA -> FA
- H -> S -> (HB -> B) -> CB -> FB
Think of H as events happening in the world, then written onto a read-only SD card labeled "S". At this moment, the contents of S are already fixed. S is then fed into a device which will then operate upon the data and reveal its interpretation of the data by outputting the text "A" or "B". The history of events occurring inside the device will be different according to whatever the content of the SD card was, but the content of the card isn't "revealed" or "chosen" or "controlled" by this process.
How is 'revealing something about the past' retrocausal?
It isn't; but neither is it actually revealing anything about the past that couldn't have been ascertained prior to executing the decision procedure or in parallel with it. The decision procedure can only "reveal" the process and results of the decision procedure itself, since that process and result were not present in the history and state of the world before the procedure began.
I don't know how to clarify this, because I don't understand why you think I am. I do think we can narrow down a 'moment of decision' if we want to, meaning e.g. the point in time where the agent becomes conscious of which action they will take, or when something that looks to us like a point of no return is reached. But obviously the decision process is a process, and I don't get why you think I don't understand or have failed to account for this.
Here is the relevant text from your original post:
State A: Past events HA have happened, current state of the world is A, I will choose CA, future FA will happen.
State B: Past events HB have happened, current state of the world is B, I will choose CB, future FB will happen.
These definitions clearly state "I will choose" -- i.e., the decision process has not yet begun. But if the decision process hasn't yet begun, then there is only one world-state, and thus it is meaningless to give that single state two names (HA/A and HB/B).
Before you choose, you can literally examine any aspect of the current world-state that you like and confirm it to your heart's content. You already know which events have happened and what the state of the world is, so there can't be two such states, and your choice does not "reveal" anything about the world-state that existed prior to the start of the decision process.
This is why I'm describing HA/A and HB/B in your post as incoherent, and assuming that this description must be based on an instantaneous, outside-reality concept of "choice", which seems to be the only way the stated model can make any sense (even in its own terms).
In contrast, if you label every point of the timeline as to what is happening, the only logically coherent timeline is H -> S -> ( H[A/B] -> A/B ) -> C[A/B] -> F[A/B] -- where it's obvious that this is just reality as normal, where the decision procedure neither "chooses" nor "reveals" anything about the history of the world prior to its beginning execution. (IOW, it can only "reveal" or "choose" or "control" the present and future, not the past.)
But if you were using that interpretation, then your original question appears to have no meaning: what would it mean to be bothered that the restaurant you eat at today will "reveal" which way you flipped the coin you used to decide?
Let's look at the mechanism closer:
"My future is FA, because my current state is A." This is standard causality: A causes FA by a sequence of steps that follow the laws of physics.
"My history was HA, because my current state is A." This is anthropic reasoning: technically, it was HA causing A by a sequence of steps, but if we ask "given that I am currently A, how does this limit my possible histories?" the answer might be that the only such history is HA.
These two are not the same, but an exact explanation would require spelling out the difference between the past and the future and how the arrow of time works -- which I am not really sure of myself, and which would probably involve statements about quantum physics and other complicated things.
It might also work differently in different universes. For example, imagine a deterministic universe of the Game of Life, assuming that it can contain intelligent beings similar to us. For a current state A, there is only one future FA. But there could have been multiple different histories HA that resulted in A. (Or perhaps there was no such history, and the universe was created just now.)
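The point that a deterministic present can be compatible with multiple pasts is easy to exhibit concretely. Here is a minimal sketch (the grid size and board states are arbitrary choices): two different Game of Life boards that step forward to the same state, so the current state underdetermines the history.

```python
def life_step(board, n=5):
    """One step of Conway's Game of Life on an n x n grid with a dead
    boundary; `board` is a frozenset of live (row, col) cells."""
    new = set()
    for r in range(n):
        for c in range(n):
            live = sum((r + dr, c + dc) in board
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                       if (dr, dc) != (0, 0))
            if live == 3 or (live == 2 and (r, c) in board):
                new.add((r, c))
    return frozenset(new)

# Two different histories converge on the same present state:
empty = frozenset()
lone_cell = frozenset({(2, 2)})  # a single live cell dies of isolation
assert life_step(empty) == life_step(lone_cell) == frozenset()
```

Running the step forward is a function; running it backward is only a relation, which is exactly the asymmetry between a unique future FA and possibly-multiple histories HA.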
The short version is that for practical purposes, the future and the past, causality and anthropic reasoning, seem to work differently.
Nobody, including me, can know for sure what the choice is until I make it, because the choice depends on chaos. Even if it's technically deterministic, it depends on how I resolve the noise that chaos emits. If there's true randomness in the world, that additionally helps me be the origin of the choice rather than deterministic noise. But even with only chaotic noise rather than true randomness, the rest of the universe cannot possibly know my choice until I stabilize on it: sensitive dependence on initial conditions means that the details determining how my brain wiggles through neural consensus space are unobservable to any other system, no matter how superintelligent, and the choice gets to depend on input from my entire brain. In this sense my brain -- my entire brain -- is still the causal bottleneck through which my choices flow; noise from chaos means that if I were about to choose in a way that mismatches my full network of preferences, my neurons get a chance to discuss it before settling. Biases and shortcut reasoning bypass this partially, of course.
As a result, even if technically my choice is strictly a logical consequence of my brain state, that logical consequence is not written to the universe until I resolve it, and the chaos means that every physical system besides my brain must retain logical uncertainty about my choice until it is resolved which way my neurons discuss and settle. In a fully deterministic universe, free will is logical hyperstition.
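The sensitive-dependence claim here can be illustrated with the standard toy example, the logistic map at r = 4 (the parameter values and starting points below are arbitrary): two states differing by one part in 10^12 become macroscopically different within a few dozen steps, so any outside observer with even slightly imperfect knowledge of the initial state loses track of the trajectory.

```python
def logistic(x, r=4.0):
    """One step of the logistic map, a standard chaotic system."""
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-12  # two states an outside observer cannot distinguish
max_gap = 0.0
for _ in range(60):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))

# The initially negligible gap has been amplified to macroscopic size:
assert max_gap > 0.1
```

The gap roughly doubles per step (the map's Lyapunov exponent is ln 2), so sixty steps is far more than enough to erase twelve decimal digits of precision.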
Some interesting resources on the topic. I have watched the videos, but I only skimmed the search results. Bulleted summaries via kagi.com's universal summarizer in 'key moments' mode.
- The critical brain hypothesis suggests that the brain operates near a critical point, similar to a second order phase transition.
- Second order phase transitions are characterized by a continuous change in properties, rather than a sudden jump.
- The Ising model is a simple system that undergoes a second order phase transition and exhibits scale-free behavior.
- Neuronal avalanches, or cascades of activity in networks of neurons, also exhibit scale-free behavior and are thought to be neural analogues of the Ising model.
- The balance between excitation and inhibition is a key factor in determining whether a neural network operates in a subcritical, critical, or supercritical state.
- The branch ratio, or the average number of neurons activated by a single upstream neuron, is a control parameter that governs the transition from decaying to amplifying activity in neural networks.
- The critical point is where the balance between excitation and inhibition is optimal, allowing for efficient information processing in the brain.
- Some links related to this summary on metaphor.systems - the ones I opened and skimmed:
- https://en.wikipedia.org/wiki/Critical_brain_hypothesis - very short
- Why Brain Criticality Is Clinically Relevant: A Scoping Review - interesting overview paper
- Self-organized criticality as a fundamental property of neural systems - covered by the video thoroughly, but solid
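The branching-ratio idea in the summary above can be sketched as a toy Galton-Watson process (the offspring distribution and all numbers are illustrative, not taken from any of the linked papers): with branching ratio m < 1 avalanches die out quickly, with m > 1 they run away, and m = 1 is the critical point in between.

```python
import random

def mean_avalanche_size(m, trials=2000, cap=10000, seed=0):
    """Average avalanche size for branching ratio m: each active unit
    activates each of 2 downstream units with probability m/2, so the
    expected number of successors per unit is m."""
    rng = random.Random(seed)
    total_size = 0
    for _ in range(trials):
        active, size = 1, 1
        while active and size < cap:
            active = sum(rng.random() < m / 2 for _ in range(2 * active))
            size += active
        total_size += size
    return total_size / trials

# Subcritical avalanches stay small; supercritical ones run away
# (theory: mean size is 1 / (1 - m) for m < 1, infinite for m >= 1).
assert mean_avalanche_size(0.5) < 5
assert mean_avalanche_size(1.2) > 100
```

At exactly m = 1 the mean diverges and avalanche sizes become scale-free, which is the regime the critical brain hypothesis is about.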
- Consciousness is a functional system that involves self-monitoring and hierarchical structure.
- Consciousness is a complex dynamical system that emerges from the brain.
- Embodiment plays a significant role in determining the kinds of conscious experience that we have.
- Mental states are physical states.
- There is an asymmetry between internal representations and what can be conveyed to others.
- Chaotic dynamical systems are deterministic and unpredictable, making the question of free will unanswerable.
- More metaphor.systems related results - the ones I opened and skimmed:
- https://desirism.fandom.com/wiki/Brain-state_theories - hmm interesting site, I wonder if it's any good
- https://desirism.fandom.com/wiki/Free_will
- The Hard Problem of Consciousness and the Free Energy Principle - not my favorite argument, the free energy principle is a coherent basis for doing further reasoning but is not in and of itself an argument for any particular view
- https://en.wikipedia.org/wiki/Neuroscience_of_free_will - interesting, lots of stuff in here I didn't know, I'll have to go back over this at some point
- https://plato.stanford.edu/entries/consciousness-neuroscience/ - looks very promising, but again, skimmed
If it's new to you, I'd also suggest an overview of chaos theory:
Or if 10 minutes is a bit long, here's a 1-minute animation showing divergence among chaotic trajectories that start out coherent. There's a moment at :26 where the pendulums lose sync, briefly all at the same edge of stability; however, this is not a chaotic system that seeks the edge of stability, and the pendulums quickly fall in different directions off the saddle point. In contrast, a system at the edge of chaos is on a saddle point at almost all times!
What compatibilists standardly mean by a free choice is a choice that is not forced or hindered. Neither of your choices is clearly free in that sense.
Which seems to give me just as much control[4] over the past as I have over the future.
Ok, but that could be zero, in both cases. Controlling the future, in the sense of being able to steer towards different possible futures, is specifically what's missing from compatibilist as opposed to libertarian free will.
I think what most people are trying to point at when they talk about free will is something along the lines of ‘ability to do otherwise’ in the sense that, when looking at a choice in retrospect, we would say a person ‘had the ability to do otherwise’ than they actually did.
Compatibilism can only account for the ability to have done otherwise in the weak sense that you weren't being forced to do one particular thing by another agent. Nonetheless, only one decision was ever possible, given determinism.
↑ comment by tslarm · 2023-05-05T15:59:13.921Z · LW(p) · GW(p)
What compatibilists standardly mean by a free choice is a choice that is not forced or hindered. Neither of your choices is clearly free in that sense.
To clarify: I meant to refer to a choice that is free from the kinds of hindrance or coercive influence that would render it 'unfree' in the compatibilist sense.
Ok, but that could be zero, in both cases. Controlling the future, in the sense of being able to steer towards different possible futures, is specifically what's missing from compatibilist as opposed to libertarian free will.
Are you a compatibilist yourself? Because I expected most compatibilists to hold that we do in some important sense, though of course not the libertarian one, influence the future via our choices. And I was looking to better understand why the sense in which we can influence the future is strong enough to be a good match for the concept 'free will', while the sense in which we can influence the past is presumably non-existent or too weak to worry about. (Or, if we can in some important sense influence the past, why that doesn't bother them.)
Replies from: TAG, None↑ comment by TAG · 2023-05-05T18:48:17.707Z · LW(p) · GW(p)
Are you a compatibilist yourself? Because I expected most compatibilists to hold that we do in some important sense, though of course not the libertarian one, influence the future via our choices.
I'm not a compatibilist, and I reject compatibilism because it can't explain that kind of issue.
There's a standard and often-repeated response made by the compatibilists here, along the lines of "the future depends on your decisions because it won't happen without you making the decision".
Under determinism, events still need to be caused, and your (determined) actions can be part of the cause of a future state that is itself determined, that has probability 1.0. Determinism allows you to cause the future, but it doesn't allow you to control the future in any sense other than causing it. It allows, in a purely theoretical sense, "if I had made choice b instead of choice a, then future B would have happened instead of future A" ... but without the ability to have actually chosen b.
↑ comment by [deleted] · 2023-05-05T18:56:12.867Z · LW(p) · GW(p)
I expected most compatibilists to hold that we do in some important sense, though of course not the libertarian one, influence the future via our choices. And I was looking to better understand why the sense in which we can influence the future is strong enough to be a good match for the concept 'free will', while the sense in which we can influence the past is presumably non-existent or too weak to worry about
What is missing here is a definition of 'people' to determine how we are effective causes of anything.
When you adopt a compatibilist view, you are already implicitly accepting a deflationary view of free will. There's no interesting sense in which people cause things to happen 'fundamentally' (non-arbitrarily -- it's a matter of setting boundaries); the idea of compatibilism is just to lay down a foundation for moral responsibility. The two sides are talking past each other, in a way. It becomes a discussion on semantics.
The different deflationary conceptions of free will are mostly just trying to repurpose the expression 'free will' to fit it for the needs of our society and our 'naive' understanding of people's behavior.
Sure, our predispositions bias the distribution of possible actions that we're gonna take such that, counterfactually, if we had different predispositions, we would have acted differently. That's all there is to it.
Another different thing is what is a mechanistic explanation of choice-making in our brains, but compatibilism is largely agnostic to that.
9 comments
Comments sorted by top scores.
comment by Rafael Harth (sil-ver) · 2023-05-05T19:12:52.587Z · LW(p) · GW(p)
LW is a known hotbed of compatibilism, so here's my question:
That's not been my impression. I would have summarized it more as "LW (a) agrees that LFW doesn't exist and (b) understands that debating compatibilism doesn't make sense because it's just a matter of definition"
Personally, I certainly don't consider myself a compatibilist (though this is really just a matter of preference since there are no factual disagreements). My brief answer to "does free will exist" is "no". The longer answer is the within-physics stick figure drawing.
Replies from: tslarm↑ comment by tslarm · 2023-05-06T05:34:44.797Z · LW(p) · GW(p)
Perhaps what's really going on to give me that impression is:
- LW is confident that libertarian free will is incoherent or at least non-existent
- so when people here talk (explicitly or implicitly) about exercising free will, they usually mean it in the compatibilist sense, and often treat that as the only possible sense
Which doesn't actually imply that a high proportion would identify themselves as compatibilists. (I thought there would be survey results to clear this up, but all I could find with a quick search was a very old one with only hard-to-decode raw data accessible.)
comment by Shmi (shminux) · 2023-05-05T03:58:28.169Z · LW(p) · GW(p)
I am not a compatibilist, so not my answer, but Sean Carroll says, in his usual fashion, that free will is an emergent phenomenon, akin to Dennett's intentional stance. This AMA has an in-depth discussion https://www.preposterousuniverse.com/podcast/2021/05/13/ama-may-2021/. I bolded his definition at the very end.
Replies from: TAG
whether you’re a compatibilist or an incompatibilist has nothing at all to do with whether the laws of physics are deterministic. I cannot possibly emphasize this enough. What matters is that there are laws. Whether those laws are deterministic or not makes zero difference to whether you’re a compatibilist or an incompatibilist. You both believe that there are laws, okay.
Whether you believe the fundamental laws are a pilot wave theory of quantum mechanics or a spontaneous collapse theory of quantum mechanics, one of which is deterministic and one of which is not, who cares? That doesn’t affect whether you’re a compatibilist or not, so don’t label yourself a hard determinist, that is not the point. You would still be anti-free will if you’re an incompatibilist, even if the GRW theory of quantum mechanics or Penrose’s theory turns out to be correct, even if determinism is not there. It’s the fact that there are laws that matters.
The compatibilist, what are you compatible about? You’re saying that the belief in free will, that describing human beings as agents that make decisions, that make choices, is compatible with human beings being made of either neurons or elementary particles or whatever you want that obey the laws of physics. A compatibilist says you can describe the world in different ways that are compatible with each other, even though they use very different vocabularies.
One way of describing the world is sort of at the microscopic level where you’re made of a bunch of particles. They’re obeying the laws of physics, whatever those laws are, deterministic or indeterministic, and there’s no free will there. Free will does not enter the Lagrangian for the standard model of particle physics, okay. No one thinks it does. And then there’s another level, there’s a biological level, and then there’s finally a human level where you have people, okay, and the compatibilist says, people make choices, this doesn’t seem like a very controversial thing to say, but apparently it is.
So here’s one way of thinking about it. Alice and Bob are in a car. Alice is driving, Bob is navigating with his Google Maps and they’re looking for a restaurant, and Bob says, oh, turn left up here at this intersection, the restaurant will be right there once we turn left. Alice turns left and there’s no restaurant there. And Alice says, what’s going on? You told me to turn left. I turned left because you told me to turn left. And Bob says, yeah, no, I knew the restaurant wasn’t there, but the laws of physics said that that would be what I would say, and that’s what you would do, so I’m not really to blame for anything that happened.
Nobody in their right mind talks that way. Everyone in the world who is not crazy says Alice made a choice to turn left, why? Because Bob told her to turn left and she trusted that Bob was going to give her the correct information, right? Bob made a choice to tell Alice something, why? Well, I don’t know, there’s something perverse in Bob’s mind that made him play a little game or something like that. Literally nobody refuses to talk that way in the real world. Now, there are people who pretend to not talk that way, they will say, oh no, Alice and Bob didn’t make any choices, but when it comes right down to it, these people are constantly trying to convince you to choose to not believe in free will.
So you can’t act that way, you can’t live in the world, because it’s not a good description of the world at the human level to act as if human beings don’t make choices. The reason why compatibilists think that it’s sensible to talk about human beings as agents that make choices is because that’s the best theory we have of people, and it literally is everyone’s theory. There’s no one who doesn’t have that theory, because it works, it’s true. And I did the podcast with Sam Harris a long time ago, so to be, that’s a little frustration that comes out from me, and I will vent my frustration here, and it’s not just Sam Harris, it’s many other people.
I don’t think I have ever met an incompatibilist who could correctly describe to me what compatibilists think. There’s… Only straw compatibilists live in the mind of incompatibilists, and the incompatibilists seem to think that if they really… If you really believe in the laws of physics, they can construct a logical cage to get you to admit that we are made of particles that obey the laws of physics. But I admit that, and the discussion with Sam was incredibly frustrating, ’cause that’s what he was trying to do, he was trying to say like, okay, if I knew all of the laws in all of the particles and I was Laplace’s demon, and he’s pushing against an open door. I admit all that.
If you describe the universe as microscopically in the laws of physics, then it obeys the laws of physics, and there’s no free will there, that’s just not the point from the point of view of a compatibilist. And this is why it’s very important to realize that Laplace’s demon doesn’t exist, and none of us is anywhere close to Laplace’s demon. Now, there are interesting questions to talk about that are not that question. The interesting questions are, and this gets into some of the questions here, what about the edge cases where it becomes less and less useful to describe people as agents making choices based on good reasons? Like what if you are a drug addict or you have some brain damage or something like that, and you’re just… You’re under a compulsion that forces you to do something.
And then I would say, indeed, it becomes less and less useful to describe that person as an agent making rational choices. And we do… We don’t describe those people as robustly as agents making rational choices, so that discussion, the practical level discussion about how to treat people who suffer in different ways from an inability to be a completely rational agent, which we all do, none of us are completely rational, there are degrees of it, so how do you deal with that in the real world? That’s an interesting discussion to have.
But this whether or not to label it free will discussion is to me the most boring thing in the world, because there aren’t people who don’t talk about other people as choice makers in my mind. And if you are someone who believes in the laws of physics deep down, but you say, but I will, of course, in my everyday life, I will talk about people making choices, then there’s a word for what you are, it’s called compatibilism. That’s what you are.
↑ comment by TAG · 2023-05-05T12:02:37.963Z · LW(p) · GW(p)
whether you’re a compatibilist or an incompatibilist has nothing at all to do with whether the laws of physics are deterministic.
Yes. It's a conceptual issue to do with what "free will" means ... and a physicist would have no special insight into that.
You’re saying that the belief in free will, that describing human beings as agents that make decisions, that make choices,
"Making choices" is setting the bar very low indeed. I don't think Carroll understands libertarians too well.
There are a number of main concerns about free will:
- Concerns about conscious volition: whether your actions are decided consciously or unconsciously.
- Concerns about moral responsibility, punishment and reward.
- Concerns about "elbow room": the ability to "have done otherwise", regret about the past, whether and in what sense it is possible to change the future.
And this is why it’s very important to realize that Laplace’s demon doesn’t exist, and none of us is anywhere close to Laplace’s demon.
Which has no bearing at all on the existence of determinism, or free will.
Determinism also needs to be distinguished from predictability. A universe that unfolds deterministically is a universe that can be predicted by an omniscient being which can both capture a snapshot of all the causally relevant events, and have a perfect knowledge of the laws of physics.
The existence of such a predictor, known as Laplace's demon, is not a prerequisite for the actual existence of determinism; it is just a way of explaining the concept. It is not contradictory to assert that the universe is deterministic but unpredictable.
If you are unable to make predictions in a deterministic universe, it is still deterministic, and you still lack the ability to have done otherwise in the libertarian sense, so the existence of free will still depends on whether that is conceptually important -- which can't be determined by predictability. Predictability does not matter in itself; it matters insofar as it relates to determinism.
comment by Signer · 2023-05-05T08:58:49.739Z · LW(p) · GW(p)
Even if you are not a compatibilist, there are certainly some non-free choices (maybe by non-humans, or whatever your criteria are) and they would exhibit the same problem.
Replies from: tslarm↑ comment by tslarm · 2023-05-05T09:05:00.180Z · LW(p) · GW(p)
Could you give an example? (I'm not trying to be a smartarse, just trying to make sure I understand the point you're making.)
Replies from: Signer↑ comment by Signer · 2023-05-05T19:50:38.482Z · LW(p) · GW(p)
For example, there is a rubber ball, and the world could be in two states:
State A: Past events HA have happened, current state of the world is A, the ball will fly up, future FA will happen.
State B: Past events HB have happened, current state of the world is B, the ball will fall down, future FB will happen.
When the ball moves, it chooses/reveals which of those two states of the world are reality. Which seems to give the ball just as much control over the past as it has over the future.
Replies from: pjeby, tslarm↑ comment by pjeby · 2023-05-06T21:50:52.663Z · LW(p) · GW(p)
The confusion is resolved if you realize that both A and B here are mental simulations. When you observe the ball moving, it allows you to discard some of your simulations, but this doesn't affect the past or future, which already were whatever they were.
To view the ball as affecting the past is to confuse the territory (which already was in some definite state) with your map (which was in a state of uncertainty re: the territory).