The scope of "free will" within biology?

post by Jay_Schweikert · 2011-06-29T06:34:09.175Z · LW · GW · Legacy · 30 comments

I've recently read through Eliezer's sequence on "free will", and I generally found it to be a fairly satisfying resolution/dissolution of the many misunderstandings involved in standard debates about the subject. There's no conflict between saying "your past circumstances determined that you would rush into the burning orphanage" and "you decided to rush into the burning orphanage"; what really matters is the experience of weighing possible options against your emotions and morals, without knowledge of what you will decide, rather than some hypothetical freedom to have done something different, etc. Basically, the experience of deciding between alternatives is real, don't worry too much about nonsense philosophical "free will" debates, just move on and live your life. Fine.

But I'm trying to figure out the best way to conceptualize the idea that certain biological conditions can "inhibit" your "free will," even under a reductionist understanding of the concept. Consider this recent article in The Atlantic called "The Brain on Trial." The basic argument is that we have much less control over ourselves than we think, that biology and upbringing have tremendous influences on our decisions, and that the criminal justice system needs to account for the pervasiveness of biological influence on our actions. On the one hand, duh. The article treats the idea that we are "just" our biology as some kind of big revelation that has only recently been understood:

The crux of the problem is that it no longer makes sense to ask, “To what extent was it his biology, and to what extent was it him?,” because we now understand that there is no meaningful distinction between a person’s biology and his decision-making. They are inseparable.

Is that because we've just now discovered reductionism? If we weren't "just" our biology, what would we be? Magic? Whatever we mean by consciousness and decision-making, I'm sure LW members pretty much all accept that they occur within physics. The author doesn't even seem to fully grasp this point himself, because he states at the end that there "may" be at least some space for free will, independent of our biology, but that we just don't understand it yet:

Free will may exist (it may simply be beyond our current science), but one thing seems clear: if free will does exist, it has little room in which to operate. It can at best be a small factor riding on top of vast neural networks shaped by genes and environment.

Obviously most LW reductionists are going to immediately grasp that "free will" doesn't exist in addition to our neural networks. What would that even mean? It's not "90% neural networks, 10% free will" -- the point is that the process of your neural networks operating normally on a particular decision is what we mean by "free will," at least when we care to use that concept. (If anyone thinks I've stated this incorrectly, feel free to correct me.)

But notwithstanding that a lot of this article sort of seems to be missing the point (largely because the author doesn't quite get how obvious the central premise really is), I'm still wrestling with how to understand some of its more specific points within the reductionist understanding of free will. For example, Charles Whitman, the shooter who killed 13 people from the UT Tower, had left a suicide note observing that he had recently been the "victim of many unusual and irrational thoughts" and requesting that his brain be examined. An autopsy revealed that he had a large brain tumor that had damaged his amygdala, thus causing emotional and social disturbances. Similarly, in 2000, a man named "Alex" (fake name, but real case) suddenly developed pedophilic impulses at age 40, and was eventually convicted of child molestation. It turns out he also had a brain tumor, and once it was removed, his sexual interests went back to normal. Later the pedophilic impulses returned, and the doctors discovered the tumor had grown back -- they removed it again, and his behavior went back to normal.

Obviously people like Charles and Alex aren't "victims of their biology" any more than the rest of us. Nobody's brain has some magic "free will" space that "exempts" the person from biology. But even under the reductionist conception of free will, it still seems like Charles and Alex are somehow "less free" than "normal" people. Even though everyone's decisions are, in some sense, determined by their past circumstances, there still seems to be a meaningful way in which Charles and Alex are less able to make decisions "for themselves" than those of us without brain tumors -- almost as if they had a tic which caused involuntary physical actions, but drawn out over time in patterns, rather than in single bursts. Or to put it differently, where the phrase "your past circumstances determine who you are when you face a choice, you are still the one that decides" holds true for most people, it seems like it doesn't hold true for them. At the very least, it seems like we would certainly be justified in judging Charles and Alex differently from people who don't suffer from brain tumors.

But if we're already committed to the reductionist understanding of free will in the first place, what does this intuition that Charles and Alex are somehow "less free" really mean? Obviously we all have biological impulses that make us more or less inclined to make certain decisions, and that might therefore impinge on some ideal conception of "control" over ourselves. But are these impulses qualitatively different from biological conditions that "override" normal decision-making? Is a brain tumor pushing on your amygdala more akin to prison bars that really do inhibit your free will in a purely physical sense, or just a more intense version of genes that give you a slight disposition toward violent behavior?

My intuition is that somewhere along the line here I may be asking a "wrong question," or importing some remnant of a non-biological conception of free will into my thinking. But I can't quite pin the issue down in a way that resolves it satisfyingly, so I was hoping that some of you might be able to help me reason through this appropriately. Thoughts?

30 comments

comment by Manfred · 2011-06-29T08:18:20.890Z · LW(p) · GW(p)

I'd like to point you, if you haven't already read it, to Yvain's excellent post on disease. In addition to being a metaphor - "free will" is like "disease" in many ways - it's also pretty directly applicable in this case. The causal arrow, rather than going back to you (your brain), goes to the disease instead, making it "not your fault." There are a couple of logical problems there, but hey, that's humans for you.

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2011-06-29T18:49:14.360Z · LW(p) · GW(p)

The disease post contains the following especially relevant argument:

So here, at last, is a rule for which diseases we offer sympathy, and which we offer condemnation: if giving condemnation instead of sympathy decreases the incidence of the disease enough to be worth the hurt feelings, condemn; otherwise, sympathize.... Yelling at a cancer patient, shouting "How dare you allow your cells to divide in an uncontrolled manner like this; is that the way your mother raised you??!" will probably make the patient feel pretty awful, but it's not going to cure the cancer. Telling a lazy person "Get up and do some work, you worthless bum," very well might cure the laziness. The cancer is a biological condition immune to social influences; the laziness is a biological condition susceptible to social influences, so we try to socially influence the laziness and not the cancer.

Translating this into the terms of crime by people with brain tumors: does knowing that they will go to prison if they commit the crimes their tumors are making them commit affect their decision? If people with brain tumors commit just as many crimes in countries where they will be imprisoned as in countries where they could make a successful insanity defense, then the punishment doesn't work as deterrence and should be dropped.
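To make the rule concrete, here is a minimal sketch of it as a toy cost-benefit check (the function name and the numbers are my own illustration, not Yvain's; the units are arbitrary):

    def respond(deterrence_benefit: float, hurt_cost: float) -> str:
        """Yvain's condemn-or-sympathize rule as a toy cost-benefit check.

        deterrence_benefit: how much condemnation reduces the incidence
        hurt_cost: the harm done by the condemnation itself
        Both values are illustrative assumptions, not measured data.
        """
        return "condemn" if deterrence_benefit > hurt_cost else "sympathize"

    # A tumor-driven crime barely responds to incentives, so the benefit is ~0:
    print(respond(deterrence_benefit=0.01, hurt_cost=1.0))  # sympathize
    # Ordinary laziness does respond to social pressure:
    print(respond(deterrence_benefit=5.0, hurt_cost=1.0))   # condemn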

comment by fubarobfusco · 2011-06-30T03:59:28.903Z · LW(p) · GW(p)

Consider three possible ways of treating Alex, whose pedophilia appears to be caused by a brain tumor:

  1. Lock him up for the rest of his life, far away from any children he might be tempted to assault.
  2. Punish him — whether by locking him up for some lesser time, or just by giving him forty lashes with a whip, or taking his stuff away — every time he assaults a child.
  3. Remove his brain tumor and check on him periodically to see if it's grown back.

If what you are interested in is preventing children from being assaulted, then either 1 or 3 is clearly superior to 2. Punishment is not reliable at preventing recidivism.

If what you are interested in is restoring Alex to his previous socially positive and non-pedophilic condition, 3 stands far above the other choices.

If what you are interested in is showing to the assaulted children and the public that the evil person who caused them harm is suffering retribution, then either 1 or 2 is clearly superior to 3. (Although in the case of 1, you probably have to lock him up forever somewhere unpleasant, not just somewhere without kids.) Free medical treatment is not a clear expression of vengeance.

comment by Morendil · 2011-06-29T07:35:34.056Z · LW(p) · GW(p)

The impulses of Charles and Alex are not qualitatively different, but this is a case of a quantitative difference being big enough that it amounts to a qualitative one!

Most of us have biologically determined impulses to do this or that, but also the (biologically determined) capacity to override these impulses in cases where it would be inappropriate to give in to them. For instance, we can hold a full bladder for some time if we have more important matters we should deal with - and we can give a fully reductionist account of the terms can and should in such a sentence.

In the cases of Charles and Alex, the underlying intuition is that we could "rewind the tape" a thousand or a million times to the initial conditions (that is, after they acquired the tumor but before they committed whatever act we regard as criminal) and, butterfly effect or not, they would still commit their crimes: their behavior is overwhelmingly determined by an identifiable causal factor, the tumor.

In more "ordinary" criminal cases, we imagine that there has been a "struggle of conscience" in the persons concerned that could have gone either way. That is, if you "rewound the tape" to a point in time after the person acquired whatever dispositions are considered relevant, but before they committed their crime, you would find that in some significant percentage of these hypothetical scenarios (or "possible worlds") their "conscience" won out and they refrained from the crimes in question, after taking into account facts known to them such as the severity of penalties should they be caught, the harm caused to others, and so on.

One of the factors influencing behavior in such cases, possibly enough to swing the decision, is the individual's awareness of his society's system of penalties and norms for calculating harms, so it makes sense to make this judicial system as clear, as legible, as well-known as possible so that it becomes the deciding factor in a greater number of such "struggles of conscience". But it's unreasonable to have this expectation in the case of a Charles or an Alex.
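One way to make this "rewind the tape" picture concrete is a toy Monte Carlo simulation; everything below, including the probabilities, is an invented illustration rather than a model of any real case:

    import random

    def offends(tumor: bool) -> bool:
        """One replay of the tape: does the impulse beat the restraint?

        With the tumor, the impulse is overwhelming; without it, the
        outcome hinges on a noisy struggle of conscience. All numbers
        here are illustrative assumptions.
        """
        impulse = 0.95 if tumor else random.random()
        restraint = 0.9 * random.random()  # conscience, fear of penalties, etc.
        return impulse > restraint

    def replay_rate(tumor: bool, trials: int = 100_000) -> float:
        """Fraction of replays of the tape in which the act is committed."""
        return sum(offends(tumor) for _ in range(trials)) / trials

    print(f"with tumor:    {replay_rate(True):.2f}")   # 1.00 -- every replay
    print(f"without tumor: {replay_rate(False):.2f}")  # ~0.55 -- could go either way

"Overwhelmingly determined by an identifiable causal factor" then cashes out as a replay rate pinned at 1 no matter what the noise does, whereas in the ordinary case the noise retains real influence.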

Replies from: Armok_GoB, Mass_Driver
comment by Armok_GoB · 2011-06-29T10:02:54.886Z · LW(p) · GW(p)

This seems very general: it can cover things like coercion, "stealing bread", and acting out of moral principle. Elegant.

comment by Mass_Driver · 2011-07-12T09:20:32.767Z · LW(p) · GW(p)

What is one to make of the metaphor "struggle of conscience" after a reductionist account of free will? I find most of the standard LW take on free will very persuasive, but it doesn't seem to dissolve the subjective sense that I have of myself as being a brain that sometimes makes an 'effort' to choose one option over another.

When I decided what college to go to, there was no struggle of conscience -- I was simply curious about which option would be more fun and more productive for me, and striving to sort through all of the relevant evidence to maximize my chances of correctly identifying that option.

On the other hand, when I, e.g., decide whether to hit the snooze button, there is a blatant struggle of conscience -- I am not curious at all about whether it would be more fun and productive for me to wake up; I know to a virtual certainty that it would in fact be more fun and productive for me to wake up, and yet it still requires a prolonged and intense effort, as if against some internal enemy, to accomplish the task. What is all of this, in reductionist terms?

comment by timtyler · 2011-06-29T11:35:55.828Z · LW(p) · GW(p)

Is a brain tumor pushing on your amygdala more akin to prison bars that really do inhibit your free will in a purely physical sense, or just a more intense version of genes that give you a slight disposition toward violent behavior?

It seems as though the issues from society's P.O.V. are whether it wants these folk on the streets, whether deterrence will affect their behaviour, and whether it can effectively diagnose and fix them before (or after) they have offended.

comment by malthrin · 2011-06-29T20:03:20.542Z · LW(p) · GW(p)

But if we're already committed to the reductionist understanding of free will in the first place, what does this intuition that Charles and Alex are somehow "less free" really mean?

Glib answer: it means your intuition is faulty.

More serious answer: make a testable prediction. What does it look like when someone is "less free", given the reinterpretation of "free will" as "a planning algorithm based on a 'normal' preference ranking of outcomes"? We may just be hiding the question inside the word 'normal' there, but let's run with it.

Here's an example prediction: someone who's "less free" is not susceptible to persuasion. In a standard H. sapiens, strong social pressure can dramatically reorganize a preference ranking. However, I wouldn't expect persuasion to have much effect in these tumor cases.
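Here is a quick sketch of what that prediction could look like as a toy model (the agents, weights, and pressures are all invented for illustration):

    from dataclasses import dataclass

    @dataclass
    class Agent:
        """Toy planner: prefers the act whenever the net pull is positive."""
        intrinsic_pull: float  # how strongly the impulse favors the act
        social_weight: float   # how much persuasion can reorder the ranking

        def prefers_act(self, social_pressure: float) -> bool:
            return self.intrinsic_pull - self.social_weight * social_pressure > 0

    typical = Agent(intrinsic_pull=1.0, social_weight=2.0)  # persuadable
    tumor = Agent(intrinsic_pull=10.0, social_weight=0.1)   # "less free"

    for pressure in (0.0, 1.0, 5.0):
        print(pressure, typical.prefers_act(pressure), tumor.prefers_act(pressure))
    # The typical agent flips under moderate pressure; the tumor agent never does.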

My prediction has some obvious holes in it. For example, cryonics advocates defy majority opinion because they're convinced that they're correct and the issue is that important. What I'm trying to convey is the technique - if you think a category boundary exists, but you're not sure precisely where to draw it, put your finger on the page and try to feel the contours of the problem.

Replies from: torekp
comment by torekp · 2011-07-04T21:05:54.701Z · LW(p) · GW(p)

Persuasion susceptibility is important. The English-derived common law defines "free will" in terms of rational behavioral control and knowledge of right and wrong. These tumor sufferers clearly have diminished rational control over their behavior.

In a separate comment, FAWS mentions "mental preference calculation the person is at least tangentially aware of, what sorts of factors appear in that calculation, and how much influence particular factors have on the outcome." That is another way of specifying rationality as required in the law.

comment by FAWS · 2011-06-29T10:46:22.139Z · LW(p) · GW(p)

"Free will" could be described as a matter of whether actions are based on some sort of mental preference calculation the person is at least tangentially aware of, what sorts of factors appear in that calculation, and how much influence particular factors have on the outcome. Punishing someone for behavior when there clearly wasn't any calculation open to influence through negative incentives involved doesn't make much sense. Ideally (if everyone was perfectly rational and perfectly transparent) the only punishment needed would be counter-factual.

comment by Armok_GoB · 2011-06-29T09:55:17.024Z · LW(p) · GW(p)

The versions of themselves with the tumours would, if able, always self-modify to no longer have the tumours, but the versions without tumours would never self-modify into having them.

comment by Peterdjones · 2011-06-29T16:34:34.741Z · LW(p) · GW(p)

Evolutionary psychologists dislike the distinction between "genetics" and "the environment" on the grounds that the environment is mostly biological, and, therefore, ultimately genetic. They prefer to distinguish between 'closed' biological things which vary only by genetic mechanisms (eg sight), and 'open' biological things which vary with the environment (eg language). A third level of variation could be self-modification. The environment versus genetics debate tends to be framed within the question of what does the determining. However, the ability to self-modify looks like what is needed for a compatibilist faculty of free will, and the ability to indeterministically self modify looks like what is needed for libertarian free will.

ETA Therefore, to say of a neural net that it is "biological" is missing the point. Is it self-modifiable, environmentally modifiable, or genetically fixed? Environmental modification can be taken for granted, since it would be little use otherwise. So we have free will in some sense if we have self-modification abilities. And neither how much we have, nor what kind we have, is given by the observation that we are "biological".

comment by RolfAndreassen · 2011-06-30T21:55:59.942Z · LW(p) · GW(p)

Isn't this just a question about what we intuitively consider "part of ourselves"? All physical things influence our actions in one way or another, if nothing else through their gravitational pull on the atoms of our brains. Some of them, however, are running algorithms that we consider part of ourselves, such as desire, memory, and logical thought. Usually those parts, ie normally-functioning brains, are the largest influence on our actions. In the case of Alex, a bit that we don't intuitively consider "part of Alex", namely the tumour, has acquired an unusually large influence on him; hence the feeling that he has "less free will". And I am not sure that this is wrong; it really does seem reasonable to consider the tumour as an outside actor with an unusually high degree of influence, and to act in accordance with the wishes of those physical things that we consider internal to Alex - ie the rest of his brain, which presumably wants the tumour gone.

comment by scientism · 2011-06-29T09:55:51.510Z · LW(p) · GW(p)

I think you have the beginnings of an answer in your characterisation of free will as "the process of your neural networks operating normally on a particular decision." The keyword being "normally." In both cases you cite the neural networks were functioning abnormally and in the case of Charles (at least) he exhibited some awareness that he was behaving abnormally. Now obviously, normal or abnormal, it's all just biology. The normative component comes in with the concept of free will itself. Generally our cognitive concepts only apply to normal cases - for the obvious reason that they developed to describe normal cases - and there are borderline cases where they begin to break down. It's probably best to think of the concept being inapplicable in those cases rather than imagining a tumour (or whatever) to be somehow infringing on free will.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-29T17:10:55.459Z · LW(p) · GW(p)

If the concept becomes inapplicable because the implementing mechanism is broken, it still makes sense to think of a tumour as encroaching on the mechanism.

comment by [deleted] · 2011-10-18T23:43:33.569Z · LW(p) · GW(p)

I have spent some time contemplating the same issue, and this is what I make of it:

Contra-causal free will is really about certain capabilities, that is, being able to foresee the consequences of doing X and to act on those predictions. Usually there is no point in applying justice to small children or the severely mentally retarded, because they have no idea what they are doing, and for justice to be effective the person doing X must understand that doing X will result in punishment, and be able to act upon that understanding. (I guess that you might induce some conditioning by punishing children and the mentally retarded, but I wouldn't call that justice.) In that sense justice is really pragmatic.

I would say that a person who is "less free" is someone who is less able to foresee the consequences of his actions, and less able to act according to those predictions. If tumor-Alex had in fact been systematically planning the pedophilia, rather than acting on something more compulsive, I would deem him guilty in a sense; but since removing the tumor would (hopefully) bring back good ol' Alex, there is really no point in applying any punishment.

Dennett wrote a book on contra-causal free will that dissolves the question quite nicely: Freedom Evolves, if you are interested.

comment by wtroyw · 2011-07-11T20:09:12.913Z · LW(p) · GW(p)

'But even under the reductionist conception of free will, it still seems like Charles and Alex are somehow "less free" than "normal" people.'

This is the sentence that really stuck with me. I disagree with the phrase "less free". If I don't believe in free will, then I wouldn't say "less free" in a figurative or literal sense. I would rather say "less normal" than "less free". Because, I think we can all agree, Charles and Alex are NOT normal. Like you said, it's not that they have LESS control than the rest of us; it's more that they're less able to conform to our social norms. It's possible that none of us has any control over what we do or what we think, but we DO ACT and we DO THINK. Our brains are built to learn, and if we have had the opportunity to learn but didn't, that is our responsibility. Responsibility and free will are not mutually exclusive. The illusion of free will may cause some people to think that responsibility doesn't always exist, but the truth is that the issue of responsibility is irrelevant to the issue of free will.

comment by endoself · 2011-06-30T02:37:46.015Z · LW(p) · GW(p)

the phrase "your past circumstances determine who you are when you face a choice, you are still the one that decides" holds true for most people, it seems like it doesn't hold true for them.

Alex before and after developing the tumor have different past circumstances, so we can regard them as different people if we see personal identity as a high-level concept that can be reduced to whatever low-level explanation gives the most understanding. Seeing them as different people, or at least not exactly the same person, seems to explain your intuition here.

Society likes the original Alex and wants to bring him back. Nothing more is gained from punishing Alex if we can restore his original state, unlike a hypothetical pedophile who would molest children irrespective of having a tumor and who needs to be deterred, rehabilitated, or kept away from society. That person's past and future selves would implement more similar algorithms than Alex's past and future selves.

comment by Perplexed · 2011-06-29T16:40:02.413Z · LW(p) · GW(p)

Similarly, in 2000, a man named "Alex" (fake name, but real case) suddenly developed pedophilic impulses at age 40, and was eventually convicted of child molestation. Turns out he also had a brain tumor, and once it was removed, his sexual interests went back to normal. ...

At the very least, it seems like we would certainly be justified in judging Charles and Alex differently from people who don't suffer from brain tumors.

Alex was not punished for the impulses he felt. Rather, he was punished for molesting a child. Judge Charles and Alex differently, if you wish, because you think you understand the causality better. But don't punish them differently. Actions should be punished, not inclinations. Punish because they did a bad thing, not because they are bad people.

If you justify punishment as deterrence, you are still justified in punishing Charles and Alex. Feel sorry for them, if you wish, but don't forget to also feel sorry for their victims. Life sucks sometimes.

Replies from: asr
comment by asr · 2011-06-30T07:48:56.678Z · LW(p) · GW(p)

In general, we don't punish people purely for actions. For most crimes, having the appropriate criminal intent is a required component of the crime. No mens rea, no crime. That's why we don't prosecute people who have a stroke and lose control of their car for vehicular assault.

I think the tumor case is analogous -- the tumor was an unexpected factor, not caused in any direct way by the perpetrator's prior choices or beliefs.

comment by falenas108 · 2011-06-29T08:58:39.455Z · LW(p) · GW(p)

Do we have a working definition of free will? I thought of a possible one while reading this article, but wanted to know if there already is a defined complexity at which a neural network is considered to have free will.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-29T14:46:24.549Z · LW(p) · GW(p)

There is a standard definition of libertarian free will, as follows:

Free Will is defined as "the power or ability to rationally choose and consciously perform actions, at least some of which are not brought about necessarily and inevitably by external circumstances".

Note that according to this definition:

  1. Free will is not deterministic behaviour. It is not driven by external circumstances.
  2. Nor is free will randomness or mere caprice. ("Rationally choose and consciously perform").
  3. Free will requires independence from external circumstances. It does not require independence or separation from one's own self. One's actions must be related to one's thoughts and motives.
  4. But not complete independence. Free will does not require that all our actions are free in this sense, only that some actions are not entirely un-free. ("...at least some of which...").
  5. Free will also does not require that any one action is entirely free. In particular, free will is not omnipotence: it does not require an ability to transcend natural laws, only the ability to select actions from what is physically possible.
  6. Free will as defined above does not make any assumptions about the ontological nature of the self/mind/soul. There is a theory, according to which a supernatural soul pulls the strings of the body. That theory is all too often confused with free will. It might be taken as an explanation of free will, but it specifies a kind of mechanism or explanation — not a phenomenon to be explained.

Although this definition can be fulfilled in a physical universe, it can't be fulfilled in just any physical universe, since it requires indeterminism. If determinism holds, a choice has to be made between adopting compatibilism (basically a legalistic definition) or abandoning free will entirely. The observation that it is all physics/biology does not constrain things down very much.

Replies from: Morendil, Morendil
comment by Morendil · 2011-06-29T15:26:42.197Z · LW(p) · GW(p)

The first question that comes to mind is "external to what?" What is the implied boundary?

For instance, a reasonable answer might be "external to our skulls" - but then this excludes, say, a brain tumor from "external circumstances".

I would also take issue with the term "rationally" - if you interpret this liberally, say "a choice is rational if it makes a somewhat appropriate contribution to some of your projects or preferences" then I wouldn't object, but if you mean "rational" in the sense of "not tainted by any known bias" you will end up excluding almost every decision made by almost every member of the human species from your definition of "free will".

(I'm a compatibilist - I've been brought round to that point of view by Dennett's writings - and I guess many LWers would identify with that point of view as well.)

Replies from: Peterdjones, Peterdjones
comment by Peterdjones · 2011-06-29T19:24:13.816Z · LW(p) · GW(p)

I'm a compatibilist - I've been brought round to that point of view by Dennett's writings

So the ability of your decisions to affect the future is not an aspect of will worth having?

Replies from: Morendil
comment by Morendil · 2011-06-29T19:36:25.607Z · LW(p) · GW(p)

Of course my decisions affect the future. It is precisely because the future is determined rather than undetermined that I've come to expect certain reliable effects from deciding things. (For instance, my deciding things reliably causes nerve impulses to be transmitted to my muscles, which transduce my decisions, otherwise mere neural firings, into macroscopic effects.)

If things happened one way or the other "at random" the capacity that we call "free will" couldn't evolve in the first place. Rather, when I drop something midair it falls, always (or near as dammit), and I'm equipped with capacities which exploit this regularity.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-30T00:02:53.428Z · LW(p) · GW(p)

But we don't need strict determinism, since actions and results do not follow with complete reliability anyway; and we don't need determinism at the decision-making stage in order to have it in the carrying-out stage. In two-stage theories of libertarian free will, a more-or-less indeterministic decision-making stage is followed by a more deterministic phase.
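A minimal sketch of that two-stage structure, as I understand the general idea (the function and values below are my own illustration, not taken from any particular two-stage theory):

    import random

    def two_stage_decide(options: list, utility) -> str:
        """Stage 1: indeterministically generate which alternatives come to
        mind at all (noise enters only here). Stage 2: deterministically
        select among the generated alternatives by evaluating them."""
        considered = random.sample(options, k=min(3, len(options)))
        return max(considered, key=utility)

    # Which options get generated can vary from run to run, but the choice
    # among them is reliably coupled to the agent's values.
    values = {"work": 3, "nap": 1, "exercise": 4, "read": 2, "procrastinate": 0}
    print(two_stage_decide(list(values), utility=values.get))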

Replies from: Morendil
comment by Morendil · 2011-07-01T18:56:13.129Z · LW(p) · GW(p)

That's an interesting model, that I hadn't come across before (even in Dennett, or not with that much clarity); thanks.

OTOH, that doesn't have much bearing on the question of "the ability of your decisions to affect the future" - since even in a deterministic universe my decisions do affect the future (in the sense above), I have no need to hope for indeterminism, in order to have a justified sense of my own free will.

In fact indeterminism seems to me, on the face of it, to undermine my free will insofar as it undermines the coupling between my decisions and their consequences. But I agree that in this two-stage model indeterminism could have upsides and no downsides, if we knew for sure that its effects were (somehow) confined to the alternative-generation phase.

The bad news, though, is that we'd need to come up with an explanation for why an otherwise deterministic universe should contain an indeterminate process at precisely that location.

Replies from: Peterdjones
comment by Peterdjones · 2011-07-01T21:37:30.068Z · LW(p) · GW(p)

since even in a deterministic universe my decisions do affect the future (in the sense above), I have no need to hope for indeterminism, in order to have a justified sense of my own free will.

In a deterministic universe, you cannot say that your decisions affect the future in the sense that the future would be different if they had been different... because they would not have been different. In a deterministic universe, decisions are part of the chain of cause and effect, but not a special part.

In fact indeterminism seems to me, on the face of it, to undermine my free will insofar as it undermines the coupling between my decisions and their consequences.

That is not the case in two stage theories.

But I agree that in this two-stage model indeterminism could have upsides and no downsides, if we knew for sure that its effects were (somehow) confined to the alternative-generation phase. The bad news, though, is that we'd need to come up with an explanation for why an otherwise deterministic universe should contain an indeterminate process at precisely that location.

Indeterminism may be readily available as noise in the nervous system, and indeterministic behaviour is useful in some game-theoretic situations. How can a predator predict what you will do next, if you yourself do not?

comment by Peterdjones · 2011-06-29T16:00:59.280Z · LW(p) · GW(p)

This definition sets out what free will is as a capacity or faculty, not, as in compatibilist definitions, which actions are free. Libertarians still think some actions are unfree in the compatibilist/legal sense.

For instance, a reasonable answer might be "external to our skulls" - but then this excludes, say, a brain tumor from "external circumstances".

Externality needs to take history into account. To act on a hypnotic suggestion is not to act freely, even though the proximate cause of your actions is inside your skull, because it got into your head in a way that bypassed rational judgement.

comment by Morendil · 2011-06-29T15:10:09.035Z · LW(p) · GW(p)

Note that this is certainly not the definition most LWers would recognize as "standard". (I'll comment separately on some questions raised by the above - OTOH, I'd prefer we avoid definitional disputes.)