My summary of Eliezer's position on free will

post by Solvent · 2012-02-28T05:53:10.432Z · LW · GW · Legacy · 100 comments


I'm participating in a university course on free will. On the online forum, someone asked me to summarise Eliezer's solution to the free will problem, and I did it like this. Is it accurate in this form? How should I change it?

 

“I'll try to summarise Yudkowsky's argument.

As Anneke pointed out, it's kinda difficult to decide what the concept of free will means. How would particles or humans behave differently if they had free will compared to if they didn't? It doesn't seem like our argument is about what we actually expect to see happening.

This is similar to arguing about whether a tree falling in a deserted forest makes any noise. If two people are arguing about this, they probably agree that if we put a microphone in the forest, it would pick up vibrations. And they also agree that no-one is having the sense experience of hearing the tree fall. So they're arguing over what 'sound' means. Yudkowsky proposes a psychological reason why people may have that particular confusion, based on how human brains work.

So with respect to free will, we can instead ask the question, “Why would humans feel like they have free will?” If we can answer this well enough, then hopefully we can dissolve the original question.

It feels like I choose between some of my possible futures. I can imagine waking up tomorrow and going to my Engineering lecture, or staying in my room and using Facebook. Both of those imaginings feel equally 'possible'.

Humans execute a decision-making algorithm fairly similar to the following one.

  1. List all your possible actions. For my lecture example, that was “Go to lecture” and “Stay home.”

  2. Predict the state of the universe after pretending that you will take each possible action. We end up with “Buck has learnt stuff but not Facebooked” and “Buck has not learnt stuff but has Facebooked.”

  3. Decide which is your favourite outcome. In this case, I'd rather have learnt stuff. So that's option 2.

  4. Execute the action associated with the best outcome. In this case, I'd go to my lecture.

Note that the above algorithm can be made more complex and powerful, for example by incorporating probability and quantifying your preferences as a utility function.
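
A minimal Python sketch of that algorithm (the actions, predicted outcomes, and preference scores below are made-up illustrations, not anything from Yudkowsky's own writing):

```python
# A toy version of the four-step decision algorithm above.
# The actions, predicted outcomes, and scores are illustrative assumptions.

def predict_outcome(action):
    """Step 2: pretend the action is taken and predict the resulting world."""
    return {
        "go to lecture": "learnt stuff, no Facebook",
        "stay home": "no learning, Facebooked",
    }[action]

def preference(outcome):
    """Step 3: score each imagined outcome (a toy stand-in for a utility function)."""
    return {"learnt stuff, no Facebook": 10, "no learning, Facebooked": 3}[outcome]

def decide(possible_actions):
    """Steps 1 and 4: consider every listed action, return the best one to execute."""
    return max(possible_actions, key=lambda a: preference(predict_outcome(a)))

print(decide(["go to lecture", "stay home"]))  # -> go to lecture
```

Note that `decide` only ever returns one answer for a given input, yet it has to evaluate every listed action to get there; that evaluation step is the "treating them as possible" described below.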

As humans, our brains need the capacity to pretend that we could choose different things, so that we can imagine the outcomes, and pick effectively. The way our brain implements this is by considering those possible worlds which we could reach through our choices, and by treating them as possible.

So now we have a fairly convincing explanation of why it would feel like we have free will, or the ability to choose between various actions: it's how our decision making algorithm feels from the inside.”


Comments sorted by top scores.

comment by Vladimir_Nesov · 2012-02-28T10:18:31.622Z · LW(p) · GW(p)

As humans, our brains need the capacity to pretend that we could choose different things

This seems wrong; "capacity to pretend" is not it. Rather, we don't know what we'll do; there is no need to pretend that we don't know. What we know (can figure out) is what consequences are anticipated under assumptions of making various hypothetical actions (this might be what you meant by "pretend").

(It's a bit more subtle than that: it's possible to anticipate the decision, but this anticipation doesn't, or shouldn't, play a direct role in selecting the decision, it observes and doesn't determine. So it's possible to know what you'll most likely do without having decided it yet.)

Replies from: torekp, Troshen
comment by torekp · 2012-03-01T03:14:44.878Z · LW(p) · GW(p)

What I think he means by "pretend" is: the capacity to pretend that we are choosing different things; i.e., running each scenario in our heads.

comment by Troshen · 2012-03-01T16:09:18.831Z · LW(p) · GW(p)

This seems to be a much better description of what's going on in my mind when I make a decision. I disagree with Solvent that we have a deterministic algorithm that has a single outcome.

What we have are conflicting priorities. In the case of running over the squirrel they could be, for example:

  • Being angry enough to want to hurt something weaker than yourself.

  • Not wanting to jerk the steering wheel or brake abruptly while driving, for safety, when a squirrel runs out into the road in front of your car.

  • Wanting to protect animal life.

Other than by experience, you don't know which priority has the greatest weight. Say "Wanting to protect animal life" turns out to have the greatest weight. Then you hit the brakes.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-03-01T16:28:46.824Z · LW(p) · GW(p)

I disagree with Solvent that we have a deterministic algorithm that has a single outcome.

Not knowing the outcome doesn't mean it's not there. Presence of many "conflicting" parts doesn't mean that their combination doesn't resolve to a single decision deterministically.

Replies from: Troshen
comment by Troshen · 2012-03-01T17:04:12.058Z · LW(p) · GW(p)

Although I see what you're saying, I still disagree. I don't think that we are just inside the algorithm feeling it happen, not knowing the outcome and merely observing.

I definitely have a decision loop and input into the process in my own mind. Even if it's only from outside the loop: "Dang, I made a bad decision that time. I'll make a better one next time," and then doing so.

And until I take physical outward action, the decision algorithm isn't finished. So people can be paralyzed by indecision when competing priorities have closely similar weights. Or they can ignore the choice, never take it, and move on to other activities that leave the previous choice algorithm nebulous and never finished.

I would like to give a more detailed refutation of the idea that our minds have deterministic algorithms. Until you take action it's undetermined, and I think there's choice there. But I don't have the background or the language.

Can anyone suggest further reading?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-03-01T17:30:13.652Z · LW(p) · GW(p)

See the free will sequence: problem statement, solution. You do determine what happens, but you do that as part of physics, which could just as well be deterministic, with your decision being determined by that part of the physical world that is you. The decision itself, while it's not made, is not part of current-you, but it's determined by current-you, and it is part of the physical world (in the future of current-you), where current-you can't observe it.

comment by ArisKatsaris · 2012-02-28T10:04:28.633Z · LW(p) · GW(p)

I don't think you understand EY's position at all.

The actual argument can be summarized more like this: "If free will means anything, then it must mean our algorithm's ability to determine our actions. Therefore free will is not only compatible with determinism, it's absolutely dependent on determinism. If our mind's state didn't determine our actions, it would be then that there would be no possibility of free will.

The sort of confusion that takes free will to be incompatible with determinism derives from people picturing their selves as being restrained by physics instead of being part of physics."

Replies from: Luke_A_Somers, buybuydandavis, Peterdjones
comment by Luke_A_Somers · 2012-02-28T15:20:35.646Z · LW(p) · GW(p)

I'd take that, minus the crucial dependence on determinism. A system can contain stochastic elements and yet be compatible with free will.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-02-29T00:00:59.715Z · LW(p) · GW(p)

The more the randomness in the system, the less your actions are determined by your mind's state, the less you control your actions.

Replies from: Peterdjones, Luke_A_Somers
comment by Peterdjones · 2013-01-23T16:39:59.874Z · LW(p) · GW(p)

It's not obvious that a deterministic system, such as a billiard ball, is in control of its actions just because it is deterministic. Control is making choices between possible courses of action. If a system is deterministic, the possibilities it considers are merely hypothetical; if it is indeterministic, they are real possibilities that could actually happen. It is not at all clear that the latter is a lack of control.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-01-23T17:17:12.915Z · LW(p) · GW(p)

It's not obvious that a deterministic system, such as a billiard ball, is in control of its actions just because it is deterministic.

I believe the billiard ball to be a meaningless analogy because billiard balls have no minds, make no considerations over futures, and have no preferences over futures either. As such billiard balls do not "choose" and do not have wills (free or otherwise).

Control is making choices between possible courses of action.

By "making choices between" do you mean just "having a conscious preference between" or do you mean "affecting the probability (positively or negatively) of each possible action occuring, according to said conscious preferences"?

If a system is deterministic, the possibilities it considers are merely hypothetical; if it is indeterministic, they are real possibilities that could actually happen.

Consider the configuration space of the preferences of a conscious mind A, and the configuration space of action B. For A to control B means for the various possible configurations in the preferences of Mind A to constrain differently the various probability weights in the configuration space of action B.

E.g. if the configuration of my mind is that I'm a "Fringe" fan, this makes it directly more likely that I'll watch the Fringe series finale. So I have control over my personal action of watching the series.

On the other hand I can't control my heartbeat directly. It is still deterministic in a physical sense (indeed more so than me watching Fringe), but its probability is unconstrained by my preferences. So again my conscious mind's state A doesn't constrain the configuration space of B, and I don't have control over my heartbeat.

Lastly, let's consider an effectively indeterministic system like e.g. dice (use quantum dice for the nitpickers). I can throw the dice, and I can hope for a particular number, but "indeterministic" pretty much means by definition that their result isn't determined by a previous state, which includes my preferences. So I have no control over the dice's outcome, no matter how I would prefer one possible state over another.

So, yeah: determinism by itself isn't sufficient -- the core of the issue is how much my preferences determine the probability weights in the configuration space of actions.
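
A toy numerical sketch of that notion of control, using the three examples above; all the probabilities are invented for illustration:

```python
# Control as: how much does varying the mind-state A shift the probability of B?
# All numbers are illustrative assumptions.
P_watch_finale = {"fan": 0.9, "not fan": 0.1}          # preferences constrain the action
P_heartbeat = {"fan": 1.0, "not fan": 1.0}             # deterministic, preference-independent
P_die_shows_6 = {"fan": 1 / 6, "not fan": 1 / 6}       # indeterministic, preference-independent

def control(prob_by_mind_state):
    """Crude measure: spread of the action's probability across mind-states."""
    return max(prob_by_mind_state.values()) - min(prob_by_mind_state.values())

for name, table in [("watch finale", P_watch_finale),
                    ("heartbeat", P_heartbeat),
                    ("die shows 6", P_die_shows_6)]:
    print(name, control(table))
# Only "watch finale" gets a nonzero score: the mind's configuration
# constrains that action's probability and neither of the other two.
```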

Replies from: Peterdjones
comment by Peterdjones · 2013-01-23T17:47:45.865Z · LW(p) · GW(p)

I believe the billiard ball to be a meaningless analogy because billiard balls have no minds, make no considerations over futures, and have no preferences over futures either. As such billiard balls do not "choose" and do not have wills (free or otherwise).

That's kind of what I was getting at.

By "making choices between" do you mean just "having a conscious preference between" or do you mean "affecting the probability (positively or negatively) of each possible action occuring, according to said conscious preferences"?

Neither. The point I went on to is that both count.

Lastly, let's consider an effectively indeterministic system like e.g. dice (use quantum dice for the nitpickers). I can throw the dice, and I can hope for a particular number, but "indeterministic" pretty much means by definition that their result isn't determined by a previous state, which includes my preferences.

That isn't an argument against indeterminism-based FW, if it was meant to be.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-01-23T18:12:58.199Z · LW(p) · GW(p)

Neither.

Can you then explain what you mean by the phrase "making choices between"?

That isn't an argument against indeterminism-based FW, if it was meant to be.

I'll resummarize my point, and I hope you explain where you disagree with it this time (frankly, this style of discussion, where you don't seem to want to volunteer much information, is rather tiring for me).

I know no meaning of "control of A over B" which doesn't have to do with A causally helping determine the probabilities of B's configuration space. The more it affects those probabilities, the more control A has over B. If those probabilities are not determined by A at all, then obviously A has no control over B. So the complete "indeterminism" of an action, means the utter lack of control of A over B.

Can you please tell me where you start disagreeing with the above paragraph?

Replies from: Peterdjones
comment by Peterdjones · 2013-01-23T18:18:49.555Z · LW(p) · GW(p)

Can you then explain what you mean by the phrase "making choices between"?

I should have said "neither" specifically. It was intended to cover both of the more detailed options.

(frankly, this style of discussion, where you don't seem to want to volunteer much information, is rather tiring for me)

You haven't straightforwardly answered the question of whether you are arguing against indeterminism based free will.

So the complete "indeterminism" of an action, means the utter lack of control of A over B.

No one is talking about complete indeterminism. Also, a non-deterministic process A can still control B in your sense.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-01-23T19:59:56.570Z · LW(p) · GW(p)

You haven't straightforwardly answered the question of whether you are arguing against indeterminism based free will.

I consider libertarian free will not only false, I consider it self-contradictory. In short, not only does it not exist, I don't see how it could possibly exist (for coherent definitions of determinism and free will) in even a hypothetical universe.

If there's a distinction you're making between libertarian free will and "indeterminism-based" free will, sorry but I'm not aware of the distinction.

No one is talking about complete indeterminism.

Then separate the indeterministic parts of a system from the deterministic parts, and the argument still applies: you can't determine the probabilities of the indeterministic parts, therefore you can't control them; therefore the more indeterministic parts there are, the less your maximum-possible control over the whole becomes.

If you have any control, it must be over the parts and over the extent you can determine the probabilities -- in short, the more deterministic something is, the greater your maximum-possible control over it. This again seems pretty self-evident to me.

In short what supporters of libertarian free-will are claiming about determinism (that it would eliminate free will) is actually correct about indeterminism.

Also, a non-deterministic process A can still control B in your sense.

I was talking about A as mind-state, e.g. preferences (values, ethics, etc), not the decision-making process (let's call it D) that connects the preferences and the choice B.

The more the outcome of D is determined by A, the more control those preferences, values, ethics (in short the person) has over B.

This again seems so obvious to me that it seems practically a tautology.

Replies from: Peterdjones, Peterdjones
comment by Peterdjones · 2013-01-24T00:40:46.316Z · LW(p) · GW(p)

I consider libertarian free will not only false, I consider it self-contradictory. In short, not only does it not exist, I don't see how it could possibly exist (for coherent definitions of determinism and free will) in even a hypothetical universe.

Where's the argument that the indeterministic model [of libertarian free will] is incoherent?

comment by Peterdjones · 2013-01-23T20:21:41.431Z · LW(p) · GW(p)

If there's a distinction you're making between libertarian free will and "indeterminism-based" free will, sorry but I'm not aware of the distinction

Indeterminism-based free will is naturalistic libertarian FW.

Then separate the indeterministic parts of a system from the deterministic parts, and the argument still applies: you can't determine the probabilities of the indeterministic parts, therefore you can't control them; therefore the more indeterministic parts there are, the less your maximum-possible control over the whole becomes.

That depends on what you mean by "you". That your brain thinks thoughts does not mean that you, the person, are not thinking thoughts. Decisions made by your neural subsystems are made by you, the person. You (some homunculus?) don't need to pre-think your thoughts for them to be yours, nor do you need to pre-choose your choices.

If you have any control, it must be over the parts and over the extent you can determine the probabilities

What does "you" mean there?

in short, the more deterministic something is, the greater your maximum-possible control over it. This again seems pretty self-evident to me.

A deterministic brain might be a nice toy for an immaterial homunculus, but we are dealing with naturalism here. We are dealing with how a system can choose between possible actions. Indeterminism means the possibilities are real possibilities.

The more the outcome of D is determined by A, the more control those preferences, values, ethics (in short the person) has over B.

But where's the choice?

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-01-23T20:52:16.423Z · LW(p) · GW(p)

That your brain thinks thoughts does not mean that you, the person, are not thinking thoughts. Decisions made by your neural subsystems are made by you, the person.

Of course, that's my whole point. That my brain is making choices doesn't mean that I'm not making choices.

If you have any control, it must be over the parts and over the extent you can determine the probabilities

What does "you" mean there?

It doesn't matter for the purpose of the question. No matter how you define yourself, my statement still applies. Personally I'd define it as my personality which includes my preferences, my values, my ways of thinking, etc. But as I said it doesn't matter for the purpose of the question. For any person's definition of "you" the statement still applies.

But where's the choice?

Okay, look. When you say "where's the choice?" I can only understand your question as saying "where's the decision process?" The answer is that the decision process happens physically in your brain.

So "the choice" is very real and physically occurring in your brain.

If you mean something else with choice other than "decision process", then please clarify what you mean.

Replies from: Peterdjones
comment by Peterdjones · 2013-01-23T20:57:29.020Z · LW(p) · GW(p)

Okay, look. When you say "where's the choice?" I can only understand your question as saying "where's the decision process?" The answer is that the decision process happens physically in your brain.

That's not what I mean. I mean that any deterministic process can be divided into stages, such that stage 1 "controls" stage 2 and so on. But because it is deterministic, every probability is 1. But choice is choice between options. Where are the other options, the things you could have done but didn't?

Replies from: Vladimir_Nesov, ArisKatsaris
comment by Vladimir_Nesov · 2013-01-23T21:14:46.848Z · LW(p) · GW(p)

But choice is choice between options. Where are the other options, the things you could have done but didn't?

You have subjective uncertainty about what you will do, so you know only of a set of hypothetical actions, given by descriptions that you can use. Even though only one of these will actually take place, your decision algorithm is working with the whole set, it can't work with the actual action in particular, because it doesn't know what it is. So in one sense, "options" may refer to this element of the decision algorithm.

comment by ArisKatsaris · 2013-01-23T21:07:48.662Z · LW(p) · GW(p)

The decision process is a selection between modelled actions and between modelled futures -- it isn't making a selection between actual physical futures, one real and others not.

e.g. If I decide to step forward, but just before I do so, someone pulls me back; my choice was equally real even if I failed to actualize it against my will; my decision process concluded.

Indeed if I'm insane and make a choice to flap my wings and fly, my decision process is still real even if the action I decide to take is physically impossible and my model of my available options is horribly flawed.

So, the "other options", same as the option you pick, they're all representations encoded in your brain, and physically real at that level.

Replies from: Peterdjones
comment by Peterdjones · 2013-01-23T21:09:54.081Z · LW(p) · GW(p)

That's a description of the deterministic model. Where's the argument that the indeterministic model is incoherent?

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-01-23T21:14:57.967Z · LW(p) · GW(p)

Please post this question in direct response to the comment where I called the indeterministic model incoherent, in order to have a cleaner structure in the discussion.

comment by Luke_A_Somers · 2012-02-29T15:13:36.517Z · LW(p) · GW(p)

Yes... and yet, the slightest touch of indeterminism does not immediately wipe out the possibility of free will. You said it was absolutely dependent on determinism. That is false. Was that not clear?

Replies from: ArisKatsaris, Vladimir_Nesov
comment by ArisKatsaris · 2012-02-29T15:42:34.425Z · LW(p) · GW(p)

If I say that a forest fire is absolutely dependent on the presence of oxygen in the atmosphere, it doesn't follow that the "slightest touch" of nitrogen would immediately wipe out the possibility of fires.

And yet the fire would still be absolutely dependent on the presence of oxygen.

Replies from: nshepperd
comment by nshepperd · 2012-03-10T16:42:22.876Z · LW(p) · GW(p)

If "determinism" is taken to mean the theory that the past uniquely and completely determines the future ("hard" determinism?), then the more accurate analogy would be to say that "forest fires are absolutely dependent on an atmosphere of pure oxygen".

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-03-10T19:40:19.748Z · LW(p) · GW(p)

At this point the dispute becomes a linguistic triviality, I think.

My position is as follows: If some elements of a system are deterministic and others non-deterministic, then if free will is expressed anywhere it can only be expressed with the deterministic elements, not with the non-deterministic ones; much as fire is fueled by the oxygen in the atmosphere, not by the nitrogen of the atmosphere.

comment by Vladimir_Nesov · 2012-02-29T15:22:45.556Z · LW(p) · GW(p)

(Control requires presence of determinism, doesn't require absence of randomness. There is no dichotomy in the intended sense.)

comment by buybuydandavis · 2012-02-28T16:47:06.810Z · LW(p) · GW(p)

I think that position is correct (and well put), regardless of what EY may or may not think.

Many people are offended at the thought of being controlled by physics when they are in fact a part of physics.

That answers the more relevant question of "What's all the stupid fuss over this supposed question of free will?" People treat it like it's some big mysterious conundrum, when that feeling of mystery should tell them they are confused and should check their premises.

comment by Peterdjones · 2013-01-23T16:35:59.151Z · LW(p) · GW(p)

"If free will means anything, then it must mean our algorithm's ability to determine our actions. Therefore free will is not only compatible with determinism, it's absolutely dependent on determinism. If our mind's state didn't determine our actions, it would be then that there would be no possibility of free will.

Our ability to determine our decisions need not be deterministic. Not all algorithms are deterministic.

If our mind's state didn't determine our actions, it would be then that there would be no possibility of free will.

See two-stage theories. The mind/body system has to reliably put a decision into practice once it has been made, but that doesn't imply the decision-making has to be deterministic.

comment by Swimmy · 2012-02-28T18:41:00.617Z · LW(p) · GW(p)

This, I think, is a major part of it, that it doesn't seem you've accounted for:

The "free will" debate is a confusion because, to answer the question on the grounds of the libertarians is to already cede their position. The question they ask: "Can I make choices, or does physics determine what I do?"

Implicit in that question is a definition of the self that already assumes dualism. The question treats the self as a ghost in the machine, or a philosophy student of perfect emptiness. The libertarians imagine that we should be able to make decisions not only apart from physics, but apart from anything. They are treating the mind as a blank slate that should be able to take in information and output consequences based on nothing whatsoever.

If, instead, you apply the patternist theory of mind, you start with the self as "an ongoing collection of memories and personality traits." (Simplified, of course.) From that point, you can reduce the question to a reductio ad absurdum. Say that one of my personality traits is a love and compassion for animals, and we're asking the question, "Do I have the free will to run over this squirrel?" Replace "physics" with "personality": Can I make the choice to run over this squirrel, or does my personality decide what I do?

THAT doesn't seem so confusing to us. OF COURSE your personality and memories decide your actions. If you decided to run over the squirrel out of deathlust, you would probably think you've gone temporarily insane or somesuch. You would probably feel as if it wasn't really you who decided to kill the squirrel. It's possible for it to happen, but only if events out of your control come in and zap your mind with the temporary crazies. It is perfectly normal for your decisions to be decided by things that you cannot directly control yourself, and nobody seems to have a problem with this.

The case is no different for physics.

I'd start with that. From there, the explanation of why people come to think they have libertarian free will should make more sense. We can imagine ourselves killing the squirrel, which leads us to believe we have libertarian free will. But that is irrelevant: someone who actually chose to kill the squirrel would be a different set of memories and personality traits, and it should not be controversial that they would also have a somewhat different physical makeup.

Replies from: False_Solace, whowhowho, syzygy, Solvent
comment by False_Solace · 2012-03-04T02:06:21.586Z · LW(p) · GW(p)

Can I make the choice to run over this squirrel, or does my personality decide what I do?

Who is "I"? What is there distinct from your personality that would be making this decision? There is suspiciously dualistic language throughout this post.

You would probably feel as if it wasn't really you who decided to kill the squirrel.

You would? You'd really feel like some sort of external being took over? I suppose if a person was highly dissociated they might feel like this.

I think it's more likely you just "wouldn't know" (or wouldn't consciously admit) why you decided to make a decision contrary to your evident personality. The truth would probably be that part of your brain actually liked the idea of splatting a squirrel at that particular moment, but justifying one's actions as a slayer of helpless little squirrels is troublesome and so the decision came to be regretted and disowned by other parts of your cognitive machinery.

Since various studies have shown that unconscious decisions actually precede conscious awareness of a decision, it seems likely that the experience of free will simply provides the conscious mind an opportunity to weave an appropriately believable and self-flattering explanation for behavior one has already determined on executing. I'm drawing mostly on Kurzban in using this sort of language....

Replies from: Swimmy
comment by Swimmy · 2012-03-04T04:27:42.596Z · LW(p) · GW(p)

Apologies for the dualistic language. I am simply not the best writer, and if anyone wants to take a stab at cleaning the point up, I'd be quite happy.

You're right that you probably wouldn't feel like someone else took over. I kind of doubt you'd feel you wouldn't know, either. Or rather: You wouldn't know after the fact, but you probably would know during the fact. It would probably feel like being extremely high, and doing one of the ridiculous things we humans do when we're in those states.

I agree that unconscious decisions usually precede conscious justifications. I figure that these are a large part of what makes a "personality," and might further explain why personalities are so inflexible across time. Unless I'm greatly confused!

comment by whowhowho · 2013-01-24T15:26:27.127Z · LW(p) · GW(p)

The "free will" debate is a confusion because, to answer the question on the grounds of the libertarians is to already cede their position. The question they ask: "Can I make choices, or does physics determine what I do?"

Implicit in that question is a definition of the self that already assumes dualism. The question treats the self as a ghost in the machine, or a philosophy student of perfect emptiness.

The concern of libertarians is actually that external events determine what they do. They don't mind their actions being caused by neural events in their brains. A libertarian may accept that they are constituted of physics, but that is not the same thing as being determined by physics. Being constituted of physical stuff is on the face of it neutral with regard to determinism and libertarianism.

The libertarians imagine that we should be able to make decisions not only apart from physics, but apart from anything.

No. The phrase typically used is "not entirely determined by".

comment by syzygy · 2012-03-01T08:00:35.539Z · LW(p) · GW(p)

You mean "libertarian" in the literal sense right? You're not implying that the subject of "free will" has anything to do with politics are you?

Replies from: ArisKatsaris, Swimmy
comment by ArisKatsaris · 2012-03-01T10:59:17.001Z · LW(p) · GW(p)

You mean "libertarian" in the literal sense right?

"literal sense" -- is that the most clear question you can ask? If someone replied 'yes' or 'no', how would you be sure that you'd not both be suffering from a double illusion of transparency regarding what the 'literal sense' of the word was?

Either way, google and wikipedia are your friends: Libertarianism (metaphysics)

comment by Swimmy · 2012-03-01T18:50:10.238Z · LW(p) · GW(p)

Yeah, I meant metaphysical libertarianism.

comment by Solvent · 2012-02-29T04:49:21.598Z · LW(p) · GW(p)

Can I quote this on the course forum?

Replies from: Swimmy
comment by Swimmy · 2012-02-29T07:11:43.747Z · LW(p) · GW(p)

If you like. I'm not sure it's really a good explanation of Eliezer's position, but it's how I figure it.

Replies from: Solvent
comment by Solvent · 2012-02-29T07:30:06.192Z · LW(p) · GW(p)

It's a good point anyway. Thanks.

comment by TheOtherDave · 2012-02-28T15:39:34.387Z · LW(p) · GW(p)

I'm staying out of the EY-exegesis side of this altogether, but a note on your summary in its own voice...

As humans, our brains need the capacity to pretend that we could choose different things, so that we can imagine the outcomes, and pick effectively.

I would say, rather, that the process of imagining different outcomes and selecting one simply is the experience that we treat as the belief that we can choose different things. Or, to put it another way: I don't think we're highly motivated to pretend we could have done something different, so much as we are easily confused about whether we could have or not.

Replies from: Solvent
comment by Solvent · 2012-02-29T04:48:20.245Z · LW(p) · GW(p)

What I was clumsily alluding to there is how we're computing the counterfactual. If we have a (deterministic) decision-making algorithm, it will only ever output one value in a particular situation. However, we have to pretend that it could, so that we can evaluate the outcome of our different actions.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-02-29T13:54:17.897Z · LW(p) · GW(p)

Hm. I might agree with this and I might not, depending on just what you mean.

Simpler example... consider a deterministic chess-playing algorithm that works by brute-force lookahead of possible moves (I realize that real-world chess programs don't really work this way; that's beside my point). There's a (largely metaphorical) sense in which we can say that it pretends to choose among thousands of different moves, even though in fact there was only ever one move its algorithm was ever going to make given that board condition. But it would be a mistake to take literally the connotations of "pretend" in that case, of social image-setting and/or self-deception; the chess program does not pretend anything in that sense.

To say that we pretend to choose among possible actions is to use "pretend" in roughly the same way.

If that's consistent with what you're saying, then I'm merely furiously agreeing with you at great length.
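
For concreteness, a minimal sketch of the kind of deterministic brute-force lookahead described above, with a toy subtraction game standing in for chess (the game and scoring are assumptions made for brevity):

```python
# Deterministic lookahead: the program "considers" every legal move from a
# position, even though it was only ever going to return one of them.
# Toy game: players alternately remove 1-3 tokens; whoever cannot move loses.

def legal_moves(tokens):
    return [m for m in (1, 2, 3) if m <= tokens]

def best_move(tokens, maximizing=True):
    """Evaluate every move (minimax), then deterministically pick the best."""
    if tokens == 0:
        return None, (-1 if maximizing else 1)  # the player to move has lost
    scored = [(m, best_move(tokens - m, not maximizing)[1])
              for m in legal_moves(tokens)]
    return (max if maximizing else min)(scored, key=lambda ms: ms[1])

print(best_move(10))  # the same position always yields the same "choice": (2, 1)
```

Given the same position it always returns the same move, but it cannot do so without first walking through the alternatives; that walk is the metaphorical "pretending".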

comment by andview · 2012-02-28T11:48:21.906Z · LW(p) · GW(p)

Sorry to go off-topic, however I'd like to know how close my understanding of free will and determinism is to reality, or at least to that of Less Wrong.

My understanding is that the world is completely deterministic and the decisions with which we're faced, as well as the choices that we make, are all predetermined (in advance, since the beginning of time - whatever the beginning of time may mean). And even though this is the case, it doesn't mean that we're not fulfilling our preferences at each decision point.

Also, there's nothing spontaneous or random occurring in the world (ever); randomness and spontaneity simply refer to events that are sufficiently unpredictable to a mind labelling them as such.

Please note: My thinking on this has been influenced by Gary Drescher's Good and Real of which I've only read a small part - not even a whole chapter. Also, I'm sorry if this is elementary or if I've missed the relevant discussions on LW, however I only lurk and skim and am yet to read the Sequences. I'm a serious procrastinator, amongst other things.

(If I don't post again, thank you for making LW so amazing - even though it seems that it's all predetermined. :) )

Replies from: Viliam_Bur, Oscar_Cunningham
comment by Viliam_Bur · 2012-02-28T14:22:58.849Z · LW(p) · GW(p)

The word "deterministic" is correct in some sense: there are only laws of nature, no magic. But it brings some incorrect connotations. In a usual discussion the possibilities are framed like this:

a) The universe is a big machine with a lot of wheels. The wheels are rotating, and this is all there is and ever will be.

b) The universe is a big playground of dice, randomly rolling. There is nothing to know about the dice, except that they have some statistical properties.

Of course the choices are usually not expressed this way, but I tried to emphasise the emotions behind them. Essentially, both these pictures seem stupid and give no clue how anything non-trivial could happen in such a world. Asking whether the world is deterministic is like saying "pick one of these two models". A wannabe smart person could argue that the first model is compatible with classical physics and the second model with (some interpretations of) quantum physics.

In my opinion this dilemma is completely irrelevant to discussions about consciousness, free will, etc. The true nature of the universe at the micro level is not necessarily relevant for its macro-level events. A complex pseudo-random generator can be built from perfectly deterministic parts. A huge number of random events can create a fairly predictable Gaussian curve. So the lawfulness or randomness at the human level does not trivially follow from the lawfulness or randomness of the elementary particles.
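
A minimal sketch of that first point: a linear congruential generator, whose update rule is completely deterministic but whose output looks statistically random (the constants below are the widely used Numerical Recipes parameters):

```python
# A deterministic micro-level rule producing pseudo-random macro-level output.
def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: x -> (a*x + c) mod m, scaled to [0, 1)."""
    x = seed
    for _ in range(n):
        x = (a * x + c) % m
        yield x / m

print(list(lcg(seed=42, n=5)))  # same seed -> exactly the same "random" stream
```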

The interesting part is how the complex things are constructed from the small things, because some properties appear and others disappear in the process of construction. Electrically charged particles create an electrically neutral atom. Atoms join and make molecules; and depending on the structure and energy of the molecules we have gas, liquid or solid stuff on the macro level. A macro-level structure of X-es can behave differently than X behaves on the micro level. But this is no magic; it's just a consequence of mathematical laws, although such consequences can be hard to guess.

There are two theoretical sources of (perceived) randomness. 1) Due to laws of thermodynamics our mind can never be perfectly synchronized with the rest of the universe, because we are part of the universe and by doing anything (this includes observing and learning) we inevitably change it. 2) According to the many-worlds interpretation our universe is constantly splitting into many branches and our copies don't initially know which branch they are in.

But I think that these theoretical limits are irrelevant for everyday life; our typical ignorance is many levels higher than the quantum or thermodynamic effects. We don't know most things simply because we don't observe much and don't remember much, because we are very limited by the structure of our bodies.

comment by Oscar_Cunningham · 2012-02-28T14:19:29.360Z · LW(p) · GW(p)

Yep, everything you've said matches my impression of the "standard" LW view. (Although it gets more confusing when you get quantum physics in the mix.)

comment by Shmi (shminux) · 2012-02-28T07:08:14.615Z · LW(p) · GW(p)

So with respect to free will, we can instead ask the question, “Why would humans feel like they have free will?” If we can answer this well enough, then hopefully we can dissolve the original question.

Not sure about EY's position, but I find that you are making a significant assumption: that people always feel like they have free will. This is patently false. I would start by trying to imagine how it feels to have no free will. Possible options:

  • You feel compelled to do things because the voices in your head tell you to (i.e. you don't have your own opinion on the matter)

  • You intend to do one thing but find yourself doing something else and feel powerless to change it (e.g. surfing LW instead of studying)

  • You must do what you are told, or else something awful happens (i.e. you have an opinion, but cannot act on it) (suggested by TheOtherDave)

  • You feel that you behave as you please, only to find out that you repeatedly do exactly the same thing in the same circumstances without realizing it (suggested by gwern)

What else can trigger a feeling of having no free will? Which ones are the "true" lack of free will, if any?

Replies from: DanielLC, khafra
comment by DanielLC · 2012-02-28T07:38:09.288Z · LW(p) · GW(p)

but I find that you are making a significant assumption: that people always feel like they have free will.

It sounds to me more like the assumption is that people often feel like they have free will, and usually for the same reason.

Replies from: shminux
comment by Shmi (shminux) · 2012-02-28T20:11:48.691Z · LW(p) · GW(p)

Yes, that is a better way to phrase it.

comment by khafra · 2012-02-28T16:21:43.011Z · LW(p) · GW(p)

"What makes people feel like they have limited or absent free will" is a productive way of rephrasing "why do people feel like they have free will," but I don't think the latter entails a false assumption.

comment by kilobug · 2012-02-28T14:16:36.126Z · LW(p) · GW(p)

The key argument to me in Eliezer's "Free Will" sequence is the fact that causality doesn't work directly from past to future, but from past to present and present to future. For the same reason, there is (usually) no way to know the future from the past without simulating the present.

Now, let's apply that to Free Will. You are in a state S (with a knowledge of the world and a set of inputs), you run an algorithm that will decide what action A you'll do.

It is deterministic, so given the state S, something (Omega) can predict what action A you'll do. But by doing so, if he wants to be sure to always reach the same conclusion you would, he'll have to run an algorithm that will always map the same inputs to the same outputs as you do. Said otherwise, because of how determinism works (step by step, not jumping directly from past to future), there is no way to know what you'll do without running an algorithm which is totally equivalent to you - that is, without running you.

Not sure I'm very clear; it's hard to summarize something like that in a few sentences.

Replies from: None
comment by [deleted] · 2012-02-28T15:15:37.489Z · LW(p) · GW(p)

That is incorrect. If I tell you to add up the numbers from 1 to 100 and you start counting, I know by a completely different algorithm that you're going to get 5050. This generalizes: Omega need only prove that the output of your algorithm is the same as the output of a simpler algorithm (without, I may note, running it), and run that instead.
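
A sketch of those two routes to the same answer (the function names are illustrative):

```python
# Two different algorithms with the same output: the predictor need not
# simulate the step-by-step process it is predicting.

def your_algorithm(n):
    """Step-by-step summation, standing in for "running you"."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def omegas_shortcut(n):
    """Gauss's closed form n(n+1)/2: same output, no loop."""
    return n * (n + 1) // 2

assert your_algorithm(100) == omegas_shortcut(100) == 5050
```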

Replies from: asr, kilobug
comment by asr · 2012-02-28T16:49:56.761Z · LW(p) · GW(p)

Omega cannot do this in general. Given an arbitrary algorithm with some asymptotic complexity, there is no general procedure that can get the same result faster.

Computational complexity puts limits even on superintelligences.

Replies from: None
comment by [deleted] · 2012-02-28T17:13:32.980Z · LW(p) · GW(p)

I don't think the speed is essential to my argument, though. The point is that it's possible to determine the output of the algorithm that is you, without running that algorithm.

Replies from: asr
comment by asr · 2012-02-29T06:38:39.047Z · LW(p) · GW(p)

In general, no. To predict the output of an arbitrary algorithm, you have to have equivalent states to the algorithm. If I give you a Turing machine and ask you what its output is, you can't do any better than running it.

You can do various trivial transforms to it and say "no, I'm running a new machine", but it's as expensive as running the original and will have to be effectively isomorphic to it, I suspect.

Replies from: Jonathan_Graehl, None
comment by Jonathan_Graehl · 2012-02-29T23:36:43.741Z · LW(p) · GW(p)

I agree.

But if you're saying that it's proven that "you have to ...", I wasn't aware of that.

Replies from: asr
comment by asr · 2012-03-01T19:47:39.703Z · LW(p) · GW(p)

I don't have a proof of that claim, either, just a strong intuition. I should have specified that more clearly. If some other LWer has a proof in mind, I'd love to see it.

Here are some related things I do have proofs for.

There's no general procedure for figuring out whether two programs do the same thing -- this follows from Rice's theorem. (In fact, this is undecidable for arbitrary context-free languages, let alone more general programs.)

For any given problem, there is some set of asymptotically optimal algorithms, in terms of space or time complexity. And a simulator can't improve on that bound by more than a constant factor. So if you have an optimal algorithm for something, no AI can improve on that.

Now suppose I give you an arbitrary program and promise that there's a more efficient program that produces the same output. There can't be a general way to find the optimal program that produces the answer.*

  • Proof by reduction from the halting problem: Suppose we had an oracle for minimizing programs. For an arbitrary Turing machine T and input I, create a program P that ignores its input and simulates T on I. If we could minimize this program, we'd have a halting oracle.
Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2012-03-01T22:31:37.934Z · LW(p) · GW(p)

When you say "optimal" you mean space or time used.

When you say "minimize" you mean "shortest equivalent program"?

I love this list of undecidable problems.

Replies from: asr
comment by asr · 2012-03-02T14:12:17.592Z · LW(p) · GW(p)

I meant to minimize in terms of asymptotic time or space complexity, not length-of-program.

comment by [deleted] · 2012-02-29T23:00:12.097Z · LW(p) · GW(p)

Oh. That makes sense (since you can't even tell if the machine will halt or not). I'm still not convinced it applies to free will, but I will not argue the point further since I agree with the conclusion anyway.

comment by kilobug · 2012-02-28T16:01:06.430Z · LW(p) · GW(p)

That's true for simple cases, yes, and that's why I added "usually" in "there is (usually) no way to know the future from the past without simulating the present".

But if you have an algorithm able to produce exactly the same output as I would (say exactly the same things, including talks about free will and consciousness) from the same inputs, then it'll have the same amount of consciousness and free will as I do - or you believe in zombies.

Replies from: None
comment by [deleted] · 2012-02-28T18:08:50.071Z · LW(p) · GW(p)

True, but I think you're making a bigger deal of that, than it is. Suppose our Omega is the one from Newcomb's problem, and all it wants to know is whether you'll one-box or two-box. It doesn't need to run an algorithm that produces the same output as you in all instances. It needs to determine one specific bit of the output you will produce in a specific state S. There is a good chance that a quick scan of your algorithm is enough to figure this out, without needing to simulate anything at all.

The reason this is a big deal is that "free will" means two things to us. On the one hand, it's this philosophical concept. On the other hand, we think of having free will in opposition to being manipulated and coerced into doing something. These are obviously related. But just because we have free will in the philosophical sense, doesn't mean that we have free will in the second sense. So it's important to keep these as separate as possible.

Because Omega can totally play you like a fiddle, you know.

comment by torekp · 2012-03-01T03:25:19.614Z · LW(p) · GW(p)

to pretend that we could choose different things

On the above (emphasis added) - and independent of anything I've seen from EY - beware the modal scope fallacy. It leads to unsound rejections of "could" and "ability" statements.

comment by Joshua Hobbes (Locke) · 2012-02-28T06:37:04.270Z · LW(p) · GW(p)

I'm not seeing how that conclusion is reached. How would we act differently if we did have free will, as opposed to a necessary illusion for decision-making?

Replies from: Giles
comment by Giles · 2012-02-28T21:48:41.925Z · LW(p) · GW(p)

To answer this question we need something like a formal definition of "free will". The position on LW is generally that no such thing can exist, that the concept is confused and that the question "do we have free will?" dissolves into "why do we sometimes think we have something called free will?"

But I think it might be possible to come up with an actual definition, one where "we don't have free will" makes some kind of a testable prediction.

First I'm going to assert that determinism isn't free will, and randomness isn't free will either. The problem is that with a single individual it's hard to imagine any behavior that couldn't be explained as "determined" or "random". So, nothing testable so far.

But what if we have a group of individuals? What about if they suddenly all change their behavior so that they all gather together in the same public place wearing something purple? This won't happen if the behavior is random - it's impossible (or at least extremely unlikely) for all those people to randomly switch to the same new behavior pattern. What if the behavior is determined? Then we'd expect there to be some identifiable cause - an organized flashmob or something. But if free will exists, it should at least in principle be possible that everyone just happened to decide to do the same thing at the same time.
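
(As a rough worked number, assume 1,000 people each independently and randomly pick one of 100 equally likely behaviors; the figures are invented for illustration. The chance that they all happen to pick the same one is 100 × (1/100)^1000 = 10^-1998, which is the sense in which "extremely unlikely" is an understatement.)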

I haven't even thought this through enough to make it into a LW discussion post, and there are at least two flaws:

  • We need a definition of "identifiable cause". Yudkowsky has provided a definition of causality but it's not one that I intuitively grasp yet so I'm not sure if it can be used here
  • The free willers can just change their definition so that people's behavior can be divided into a causal component and a random/uncorrelated component and still there's room for free will there somewhere
  • Also I haven't even Googled to see if someone's come up with something like this before and/or refuted it
Replies from: whowhowho, ArisKatsaris
comment by whowhowho · 2013-01-24T15:40:57.665Z · LW(p) · GW(p)

To answer this question we need something like a formal definition of "free will". The position on LW is generally that no such thing can exist, that the concept is confused and that the question "do we have free will?" dissolves into "why do we sometimes think we have something called free will?"

If "free will" is meaningless, so is "feeling of free will", etc. Consider "feeling of vubbleflox".

First I'm going to assert that determinism isn't free will, and randomness isn't free will either.

It's uncontentious that neither pure determinism nor pure randomness is (libertarian) free will. However, some libertarian theories (e.g. Robert Kane's) rely on mixtures of determinism and randomness.

The free willers can just change their definition so that people's behavior can be divided into a causal component and a random/uncorrelated component and still there's room for free will there somewhere

As noted above, that has sort-of happened, although no change of definition was needed.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-01-25T13:53:00.945Z · LW(p) · GW(p)

It's uncontentious that neither pure determinism nor pure randomness is free will

Why would a compatibilist such as myself have a problem with "pure" determinism being free will?

Replies from: whowhowho
comment by whowhowho · 2013-01-25T13:56:10.031Z · LW(p) · GW(p)

Fair point. Will edit to clarify.

comment by ArisKatsaris · 2012-03-01T01:00:38.814Z · LW(p) · GW(p)

The position on LW is generally that no such thing can exist

Where have you gotten that idea?

Your post seems confused. You seem to be striving to define free will in opposition to both randomness and determinism (so that something must be left over to be filled in by a "free will" component), but you don't indicate any reason why whatever you call "free will" should be opposed to determinism.

Replies from: Giles
comment by Giles · 2012-03-03T18:08:05.354Z · LW(p) · GW(p)

I'm sorry, I wasn't talking about compatibilist theories of free will, only the other kind. I should have made that clear.

comment by Tiiba · 2012-02-29T20:16:40.346Z · LW(p) · GW(p)

I'd like to propose a way for measuring a system's freedom: it is the size of the set of closed-ended goals which it can satisfy from its current state. How's that?

I also think that this is all you really need to not be confused about free will. It's the freedom to do what you will.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-02-29T20:51:59.454Z · LW(p) · GW(p)

By "goals," do you mean goals the system currently has? Or goals the system could in principle have? Or something else?

If the first, it follows that I can increase a system's freedom by installing in that system additional satisfiable goals. Which is perfectly internally consistent, but doesn't quite seem to map to what we ordinarily mean by freedom.

If the second, it follows that if you and I can each achieve N items from that set, we are equally free, even if my N items include everything I want to do and your N items include nothing you want to do. That, too, is perfectly internally consistent, but doesn't quite seem to map to what we ordinarily mean by freedom.

I conclude that our confusions about what we ordinarily mean by freedom aren't quite so readily dissolved. Although it's possible you have some third option in mind that I'm not seeing that eliminates these issues.

comment by [deleted] · 2012-02-28T22:52:52.168Z · LW(p) · GW(p)

How would particles or humans behave differently if they had free will compared to if they didn't?

I actually think that's a great way to approach the problem, if you view emotion and cognition as behavior.

comment by beriukay · 2012-02-28T10:09:14.214Z · LW(p) · GW(p)

Decide which is your favourite outcome. In this case, I'd rather have learnt stuff. So that's option 2.

It looks like you are running on a corrupted system that just chose staying at home.

comment by brilee · 2012-02-28T06:31:49.344Z · LW(p) · GW(p)

Oh.

I tried to figure out what Eliezer's stance on free will was quite a few times, but never really figured out what he meant. This cleared things up, thanks!

comment by Peterdjones · 2013-01-23T17:14:34.985Z · LW(p) · GW(p)

So with respect to free will, we can instead ask the question, “Why would humans feel like they have free will?” If we can answer this well enough, then hopefully we can dissolve the original question.

Only we can't. The original question was whether some organisms have the ability to make choices that aren't fully determined by outside circumstances. That isn't addressed by answering the question "why would humans feel like they have free will".

  • a) humans have FW and feel they do

  • b) humans don't have FW, but feel they do

  • c) humans have FW but feel they don't

  • d) humans don't have or feel they have FW

Yudkowsky shows a way in which (b) could be possible. But he doesn't show that (a) is impossible. IOW, he doesn't address the original question at all.

Is the original question a bad one which should be replaced? Some approaches to answering the original question are unfathomable (e.g. the idea of FW as a fundamental tertium datur beyond determinism and indeterminism), others are not. Some naturalistic theories of FW are potentially empirically testable, so throwing out the question involves throwing out a set of empirically respectable theories.

ETA: The falling tree question: recognising that different sides in the argument are really using different definitions does dissolve the question. But Yudkowsky's approach to free will is not analogous, because there is no side of the debate that unknowingly defines FW as the feeling of being able to make choices as opposed to the ability. EY introduced that definition. (There is a disagreement about compatibilist versus libertarian notions of free will, which I have deliberately omitted for simplicity, but that is still not analogous to the falling tree problem, because the various sides are quite aware that their definitions differ. "It all depends what you mean by...")

Replies from: OrphanWilde, ArisKatsaris
comment by OrphanWilde · 2013-01-23T18:22:46.099Z · LW(p) · GW(p)

"Outside circumstances" including what? Your definition is too vague.

As far as I've been able to tell, the question is confused. Before you ask the question, you must first define what free will is, in a rigorous and exclusive manner; your definition shouldn't include things you don't want it to include, nor should it exclude things you don't want to exclude. You've managed to include everything you want included, but your definition fails to exclude things you don't want included - namely, your definition includes Eliezer's definition.

Because Eliezer isn't describing how we could experience free will even where free will doesn't exist, he's offering a definition of what free will is. Once you use Eliezer's definition, the confusion goes away - the question becomes meaningless.

Replies from: Peterdjones
comment by Peterdjones · 2013-01-23T18:28:59.342Z · LW(p) · GW(p)

"Outside circumstances" including what?

Anything. If my choices are fully determined by anything outside of me, they are not my choices.

Your definition is too vague.

I didn't specify which outside circumstances because it doesn't matter.

namely, your definition includes Eliezer's definition.

No. Feelings aren't abilities. An ability to choose does not conceptually include a feeling of freedom.

Because Eliezer isn't describing how we could experience free will even where free will doesn't exist, he's offering a definition of what free will is.

A re-definition. A different definition. Hence he is not answering or dissolving the original question.

Once you use Eliezer's definition, the confusion goes away - the question becomes meaningless.

That is false. Once you start using a different definition, you start talking about something else. Changing the subject is not dissolving the question.

Replies from: OrphanWilde
comment by OrphanWilde · 2013-01-23T18:30:27.687Z · LW(p) · GW(p)

Define "outside of me." Does a proton in your brain count as outside of you? What about a neuron?

Replies from: Peterdjones
comment by Peterdjones · 2013-01-23T18:34:04.397Z · LW(p) · GW(p)

Outside of my control systems, my CNS.

Replies from: OrphanWilde
comment by OrphanWilde · 2013-01-23T18:36:13.857Z · LW(p) · GW(p)

To what extent? If something outside your nervous system enters into your nervous system - say, LSD - does it qualify as internal? Does it only count if you chose to imbibe it, or would it also count if somebody else forced you to, or if circumstance forced it upon you (say, you consumed something unknowingly)?

Replies from: Peterdjones
comment by Peterdjones · 2013-01-23T19:17:45.293Z · LW(p) · GW(p)

Whatever. I have given as much detail as is needed for a philosophical definition. Definitions aren't theories.

Replies from: OrphanWilde, MugaSofer
comment by OrphanWilde · 2013-01-23T19:22:44.333Z · LW(p) · GW(p)

Intentional preservation of vagueness. You're either a troll or a mystic. I think the "troll" description is probably less insulting in this context.

Replies from: Peterdjones
comment by Peterdjones · 2013-01-23T19:24:54.157Z · LW(p) · GW(p)

Oh good grief. You can call anything vague if you set the bar high enough. Am I being significantly more vague than EY was?

ETA:

Whoops, looks like the people who write the Skeptic's Dictionary are mystical trolls too:

"Free will is a concept in traditional philosophy used to refer to the belief that human behavior is not absolutely determined by external causes, but is the result of choices made by an act of will by the agent. "

Replies from: OrphanWilde
comment by OrphanWilde · 2013-01-23T19:54:43.288Z · LW(p) · GW(p)

Yes. I understood precisely what Eliezer was referring to.

Whereas I have no idea whatsoever what you're referring to. Elaborating:

You state that the question of free will comes down to: "Whether some organisms have the ability to make choices that aren't fully determined by outside circumstances."

When asked to define "outside circumstances," drilling down, it becomes anything outside the central nervous system.

Which leaves the question in an uncomfortable position whereby it is calling dualism a form of determinism. Indeed, any solution which posits a non-reductionist answer to the question of free will is being called determinism by your definition.

Worse still, your formulation is completely senseless in the reductionist form you've left it; you deny non-reductionist answers, but you implicitly deny all reductionist answers as well, because they've -already- answered your question: no choice whatsoever is "fully determined" by things outside your central nervous system, since that would deny the very -concept- of reductionism. Your question maintains meaning only as rhetoric. To say Eliezer hasn't answered it in that context is to complain that he didn't preface his arguments with a statement that the brain is the organ which is making these choices.

Which leads me right back to "You have to be trolling."

Replies from: Peterdjones
comment by Peterdjones · 2013-01-23T20:36:39.158Z · LW(p) · GW(p)

Which leaves the question in an uncomfortable position whereby it is calling dualism a form of determinism. Indeed, any solution which posits a non-reductionist answer to the question of free will is being called determinism by your definition.

A dualist would regard their immaterial mind as internal. I was giving a non-dualist answer to the question "what is outside" because I thought there weren't any dualists round here. Are you a dualist? Am I being vague because I correctly anticipated your background assumptions?

Worse still, your formulation is completely senseless in the reductionist form you've left it; you deny non-reductionist answers, but you implicitly deny all reductionist answers as well, because they've -already- answered your question: No choice happens whatsoever that is "fully determined" by things outside your central nervous system, that denies the very -concept- of reductionism.

Events happen that are fully determined by outside events, for instance if someone pushes you out of a window. We wouldn't call them free choices, but so what? All that means is that I have correctly identified what free choice is about: my definition picks out the set of free choices.

Your question maintains meaning only as rhetoric.

I have no idea what you mean by that.

To say Eliezer hasn't answered it in that context is to complain that he didn't preface his arguments with a statement that the brain is the organ which is making these choices.

He hasn't answered the question of FW because he hasn't said anything at all about whether or not brains can make choices that are not entirely determined by outside events.

comment by MugaSofer · 2013-01-24T11:47:52.818Z · LW(p) · GW(p)

Then philosophical definitions must not be enough to answer questions. Hardly new information.

[Edited for tone.]

comment by ArisKatsaris · 2013-01-23T18:22:19.817Z · LW(p) · GW(p)

But Yudkowsky's approach to Free Will is not analogous, because there is no side of the debate that unknowingly defines FW as the feeling of being able to make choices as opposed to the ability.

The problem is that one side unknowingly defines "choices" as the ability for a person to make choices and at the same time have the universe not determine those choices, as if the person isn't a subelement of the universe.

Once you realize that the person is a subelement of the universe, and that each choice determined by the person is therefore necessarily determined by the universe, the question of free will is dissolved. Yes, the past state of the universe determines everything, but that doesn't reduce the extent to which the person determines something, because the person isn't outside the universe.

Replies from: whowhowho
comment by whowhowho · 2013-01-27T18:23:11.941Z · LW(p) · GW(p)

One side knowingly defines free choices as choices that aren't entirely determined by outside influences.

""Metaphysical freedom [..] one of the two main kinds, involves not being completely governed by deterministic causal laws." --Oxford Companion to Philosophy.

Once you realize that the person is a subelement of the universe and that each choice determined by the person is therefore necessarily determined by the universe,

That makes no sense. Determinism is not true just because everything is part of the universe. Believers in indeterminism don't deny that everything is part of the universe. All you can conclude from the claim that people are made of atoms is that whatever power of choice or volition they have, however free or unfree, is implemented by atoms. But implementation is not determinism. The claim that people are made of atoms excludes supernatural libertarian free will, the theory that free will is implemented by some immaterial spirit. It does not exclude naturalistic libertarian free will or compatibilism. Since it leaves multiple options open, it is not "the answer".

Moreover, one should not expect the problem of free will to have a one-line solution that only states something that is already believed by most philosophers (that's philosophers, not theologians).

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-01-27T18:47:57.761Z · LW(p) · GW(p)

One side knowingly defines free choices as choices that aren't entirely determined by outside influences.

They consider that a single stochastic element in a decision process suffices to make the decision process "free will", even if the stochastic element (to the extent it's stochastic) by definition wouldn't have any causal connection to a person's motivation or values?

People I've argued with on the internet regarding free will tend to believe the opposite, that non-deterministic free will somehow imbues more meaning to their choices, though expressed as in the above paragraph it would clearly imbue less meaning to their choices. (something's meaning is the extent and the ways it's connected to things we value, and random elements aren't)

Determinism is not true just because everything is part of the universe.

I didn't say it was. I said that as each person is part of the universe, therefore "everything determined by the person is determined by the universe".

That by itself would allow some things to be truly random (e.g. if the collapse interpretation of Quantum Mechanics was true), but I was specifically talking about the things that are determined to the extent that they're determined. I don't know how much clearer I can make it than this.

Also, I think this is the last time I respond to this discussion of free will. I think my position has been made as clear as I can make it, and I think your responses (both now and with the previous account) haven't yet provided me with even a single useful counterpoint. So for me to keep on discussing this seems of negative utility.

Replies from: whowhowho
comment by whowhowho · 2013-01-27T20:08:21.304Z · LW(p) · GW(p)

They consider that a single stochastic element in a decision process suffices to make the decision process "free will", even if the stochastic element (to the extent it's stochastic) by definition wouldn't have any causal connection to a person's motivation or values?

Indeterministic choices can have a connection to the agent's values that is not deterministically causal. Take six things you like doing, write them on small pieces of paper, and glue them to a die. However the die lands, it will not be against your values. Is that a "causal connection"? Maybe, in a broad sense. However, only strict predetermination is excluded for the undetermined. That is not enough to bring about a complete separation of indeterministic choices and values.
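
A minimal sketch of this "die of values" in Python (the six activities are made up; this is an illustration, not anyone's actual decision algorithm): no single outcome is predetermined, yet no outcome can run against the agent's values, because the values fixed what went on the die's faces.

```python
import random

# Hypothetical "die of values": six activities this agent already endorses.
VALUED_ACTIVITIES = ["read", "run", "cook", "paint", "code", "music"]

def roll_die_of_values(options):
    """Pick one option uniformly at random.

    The selection is indeterministic (no option is predetermined), yet
    every possible outcome is consistent with the agent's values, because
    the values determined which options were glued to the die.
    """
    return random.choice(options)

choice = roll_die_of_values(VALUED_ACTIVITIES)
assert choice in VALUED_ACTIVITIES  # however the die lands, it's never against the values
print(choice)
```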

People I've argued with on the internet regarding free will tend to believe the opposite, that non-deterministic free will somehow imbues more meaning to their choices, though expressed as in the above paragraph it would clearly imbue less meaning to their choices. (something's meaning is the extent and the ways it's connected to things we value, and random elements aren't)

Since the above is not in fact a problem, indeterministic freedom does lend more meaning to choices. If it is true, elements of the future world can be traced back to my decisions in a way that stops there -- whereas under determinism I am just one link in a very long chain.

I didn't say it was. I said that as each person is part of the universe, therefore "everything determined by the person is determined by the universe".

That's a non sequitur.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-01-27T22:07:58.009Z · LW(p) · GW(p)

Okay, I said I wasn't gonna respond again, but I'd like to give you one last hypothetical, and then ask you a question regarding it.

Alice and Bob are taken by aliens and each (separately) given 4 choices arranged in a 2x2 table:

|  | Column A | Column B |
|---|---|---|
| Row 1 | Carl is promoted to a significantly higher-paying position that he'll also enjoy more. | Carl is demoted to a significantly lower-paying position that he'll also enjoy less. |
| Row 2 | Carl is (unknowingly to him) implanted with a well-designed artificial heart which will secure his health against all heart-related issues. | Carl is (unknowingly to him) implanted with a badly-designed artificial heart which will worsen his health in regards to heart-related issues. |

"Choose Column and Row for the action you want to take" say the Aliens. "Of the ones you choose, please state also which column or row that will be Definite, and which will be Stochastic"
"What do you mean by 'definite' and 'stochastic'?" ask both Alice and Bob.
"We'll definitely do something in the element which you pronounce Definite, but there's only a 55% chance we'll go with the element you deem Stochastic -- we'll be flipping a non-fair coin to determine that one with your choice corresponding to just the more likely side".

Alice does her calculation. Her values and ethics all deterministically argue in favor of giving primary importance to Column A (the 'good' results) -- she's definite about that; nor can she imagine a recognizable self of hers that would choose Column B against a random individual. Then she calculates, with significantly less certainty, that A2 (the better-heart cell) seems better than A1 (the better-job cell). "For Definite I pick Column A; for Stochastic I pick Row 2 -- in short, the better heart with 55% probability and the better job with 45% probability," she tells the aliens.
"Apologies", the aliens say, "but the coin went the other way than your preference. and we'll have to do A1 instead -- give Carl the better job instead of the better heart."

Bob does his calculation. He has very strong ethics against people messing with other people's bodies against their will. Even being given a worse job unfairly pales in comparison to the gross aversion Bob has against unconsented medical procedures. So, with great definiteness following deterministically from Bob's values, Bob chooses "Row 1" as his primary element. He's significantly less certain about Column A versus Column B: promoting or demoting a random individual -- either could be judged fair or unfair if he had knowledge about Carl which he doesn't. With some uncertainty he goes for A1 rather than B1. "For Definite I choose Row 1 (the jobs row). For Stochastic I choose Column A -- in short, give him the better job."
"Congratulations, the unfair coin we flipped went with your choice. A1it'll be."

So, after a decision process with both stochastic and deterministic elements, Alice and Bob both ended up causing the selection of A1. But Alice had "A" as the deterministic element, and Bob had "1" as the deterministic element.
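
To make the protocol concrete, here's an illustrative sketch in Python -- my own construction, not part of the original hypothetical; the encoding of columns and rows as "A"/"B" and "1"/"2" is just an assumption. It shows that the Definite element is always honored, while the Stochastic element follows the agent's preference only 55% of the time.

```python
import random

def alien_choice(definite, stochastic_pref, p=0.55):
    # The Definite element (a column "A"/"B" or a row "1"/"2") is always
    # honored; the Stochastic element follows the agent's preference with
    # probability p, and flips to its alternative otherwise.
    other = {"A": "B", "B": "A", "1": "2", "2": "1"}
    stochastic = stochastic_pref if random.random() < p else other[stochastic_pref]
    column = definite if definite in ("A", "B") else stochastic
    row = definite if definite in ("1", "2") else stochastic
    return column + row  # the chosen cell, e.g. "A1"

random.seed(0)
print(alien_choice(definite="A", stochastic_pref="2"))  # Alice: "A1" or "A2"
print(alien_choice(definite="1", stochastic_pref="A"))  # Bob: "A1" or "B1"
```

Whatever the coin does, the Definite element always reflects the agent's values; only the Stochastic element can be flipped away from them.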

Now here's my question: If you had to estimate their characters, values and personalities, wouldn't you be able to attribute more meaning to the Deterministic element, instead of the one left to partial randomness? The partially random element would indeed completely mislead you in regards to Alice's decision process.

"if it is true elements of the future world can be traced back to my decisions in a way that stops there --whereas under determinist I am just one link in a very long chain."

You assign good connotations to "stops there" and bad connotations to "one link in a very long chain". But when I speak about "meaning", I don't mean 'good meaning' or 'bad meaning'; I mean the amount of measurable information we can derive from the choice in question -- meaning as a metric which could theoretically be measured in bits. And there are 0 bits of information that can be derived from a truly random element. But from "one link in a very long chain" we can derive bits of information about both the past and the future -- what the person may have done in the past, what they're likely to choose in the future.
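
A hedged sketch of this "bits" claim, treating meaning as mutual information between an agent's values and the observed choice -- my illustration, with made-up labels, not part of the original comment. A choice element that follows deterministically from the values carries about 1 bit about them; an independent random element carries about 0.

```python
import random
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Estimate I(X;Y) in bits from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

random.seed(0)
# Made-up "values" for a population of hypothetical agents.
values = [random.choice(["altruist", "egoist"]) for _ in range(100_000)]

# Deterministic element: the observed choice follows directly from the values.
deterministic_choice = ["A" if v == "altruist" else "B" for v in values]
# Stochastic element: the observed choice is independent of the values.
random_choice = [random.choice(["A", "B"]) for _ in values]

print(mutual_information(list(zip(values, deterministic_choice))))  # ~1.0 bit
print(mutual_information(list(zip(values, random_choice))))         # ~0.0 bits
```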

Now I'm hopefully done.

Replies from: whowhowho
comment by whowhowho · 2013-02-05T14:06:19.870Z · LW(p) · GW(p)

Now here's my question: If you had to estimate their characters, values and personalities, wouldn't you be able to attribute more meaning to the Deterministic element, instead of the one left to partial randomness? The partially random element would indeed completely mislead you in regards to Alice's decision process.

I don't see how any of that is relevant to FW. Firstly, you are not contrasting deterministic decision-making by an individual with stochastic decision-making by an individual; the stochastic decision is supplied by someone else. It is not a roll of one's personal die, with one's personal values pasted onto its sides. The selection of choices is arbitrary and unconnected with Alice and Bob's values.

Secondly, your notion of meaning, or information content, is one that hinges on how much information an external observer can get out of someone else's choice. That is quite orthogonal to the issue of whether FW makes your choices more meaningful to you.

Perhaps you think deterministic decisions are expressive of an individual's psychology, because they can be predicted from an individual's psychology. But if you can predict someone's decisions, why should they believe that they have nonetheless made a free choice?

You assign good connotations to "stops there" and bad connotations to "one link in a very long chain". But when I speak about "meaning", I don't mean 'good meaning' or 'bad meaning'; I mean the amount of measurable information we can derive from the choice in question -- meaning as a metric which could theoretically be measured in bits. And there are 0 bits of information that can be derived from a truly random element. But from "one link in a very long chain" we can derive bits of information about both the past and the future -- what the person may have done in the past, what they're likely to choose in the future.

And what's that got to do with free choice?

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-02-05T17:37:44.559Z · LW(p) · GW(p)

But if you can predict someone's decisions, why should they believe that they have nonetheless made a free choice?

Someone being free is always understood to mean something roughly equal to "able to act according to one's own desires", it doesn't mean "unpredictable".

Replies from: whowhowho
comment by whowhowho · 2013-02-05T17:45:28.671Z · LW(p) · GW(p)

Act on desires one happens to have, or act on desires one has originated?

Replies from: ygert
comment by ygert · 2013-02-05T19:05:11.632Z · LW(p) · GW(p)

Can you try to say what the difference is? At this point I think you are tying yourself up in semantic knots.

Replies from: whowhowho
comment by whowhowho · 2013-02-05T19:18:30.004Z · LW(p) · GW(p)

An obvious objection to "one is free if one is able to act according to one's desires" is that one's desires might be implanted, e.g. by brainwashing.

Replies from: ygert
comment by ygert · 2013-02-05T20:08:40.584Z · LW(p) · GW(p)

But it is not obvious where the border lies between brainwashing/indoctrination and simply sharing information. If we are discussing a mutual acquaintance (let's call her Alice) and I tell you that she did something not nice yesterday, you may have a desire to shun her the next time you two meet. Is that desire "your own"?

One could say that it is because you simply used your knowledge of her past actions to decide for yourself that you should shun her. On the other hand, one could say that I basically am controlling your actions, because me telling you what I said has affected your actions.

You can very easily make lots of other borderline cases like this yourself, and in fact they come up in real life very often. Consider the case where parents "indoctrinate" their kids with their religion. When the kid grows up to follow that religion, was it the kid's own choice? Again, we find that the distinction is not clear-cut. If the kid had not been raised in that religion, he likely would not be following it. But this is how most people in the world got their religion, and I doubt that you go around telling everyone that deep down they don't really believe in it... But that is a separate discussion.

Anyway, what I am trying to say is that for every desire one has originated, there likely was some (external) reason why one has that desire -- like me telling you how nasty Alice had been, or parents telling their kid that god exists. (And maybe Alice was nasty, or maybe she wasn't; maybe god doesn't exist, or maybe he does; but that has no relevance.) In any case, the desire was caused by the outside factor, which shows that it is not very meaningful to try to separate out which desires were caused by outside factors. (As they all are, to some extent or another.)

Replies from: whowhowho
comment by whowhowho · 2013-02-06T18:21:48.709Z · LW(p) · GW(p)

But it is not obvious where the border lies between brainwashing/indoctrination and simply sharing information.

Lots of borders aren't obvious. Why should that present a special problem in this case?

One could say that it is because you simply used your knowledge of her past actions to decide for yourself that you should shun her. On the other hand, one could say that I basically am controlling your actions, because me telling you what I said has affected your actions.

Anyway, what I am trying to say is that for every desire one has originated, there likely was some (external) reason why one has that desire.

I don't see why I should regard a desire as being originated when it also has some deterministic external cause. If, OTOH, a "reason" is just an influence, or partial cause, then it is compatible with partial origination.

I don't see why I would have to do either. I need both the internal disposition to shun her, and the information. It is not either/or.