Two straw men fighting

post by JanetK · 2010-08-09T08:53:24.636Z · LW · GW · Legacy · 163 comments

For a very long time, philosophy has presented us with two straw men in combat with one another, and we are expected to take sides. Each straw man appears to have been proven both true and false. The straw men are Determinism and Free Will. I believe that both, in any useful sense, are false. Let me tell a little story.

Mary's story

Mary is walking down the street, just for a walk, without a firm destination. She comes to a T-junction where she must go left or right; she looks down each street and finds them about the same. She decides to go left. She feels she has, like a free little birdie, exercised her will without constraint. As she crosses the next intersection she is struck by a car and suffers serious injury. Now she spends much time thinking about how she could have avoided being exactly where she was, when she was. She believes that things have causes, and she tries to figure out where a different decision would have given a different outcome and how she could have known to make the alternative decision. 'If only...' ideas crowd into her thoughts. She believes simultaneously that her actions have causes and that there were valid alternatives to her actions. She is using both deterministic logic and free-will logic; neither alone leads to 'If only...' scenarios – it takes both. If only she had noticed that the next intersection to the right had traffic lights while the one to the left did not. If only she had not noticed the shoe store on the left. What is more, she is doing this in order to change some aspect of her decision making so that it will be less likely to put her in hospital; again, this is not in keeping with either logic alone. But really, both forms of logic are deeply flawed. What Mary is actually attempting is maintenance on her decision-making processes, so that they can learn whatever is available to be learned from her unfortunate experience.

What is useless about determinism

There is a big difference between being 'in principle' determined and being determined in any useful way. If I accept that all is caused by the laws of physics (and that we know these laws – a big if), this does not accomplish much. I still cannot predict events except trivially: in general but not in full detail, in simple but not complex situations, only a very short way into the future rather than longer term, and so on. To predict anything really sizable, like for instance, how the earth came to be as it is, or even how little-old-me became what I am, or even why I did a particular thing a moment ago, would take more resources and time than can be found in the life of our universe. Being determined does not mean being predictable. It does not help us to know that our decisions are determined, because we still have to actually make the decisions. We cannot just predict what the outcomes of our decisions will be; we really, really have to go through the whole process of making them. We cannot even pretend that decisions are determined until after we have finished making them.

What is useless about freewill

There is a big difference between freedom in the legal, political, human-rights sense and the freedom meant in 'free will'. To be free from particular, named restraints is something we all understand. But the 'free' in 'free will' is freedom from the cause and effect of the material world. This sort of freedom has to be magical, supernatural, spiritual or the like. That in itself is not a problem for a belief system. It is the idea that something that is not material can act on the material world that is problematic. Unless you have everything spiritual or everything material, you have the problem of interaction. What is the 'lever' that the non-material uses to move the material, or vice versa? It is practically impossible to explain how free will could affect the brain and body. If you say God does it, you have raised a personal problem to a cosmic one, but the problem remains – how can the non-physical interact with the physical? Free will is of little use in explaining our decision process. We make our decisions rather than having them dictated to us, but it is physical processes in the brain that really do the decision making, not magic. And we want our decisions to be relevant, effective and in contact with the physical world, not ineffective. We actually want a 'lever' on the material world. Decisions taken in some sort of causal vacuum are of no use to us.

The question we want answered

Just because philosophers pose questions and argue various answers does not mean that they are finding answers. Rather, they are making clear the logical ramifications of the questions and of each answer. This is a useful function and not to be undervalued, but it is not a process that gives robust answers. As an example, we have Zeno's paradox of the arrow that can never land, because its distance to landing can always be divided in half, set against the knowledge that it does actually land. Philosophers used to argue about how to treat this paradox, but they never solved it. It lost its power when mathematics developed the concept of the sum of an infinite series. When the distance is cut in half, so is the time. When the infinite series of remaining distances reaches zero, so does the series of remaining times. We do not know how to end an infinite series, but we know where it ends and when it ends – on the ground, the moment the arrow hits it. The sum of an infinite series can still be considered somewhat paradoxical, but as an obscure mathematical question. Generally, philosophers are no longer very interested in Zeno's paradox, certainly not its answer. Philosophy is useful, but not because it supplies consensus answers. Mathematics, science and their cousins, like history, supply answers. Philosophy has set up a dichotomy between free will and determinism and explored each idea to exhaustion, but without any consensus about which is correct. That is not the point of philosophy. Science has to rephrase the problem as: 'how exactly are decisions made?' That is the question we need an answer to, a robust consensus answer.
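To make the arithmetic concrete (this worked sum is an illustration added here, not part of the original argument): if the arrow takes total time $t$ to land, then halving the remaining distance also halves the remaining time, and the pieces form a geometric series with a finite sum,

$$\frac{t}{2} + \frac{t}{4} + \frac{t}{8} + \cdots = t\sum_{k=1}^{\infty} \frac{1}{2^k} = t,$$

so infinitely many steps fit inside the finite time $t$.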

But here is the rub

This move to a scientific answer is disturbing to very many people, because the answer is assumed to have effects on our notions of morals, responsibility and identity. Civilization as we know it may fall apart. Exactly how we turn out to make decisions, once we study the question without reference to determinism or freewill, seems OK in itself. But if the answer robs us of morals, responsibility or identity, then it is definitely not OK. Some people have the notion that what we should do is just pretend that we have free will, while knowing that our actions are determined. To me this is silly: believe two incompatible and flawed ideas at the same time rather than believe a better, single idea. It reminds me of the solution proposed to deal with Copernicus – use the new calculations while continuing to believe that the earth does not revolve. Of course, we do not yet have the scientific answer (far from it), although we think we can see its general gist. So we cannot say how it will affect society. I personally feel that it will not affect us negatively, but that is just a personal opinion. Neuroscience will continue to grow, and we will soon have a very good idea of how we actually make decisions, whether this knowledge is welcomed or not. It is time we stopped worrying about determinism and free will and started preparing ourselves to live with ourselves and others in a new framework.

Identity, Responsibility, Morals

We need to start thinking of ourselves as whole beings, one entity from head to toe: brain and body, past and future, from birth to death. Forget the ancient religious idea of a mind imprisoned in a body. We have to stop the separation of me and my body, me and my brain. Me has to be all my parts together, working together. Me cannot equate to consciousness alone.

Of course I am responsible for absolutely everything I do, including something I do while sleepwalking. Further, a rock that falls from a cliff is responsible for blocking the road. It is what we do about responsibility that differs. We remove the rock, but we do not blame or punish it. We try to help the sleepwalker overcome the dangers of sleepwalking to himself and others. But if I as a normal person hit someone in the face, my responsibility is no greater than the rock's or the sleepwalker's, but my treatment will be much, much different. I am expected to maintain my decision-making apparatus in good working order. The way the legal system will work might be a little different from how it works now, but not much. People will still be expected to know and follow the rules of society.

I think of moral questions as those for which there is no good answer: all courses of action and of inaction are bad in a moral question. Often this is because the possible answers pit the good of the individual against the good of the group, but they can also pit different groups and their interests against each other. No matter what we believe about how decisions are made, we are still forced to make them, and that includes moral ones. The more we know about decisions, the more likely we are to make moral decisions we are proud of (or at least less guilty or ashamed of), but there is no guarantee. There is still a likelihood that we will just muddle along, trying to find the lesser of two evils with no more success than at present.

Why should we believe that being closer to the truth, or having a more accurate understanding, is going to make things worse rather than better? Shouldn't we welcome having a map that is closer to the territory? It is time to be open to ideas outside the artificial determinism/freewill dichotomy.

163 comments

Comments sorted by top scores.

comment by cousin_it · 2010-08-10T08:36:36.492Z · LW(p) · GW(p)

Um.

Some time ago I posted to decision-theory-workshop an idea that may be relevant here. Hopefully it can shed some light on the "solution to free will" generally accepted on LW, which I agree with.

Imagine the following setting for decision theory: a subprogram that wants to "control" the output of a bigger program containing it. So we have a function world() that makes calls to a function agent() (and maybe other logically equivalent copies of it), and agent() can see the source code of everything including itself. We want to write an implementation of agent(), without foreknowledge of what world() looks like, so that it "forces" any world() to return the biggest "possible" answer (scare quotes are intentional).

For example, Newcomb's Problem:

def world():
    box1 = 1000                                  # the always-full, visible box
    box2 = 0 if agent() == 2 else 1000000        # the predictor-filled box: empty iff the agent two-boxes
    return box2 + (box1 if agent() == 2 else 0)  # two-boxers (action 2) also take box1
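(To unpack the encoding, which the next paragraphs rely on: returning 2 means two-boxing and returning 1 means one-boxing; box2 is the predictor-controlled box, emptied exactly when the agent two-boxes, so two-boxing yields 1000 and one-boxing yields 1000000.)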

Then a possible algorithm for agent() may go as follows. Look for machine-checkable mathematical proofs (up to a specified max length) of theorems of the form "agent()==A implies world()==U" for varying values of A and U. Then, after searching for some time, take the biggest found value of U and return the corresponding A. For example, in Newcomb's Problem there are easy theorems, derivable even without looking at the source code of agent(), that agent()==2 implies world()==1000 and agent()==1 implies world()==1000000.
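A minimal runnable sketch of this algorithm, with the proof search stubbed out: the helper provable() below simply hard-codes the two easy Newcomb theorems just mentioned, standing in for a genuine bounded search over machine-checkable proofs, and all helper names here are illustrative assumptions rather than part of any real prover.

def provable(action, utility):
    # Stand-in for "there is a machine-checkable proof, up to some max
    # length, that agent() == action implies world() == utility".
    # Here we simply hard-code the two easy Newcomb theorems.
    return (action, utility) in {(1, 1000000), (2, 1000)}

def agent():
    # Search candidate (A, U) pairs; return the action A whose provable
    # consequence U is the largest one found.
    best_action, best_utility = None, float("-inf")
    for action in (1, 2):
        for utility in (1000, 1000000):
            if provable(action, utility) and utility > best_utility:
                best_action, best_utility = action, utility
    return best_action

print(agent())  # prints 1: the agent "chooses" to one-box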

The reason this algorithm works is very weird, so you might want to read the following more than once. Even though most of the theorems proved by the agent are based on false premises (because it is logically impossible for agent() to return a value other than the one it actually returns), the one specific theorem that leads to maximum U must turn out to be correct, because the agent makes its premise true by outputting A. In other words, an agent implemented like that cannot derive a contradiction from the logically inconsistent premises it uses, because then it would "imagine" it could obtain arbitrarily high utility (a contradiction implies anything, including that), therefore the agent would output the corresponding action, which would prove the Peano axioms inconsistent or something.

To recap: the above describes a perfectly deterministic algorithm, implementable today in any ordinary programming language, that "inspects" an unfamiliar world(), "imagines" itself returning different answers, "chooses" the best one according to projected consequences, and cannot ever "notice" that the other "possible" choices are logically inconsistent with determinism. Even though the other choices are in fact inconsistent, and the agent has absolutely perfect "knowledge" of itself and the world, and as much CPU time as it wants. (All scare quotes are, again, intentional.)

Replies from: JanetK, Blueberry, Peterdjones
comment by JanetK · 2010-08-12T09:02:31.968Z · LW(p) · GW(p)

Is there any way that this applies to me or you making a decision? If it does, can you give an indication of how? Thanks.

comment by Blueberry · 2010-08-10T10:14:41.196Z · LW(p) · GW(p)

This is brilliant. This needs to be a top-level post.

Replies from: cousin_it, cousin_it, cousin_it
comment by cousin_it · 2010-08-12T17:48:12.042Z · LW(p) · GW(p)

Done. I'm skeptical that it will get many upvotes, though.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-08-12T19:05:13.458Z · LW(p) · GW(p)

I'm skeptical that it will get many upvotes, though.

You seem to be either pathologically under-confident (considering that the comment your post was based on was voted up to 9, and people were explicitly asking you to make a top post out of it), or just begging for votes. :)

Replies from: cousin_it
comment by cousin_it · 2010-08-12T19:09:11.618Z · LW(p) · GW(p)

It's a little bit of both, I guess.

comment by cousin_it · 2010-08-10T21:16:50.208Z · LW(p) · GW(p)

I'm nervous about reposting stuff from the workshop list as top-level posts on LW. I'm a pretty minor figure there and it might be seen as grabbing credit for a communal achievement. Yeah, this specific formalization is my idea, which builds on Nesov's idea (ambient control), which builds on Wei Dai's idea (UDT), which builds on Eliezer's idea (TDT). If the others aren't reposting for whatever reason, I don't want to go against the implied norm.

(The recent post about Löbian cooperation wasn't intended for the workshop, but for some reason the discussion there was way more intelligent than here on LW. So I kinda moved there with my math exercises.)

Replies from: jimrandomh
comment by jimrandomh · 2010-08-10T21:43:33.605Z · LW(p) · GW(p)

If the others aren't reposting for whatever reason, I don't want to go against the implied norm.

It is much more likely that people aren't posting because they haven't thought of it or can't be bothered. I too would like to see top-level posts on this topic. And I wouldn't worry about grabbing credit; as long as you put attributions or links in the expected places, you're fine.

Replies from: cousin_it
comment by cousin_it · 2010-08-10T21:48:36.383Z · LW(p) · GW(p)

Sorry for deleting my comment. I still have some unarticulated doubts, will think more.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-11T07:32:04.606Z · LW(p) · GW(p)

For a bit of background regarding priority from my point of view: the whole idea of ADT was "controlling the logical consequences by deciding which premise to make true", which I then saw to also have been the idea behind UDT (maybe implicitly, Wei never commented on that). Later in the summer I shifted towards thinking about general logical theories, instead of specifically equivalence of programs, as in UDT.

However, as of July, there were two outstanding problems. First, it was unclear what kinds of things are possible to prove from the premise that the agent does X, and so how feasible brute-force theories of consequences were as a model of this sort of decision algorithm. Your post showed that in a certain situation it is indeed possible to prove enough to make decisions using only this "let's try to prove what follows" principle.

Second, maybe more importantly, it was very much unclear in what way one should state (the axioms of) a possible decision. There were three candidates to my mind: (1) try to state a possible decision in a weaker way, so that the possible decisions that aren't actual don't produce inconsistent theories; (2) try to ground the concept (theory) of a possible decision in the concept of reality, where the agent was built in the first place, which would serve as a specific guideline for fulfilling (1); and (3) try to live with inconsistency. The last option seemed less and less doable, the first option depended on rather arbitrary choices, and the second is frustratingly hairy.

However, in a thread on decision-theory-workshop, your comments prompted me to make the observation that consequences always appear consistent, that one can't prove absurdity from any possible action, even though consequences are actually inconsistent (which you've reposted in the comment above). This raises the chances for option (3), dealing with inconsistency, although it's still unclear what's going on.

Thus, your input substantially helped with both problems. I'm not overly enthused with the results only because they are still very much incomplete.

comment by cousin_it · 2010-08-10T11:00:31.570Z · LW(p) · GW(p)

Thanks, but after my last post I don't think there's enough informed interest here for this kind of stuff. Pretty much everyone who could take the ideas further is already participating in the workshop. Besides, even though this particular formalization may belong to me, UDT is Wei Dai's idea and I leave it up to him to report our progress elsewhere.

comment by Peterdjones · 2011-04-22T13:22:55.454Z · LW(p) · GW(p)

It is not news that, with ingenuity, (apparent) Alternative Possibilities can be accommodated within determinism. It is even less news that Alternative Possibilities can be accommodated (without the need for ingenuity) within indeterminism. The question is why the determinism based approach is seen around here as "the" solution, when the evidence for the actual existence of (in)determinism remains unclear.

Replies from: cousin_it, AlephNeil
comment by cousin_it · 2011-04-22T14:44:32.617Z · LW(p) · GW(p)

Indeterminism can accommodate "alternate possibilities", but it cannot accommodate meaningful choice between them. As Eliezer said:

My position might perhaps be called "Requiredism." When agency, choice, control, and moral responsibility are cashed out in a sensible way, they require determinism - at least some patches of determinism within the universe. If you choose, and plan, and act, and bring some future into being, in accordance with your desire, then all this requires a lawful sort of reality; you cannot do it amid utter chaos. There must be order over at least those parts of reality that are being controlled by you. You are within physics, and so you/physics have determined the future. If it were not determined by physics, it could not be determined by you.

Also, starting from "extreme determinism" has been very intellectually fruitful for me. As far as I know, the mathematical part of my comment above (esp. the second to last paragraph) is new - no philosopher had generated it before. If I'm mistaken and your words about it being "not news" have any substance, please give a reference.

Replies from: Peterdjones
comment by Peterdjones · 2011-04-22T15:01:55.384Z · LW(p) · GW(p)

"Some patches of determinsim" is perfectly compatible with "some patches of indeterminism". We need more-or-less determinism to carry out decisions, but that does not mean it is required to make them.

The second part of EY;s comment is too vague. If I am being controlled by "physics" outside my body, I am un-free. I am not unconditionally free just because I am physical.

Replies from: cousin_it
comment by cousin_it · 2011-04-22T15:08:26.119Z · LW(p) · GW(p)

We need more-or-less determinism to carry out decisions, but that does not mean it is required to make them.

That sounds inconsistent. What's the relevant difference between the two activities? They look like the same sort of activity to me. Both require making certain things correlate with other things, which is what determinism does. (Carrying out a course of action introduces a correlation between your decision and the outside world; choosing a course of action introduces a correlation between your prior values and your decision.)

Replies from: Peterdjones
comment by Peterdjones · 2011-04-22T15:15:01.811Z · LW(p) · GW(p)

The difference is that if we tried to carry out decisions indeterministically, we wouldn't get the results we wanted; and if we made decisions deterministically, there would be no real choice.

It's a two-stage model.

Replies from: cousin_it
comment by cousin_it · 2011-04-22T15:22:04.867Z · LW(p) · GW(p)

if we made decisions deterministically, there would be no real choice

I don't understand this statement. Isn't it drawing factual conclusions about the universe based on what sort of choice some philosophers wish to have? Or do you trust the subjective feeling that you have "real choice" without examining it? Both options seem unsatisfactory...

Replies from: Peterdjones
comment by Peterdjones · 2011-04-22T15:24:26.054Z · LW(p) · GW(p)

Determinism does not enforce rationality. There are more choices than choices about what to believe. Since naive realism is false, we need to freely and creatively generate hypotheses before testing them.

Replies from: cousin_it
comment by cousin_it · 2011-04-22T15:33:04.274Z · LW(p) · GW(p)

The part of your mind that generates hypotheses is no less deterministic than the part that tests them. (It's not as if they used different types of neurons!) The only difference is that you don't have conscious access to the process that generates hypotheses, so it looks mysterious and you complete the pattern that mysterious = indeterministic. But even though you can't introspect that part of yourself, you can still influence what options it will offer you, e.g. by priming.

Replies from: Peterdjones
comment by Peterdjones · 2011-04-22T15:36:49.744Z · LW(p) · GW(p)

Maybe the two stages are in a time domain, not a space domain.

The "it only seems indeterministic" story is one of a number of stories. It is not a fact. My central point is that to arrive at The Answer, all alternatives have to be considered.

Replies from: cousin_it
comment by cousin_it · 2011-04-22T15:51:14.981Z · LW(p) · GW(p)

I was mostly trying to argue against the point that human minds need indeterminism to work as they do. Do you now agree that's wrong?

Replies from: Peterdjones
comment by Peterdjones · 2011-04-22T15:57:52.999Z · LW(p) · GW(p)

It's not wrong, and it's not intended as a mirror-image of the LW official dogma. It's a suggestion. I cannot possibly say it is The Answer since, for one thing, I don't know if indeterminism is actually the case. So my central point remains: the solution space remains unexplored, and what I put forward is an example of a neglected possibility.

comment by AlephNeil · 2011-04-22T14:44:58.211Z · LW(p) · GW(p)

This is equally far from being news:

If physics randomly decides whether an agent in state S at time t will evolve into state A or state B at time t+dt, then the cause of "A rather than B" cannot be the agent's preferences and values, or else these would already have been different at time t. The agent could not be held morally accountable for "A rather than B" (assuming S were known to the judge). Indeterminism being present in the 'cogs and gears' of the agent is more like an erosion of personal autonomy than a foundation for it.

If the 'problem of free will' has a solution (resp. dissolution) at all, then it can be solved (resp. dissolved) under the assumption of physical determinism.

Replies from: Peterdjones
comment by Peterdjones · 2011-04-22T14:58:37.390Z · LW(p) · GW(p)

"Physics chooses" is vague. An agents physical state will evolve under the laws of physics whether they are deterministic or not. If an agents state never contained the slightest inkling of committing murder, for instance, then they will not choose to do that --deterministically or not. A choice, random or not, can only be made from the options available, and will depend on their values or preferences.

That FW can be dissolved under determinism does not mean it should be disolved under determinism or disolved at all. A case has to be made for dissolution over solution.

Replies from: AlephNeil
comment by AlephNeil · 2011-04-22T15:25:12.561Z · LW(p) · GW(p)

"Physics chooses [between A with probability p and B with probability 1-p]" is vague.

It means nothing other than "a Laplacean superbeing, given complete knowledge of the prior state and of the laws of physics, would calculate that at time t+dt, the state of the system will either be A with probability p or B with probability 1-p". (You can see why I tried not to write all of that out! Although this may have been unwise given that you've now made me do just that.)

Complete knowledge of the prior state includes complete knowledge of the agent. Hence, there is no property of the agent which explains why A rather than B happens. The Laplacean superbeing has already taken all of the agent's reasons for preferring A (or B) into account in computing its probabilities, so given that those were the probabilities, whatever ultimately happens has nothing to do with the agent's reasons.

You should read chapter VII of Nagel's book The View From Nowhere. He explains very clearly how the problem of free will arises from the tension between the 'internal, subjective' and 'external, objective' views of a decision. From the 'external, objective' view, freedom in the sense you want inevitably disappears regardless of whether physics is deterministic.

Replies from: Peterdjones
comment by Peterdjones · 2011-04-22T15:32:29.135Z · LW(p) · GW(p)

The explanation about the Laplacean Daemon does not take into account the fact that the very varied pre-existing states of people's minds/brains have a major influence on their choices. Physics cannot make them choose something they never had in mind. Their choices evolve out of their dispositions under both determinism and indeterminism.

If the choice between A and B is indeterministic, it is indeterministic, but the particular values of A and B come from the particular agent. Whatever happens has a huge amount to do with those reasons, since your personified "physics" cannot implant brand new reasons ex nihilo.

I am quite capable of arguing my case against Nagel or anybody else.

Replies from: AlephNeil
comment by AlephNeil · 2011-04-22T15:46:08.668Z · LW(p) · GW(p)

Imagine a 'coarse-grained' view of the agent, where we don't ask what's inside the agent's head. Then the agent has a huge spectrum of possible actions - our uncertainty about the action taken is massive.

Finding out what's inside the agent's head resolves either 'most' or 'all' of the uncertainty, according as physics is indeterministic or deterministic respectively. If physics is indeterministic then some uncertainty remains, and the resolution of this uncertainty cannot be explained by reference to the agent's preferences, and cannot serve as a meaningful basis for freedom.

The point is: that extra bit of uncertainty on the end, which you only get with indeterministic physics, doesn't give any extra scope whatsoever for 'free will' or 'moral responsibility'.

I heartily agree with you that

the very varied pre-existing states of people's minds/brains have a major influence on their choices. Physics cannot make them choose something they never had in mind. Their choices evolve out of their dispositions under both determinism and indeterminism.

I can't figure out why you're making disagreement noises while putting forward the same exact view as mine!

Replies from: Peterdjones
comment by Peterdjones · 2011-04-22T15:54:23.129Z · LW(p) · GW(p)

Some irresoluble uncertainty about what an agent will do is the only meaningful basis for freedom. (Other solutions are in fact dissolutions.) The point is how an agent can have that freedom without a complete disconnection of their actions from their character, values, etc. The answer is to pay attention to quantifiers. Some indeterminism does not mean complete indeterminism, and so does not mean complete disconnection.

Replies from: AlephNeil
comment by AlephNeil · 2011-04-22T16:01:46.700Z · LW(p) · GW(p)

Sorry but I think that's confused, for reasons I've already explained.

Honestly, you'd enjoy reading Nagel. If it helps, he's an anti-reductionist just like you, who doesn't think in terms of 'dissolving' philosophical problems.

Replies from: Peterdjones
comment by Peterdjones · 2011-04-22T16:10:01.977Z · LW(p) · GW(p)

I didn't say I was an anti-reductionist. I find this us-and-them stuff rather annoying.

Replies from: AlephNeil
comment by AlephNeil · 2011-04-22T16:16:02.153Z · LW(p) · GW(p)

OK. Replace the word "who" with "in that he" in my previous comment.

Replies from: Peterdjones
comment by Peterdjones · 2011-04-22T16:28:08.230Z · LW(p) · GW(p)

I don't mind dissolving problems if all else fails. But you cannot reduce everything to nothing.

comment by MichaelVassar · 2010-08-09T19:04:31.142Z · LW(p) · GW(p)

Please try to write posts that show an awareness of the existing literature on the subject from within Less Wrong.

Replies from: kpreid
comment by kpreid · 2010-08-10T04:09:01.247Z · LW(p) · GW(p)

Yes; I think this post has explanations where it ought to have hyperlinks.

comment by Randaly · 2010-08-09T16:58:07.548Z · LW(p) · GW(p)

I generally like this post, and am unsure why it was voted down. However, I think that you need to separate "not useful" from "not true"- while it may or may not be true that neither is particularly useful in real life, under the definitions accepted by LW, both are almost certainly true.

Replies from: JanetK, JanetK
comment by JanetK · 2010-08-09T17:29:13.560Z · LW(p) · GW(p)

I meant to add - thanks for the advice to separate 'not useful' from 'not true'.

comment by JanetK · 2010-08-09T17:27:38.223Z · LW(p) · GW(p)

Do LW people generally think freewill is true? I had thought that we were generally materialist and didn't believe in magic mind stuff. Am I wrong?

Replies from: Randaly, XiXiDu, wedrifid, thomblake
comment by Randaly · 2010-08-09T17:50:45.735Z · LW(p) · GW(p)

I believe that, as far as there is a consensus, it's that compatibilism is correct.

Free will is defined as "your ability to make free choices unconstrained by external agencies." "You" has traditionally been defined as a supernatural "soul;" when it was demonstrated that "you" couldn't have any effect on the world, and probably didn't exist, many people concluded that "you" therefore had no ability to make choices, forgetting that soul-"you" didn't actually exist. Compatibilists take a different path, by redefining you as a physical object, in which case free will becomes true.

Replies from: JanetK
comment by JanetK · 2010-08-10T08:00:20.978Z · LW(p) · GW(p)

If you have defined 'freewill' as ordinary, everyday freedom to make choices without constraint, then it is not the philosopher's straw man that I was talking about in the post. It does not imply dualism. This then becomes a semantic rather than a philosophical difference: I want to get rid of the word and you want to redefine it so that it is useful. But you don't need the word. You could just say 'I was free to make a choice.' Most people would think you meant 'free from external constraint'. I believe I said in the post that I was not talking about ordinary freedom from constraint but about freedom from the causality of the material world. That was the definition I was using for freewill.

If there are people (you may or may not be one) who cling to the word 'freewill' and redefine it so that they can cling to it, there cannot be too many, because the replies to this post are the first time I have encountered this new definition with any frequency. Of course, I may not have noticed that someone was using the word in a different way from the usual meaning. This is like the redefinition of God as something like 'the whole universe' or 'the original cause' in order to not have to admit that one doesn't actually believe in God. I suppose that many of the people who say they believe in God would not prompt me to find out how vague their concept was.

Replies from: Oligopsony, thomblake
comment by Oligopsony · 2010-08-10T08:20:38.032Z · LW(p) · GW(p)

I don't know how common the "free will is freedom from external constraint" view - it's called compatibilism* - is among the general population. It is, however, the dominant view among professional philosophers.

If you've never so much as heard of compatibilism, I have to question why you wrote an article on the subject of free will. It would be like writing on meta-ethics and pleading ignorance of non-cognitivism or error theory. In the future, consider at least reading the relevant SEP entry!

*Technically, many compatibilists believe that there are conditions other than freedom from external constraint that are necessary for free will. Definitionally none of them would say that indeterminism is one of them, though.

Replies from: JanetK
comment by JanetK · 2010-08-10T09:02:57.822Z · LW(p) · GW(p)

I am confused by the depth of feeling against my fairly mild posting, which I thought many LWers would value.

One of the first postings that I read on LW was How an Algorithm Feels from Inside and another was Wrong Questions. I was so impressed that I began reading the blog regularly. What I noticed was that many of the contributors seemed to have a very different idea of what thought was than I had, or than I felt those two great postings had. In particular I had trouble with two recurring areas: what is consciousness? and how are decisions made? I have attempted a post on both. The reception has been equally hostile to both. It appears that I misjudged the group and that there is very little interest in a more scientific approach to these questions.

Consider the post 'dead in the water'.

Replies from: WrongBot, NancyLebovitz
comment by WrongBot · 2010-08-10T15:53:30.686Z · LW(p) · GW(p)

The big problem with your post is that it spends most of its words discussing free will and metaethics without making reference to the substantial material on those topics already posted and discussed on this site. As others have pointed out, not discussing compatibilism has weakened the post as well.

Ultimately, if you were trying to answer the question of how decisions are made, you should have done so. Too much of the post covered material that wasn't directly related to what you wanted to get at, and this would have been a problem even without the points mentioned above.

On a related note, you didn't include any links in your post. Linking to a definition, discussion or explanation of a concept you're using as a foundation is much better than reinventing the wheel.

All that said, please reconsider abandoning posting on LW. Your comments are frequently worth reading, and your reasoning (if not yet your writing) is usually pretty solid. I'm probably not the best person to make the offer, but I'd be happy to comment on drafts of future posts if you felt that might be useful.

Replies from: JanetK, Randaly
comment by JanetK · 2010-08-11T09:05:24.058Z · LW(p) · GW(p)

Thank you, and if I ever do post, I will take you up on your offer.

comment by Randaly · 2010-08-10T16:57:58.627Z · LW(p) · GW(p)

I would be happy to comment as well.

(Though I'm almost certainly a far worse choice.)

Replies from: JanetK
comment by JanetK · 2010-08-11T09:05:45.905Z · LW(p) · GW(p)

Thank you, and if I ever do post, I will take you up on your offer.

comment by NancyLebovitz · 2010-08-10T10:37:48.839Z · LW(p) · GW(p)

I voted this up before reading it carefully. As is usual, admission of having made a mistake should get an upvote-- if I'd read to the end first, I'd have seen the undefined claim that you're using a more scientific approach.

Unfortunately, I don't seem to be able to cancel my upvote, but knocking the comment down to -1 seems too harsh.

Replies from: JanetK, Richard_Kennaway
comment by JanetK · 2010-08-10T13:55:55.258Z · LW(p) · GW(p)

The post in question was a plea to look at and follow the neuroscience of decision making. That was the point. Don't worry about the straw men – just follow the science. I am actually not that interested in freewill and want to get past it to something interesting. When I carefully define how I am using a word (like freewill, or like consciousness in the last post) I don't expect to be told that I cannot use the word that way. I was taken aback by the reaction, that is all. Here are a bunch of reasonable, rational, intelligent people that I should be able to converse with, and they appear to avoid being sensible about neuroscience. Too bad – I can still gain from following the discussions, but I cannot give anything to the group except the odd comment now and then. Don't worry about the upvote – I can avoid ever using it.

comment by Richard_Kennaway · 2010-08-10T10:42:15.988Z · LW(p) · GW(p)

Clicking the "Vote up" link again should remove the vote.

comment by thomblake · 2010-08-10T14:14:13.132Z · LW(p) · GW(p)

The problem here is that you're using "free will" in a weird way. While lots of people who haven't thought about the question think libertarian free will makes sense, and lots of religious philosophers think libertarian free will makes sense, it's definitely not a prevailing view amongst non-religious people who've thought about free will to any great extent. You're ignoring the philosophical literature (about two thousand years worth, in fact), the various posts made on Less Wrong about the subject, and the general consensus of professional philosophers (at least non-religious ones) (who may or may not be a relevant reference class).

Two straw men indeed.

It's as though you've made a post arguing that "Calcium" doesn't exist since obviously it refers to its linguistic roots in alchemy, and scientists should get right on finding out what Calcium really is, and you don't know why anyone thinks that's a silly suggestion.

I don't think anyone here thinks the neuroscience of decision-making is not a fruitful path of research, but this post did nothing of the sort. If you have interesting results to share from your work in that field, please do so - I'm sure there are several other readers who work in the same sort of field who would like to compare notes.

comment by XiXiDu · 2010-08-09T17:32:52.108Z · LW(p) · GW(p)

I haven't read it yet but "this impossible question is fully and completely dissolved on Less Wrong".

I do believe that free will is true, or rather that it is a useful piece of terminology, given my own definition.

‘Free will’ is often defined as want free FROM cause. But why shouldn’t ‘free will’ be defined as want free TO cause?

Any measure of 'free will' must be based on the effectiveness and feasibility of conscious volition as opposed to the strength of the environmental influence. We have to fathom the extent of active adaptation of the environment by a system as opposed to passive adaptation of the system by the environment. The specific effectiveness and order of transformation by which the self-defined system (you) shapes the outside environment in which it is embedded must trump the environmental influence on the defined system. What is essential is that the system has to be goal-oriented and able to differentiate itself within the environment in which it is embedded.

What I mean is very simple. If I could get what I want, I have had free will. In retrospect, the degree of freedom of want is measured by the extent to which I had to adapt my will to environmental circumstances as opposed to changing the environment to suit my goals. And basically this is what I mean by 'free will'. To extend this notion of free will, you can 'measure' the extent to which one changed one's will deliberately, that is consciously, i.e. from within (nonlinearly). By nonlinear here I mean a system whose output is not proportional to its input. This is opposed to the 'persuasion' of a child by an adult, or, conversely, to the shaping of one's will by unwanted, non-self-regulated influence of any kind.

(Edit Note: I'm not the usual highly educated LW reader. This might be a lot of garbage indeed. Ask me about it in a few years again.)

Replies from: JanetK
comment by JanetK · 2010-08-10T08:32:25.821Z · LW(p) · GW(p)

I am sorry. I honestly find it very hard to understand what you are trying to say and, more importantly, why. Honestly, my fault, but I don't get it.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-10T09:08:08.980Z · LW(p) · GW(p)

I'm saying that to talk about free will we first have to define what we mean by 'free will'. Further, I give a definition of what I mean, of how I define the term 'free will'. I define 'free will' as something universal that exists gradually, on different levels. I define 'free will' as a measure of goal realization. That is, the free will of a child < an adult < a superhuman artificial intelligence. Except that if you are jailed, you might have less free will than a kid.

I believe that our feeling of being free agents represents the extrapolated and retrospective perception of goal realization and not what is talked about in metaphysics, that our intentions are free from cause. It's rather that our ability to cause, to realize our intentions can be and is gradually perceived to be free.

comment by wedrifid · 2010-08-09T18:04:23.855Z · LW(p) · GW(p)

For my part I think any philosopher (or teacher of philosophy) that trains themselves or their students into considering the truthfulness of freewill deserves a spanking. I'm not sure what the official name for that position is.

Replies from: Jayson_Virissimo, XiXiDu
comment by Jayson_Virissimo · 2010-08-09T18:26:58.976Z · LW(p) · GW(p)

Metaphysicians call that view "libertarianism" (what a confusing name, huh?). Basically, libertarianism is the view that free will and determinism are incompatible, but we have free will, so materialism is false.

Replies from: Oligopsony, wedrifid
comment by Oligopsony · 2010-08-09T18:32:18.326Z · LW(p) · GW(p)

Not all libertarians reject materialism - there is the view (not mine; I'm a compatibilist) that indeterminism in physical laws is sufficient for libertarian free will.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2010-08-10T03:08:08.971Z · LW(p) · GW(p)

You are right. My last sentence should have read:

Basically, libertarianism is the view that free will and determinism are incompatible, but we have free will, so determinism is false.

comment by wedrifid · 2010-08-09T18:41:57.990Z · LW(p) · GW(p)

(I don't think we are talking about the same thing. My view is, approximately, "contemptuous compatibilism".)

comment by XiXiDu · 2010-08-09T18:59:57.664Z · LW(p) · GW(p)

Why do people always fall back on philosophy when talking about free will? It doesn't need to be a metaphysical concept. It is pretty much a human trait, an attribute of human psychology. We all know we have free will, period.

Determinism is true but thermostats can still control the temperature. And nobody denies that thermostats control the temperature.

— Steven Landsburg paraphrasing Robert Nozick in The Big Questions

This is not a bias; it's part of our subjective definition of being agents that are able to change their environment as it suits them.

Taking an outside view, I absolutely agree: there is no free will; no reasonable definition will fit those two words in succession. But from an inside view, it makes sense to talk about being free to choose.

Anyone who's not sure what I mean I recommend reading this post:

What can we make of someone who says that materialism implies meaninglessness? I can only conclude that if I took them to see Seurat’s painting “A Sunday Afternoon on the Island of La Grande Jatte," they would earnestly ask me what on earth the purpose of all the little dots was.

Replies from: wedrifid
comment by wedrifid · 2010-08-09T19:10:19.264Z · LW(p) · GW(p)

Are you being serious or sarcastic here? I'm confused.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-09T19:35:10.080Z · LW(p) · GW(p)

Now that was an unsettling reply.

I like to believe in a timeless universe. So I guess I'm not completely serious. But all this is quickly leaving the intention of this community. There's probably not much practical value to be found in such musings and beliefs besides a poetic appeal and the fun of thinking and dreaming about nonfactual possibilities.

I really have to think more and especially not publicly claim something when I'm too tired. I might consider a tattoo on the back of my hands: Think first!

What I rather wanted to say is, it makes sense to talk about being able to realize your goals. Choice doesn't exist; I contradicted myself there. I should quit now and for some time stop participating on LW. I have to continue with my studies. I was only drawn here by the deletion incident. Replies, and the fact that it is fun to argue, have made me babble too much in the past few days.

Back to being lurker. Thanks.

Replies from: wedrifid
comment by wedrifid · 2010-08-10T08:25:10.241Z · LW(p) · GW(p)

What I rather wanted to say is, it makes sense to talk about being able to realize your goals.

It certainly does.

comment by thomblake · 2010-08-09T17:37:41.397Z · LW(p) · GW(p)

What XiXiDu said - "free will" is assigned as a problem for aspiring rationalists to solve, and I really don't see a problem with trying to do so in a top-level post, so I voted up this post in hopes of seeing it out of the negatives.

I think the general view around here is vaguely compatibilist, but only in the sense of realizing that the free will question is asking the wrong question, and I'd rather not give away more than that if you haven't worked it out.

Replies from: JanetK
comment by JanetK · 2010-08-10T08:10:11.641Z · LW(p) · GW(p)

I agree that the question is the wrong question. And I assumed that my saying forget the straw men of determinism and freewill and get on with the real question of how we actually make decisions was fairly clear. My emotional reaction to compatibilism is that it is a bit of a cop-out. It attacks neither determinism nor freewill, and it does not ask the scientifically based question, which is the one that can in future be answered. But I certainly think it is an improvement over the old, old argument.

comment by wedrifid · 2010-08-09T14:31:12.702Z · LW(p) · GW(p)

I would like to see reference to "dissolving the question".

Replies from: JanetK
comment by JanetK · 2010-08-09T17:24:52.611Z · LW(p) · GW(p)

Thanks

comment by thomblake · 2010-08-09T14:53:02.159Z · LW(p) · GW(p)

To predict anything really sizable, like for instance, how the earth came to be as it is, or even how little-old-me became what I am, or even why I did a particular thing a moment ago, would take more resources and time than can be found in the life of our universe.

Surely if the universe is deterministic, "the resources and time [that] can be found in the life of the universe" provides an upper bound on what you need to predict anything. Since, after all, the universe is 'predicting' it just fine without exceeding those requirements.

Replies from: JanetK
comment by JanetK · 2010-08-09T17:30:42.036Z · LW(p) · GW(p)

Yes, you're right. I should have put 'close to the life of the universe'.

comment by Dagon · 2010-08-09T13:19:16.341Z · LW(p) · GW(p)

There's plenty of posts to be written on this topic, but this one needs a bit more work. Referencing the sequences, especially the Reductionism and Mind Projection Fallacy sections, and stating what parts you disagree with or are expanding upon would be a good idea.

More specific problems with the post:

  • I don't think "a robust consensus answer" is what I'm hoping for - I'd rather have actual truth (though I appreciate when truth and consensus converge, it can take a while).

  • I like the rock analogy a lot, but you don't go far enough. We remove the rock after it's fallen, and we take steps to prevent it (or others) falling there again. Why wouldn't we take the same approach to humans? Remove those that can't be altered not to offend.

  • Questions about individual vs group do not have "no good answer". "Shut up and multiply" is a pretty good answer to the vast majority of them (though I wish Eliezer had said "calculate" instead, as "multiply" sends my brain to species reproduction first before I remember the context). Recognizing the difference between the outcome you want and the signal you want to send is often necessary as part of the calculation of course.

  • We need to start thinking of ourselves as whole beings, one entity from head to toe: brain and body, past and future, from birth to death. Wow, no. The tone is a bit condescending, but that's fixable. Much worse, it's simply not right, or at least very hard to justify. It may well be that what I call "me" is a set of distinct and shifting subentities, currently implemented in brain and body, with partial-at-best continuity from well before birth to a future beyond what will likely be called death.

Replies from: JanetK
comment by JanetK · 2010-08-09T13:51:51.542Z · LW(p) · GW(p)

I read Reductionism and Mind Projection Fallacy some time ago. I liked them, and I don't think that what I am saying here disagrees with or expands on those pieces. I will read them again to see if I now feel differently about it. Perhaps I need to make it clearer, but the section on responsibility, morality and identity is not meant to say much about those issues, other than that there is little reason to think that our society is going to be damaged by what science may say in the future about decision making.

comment by KrisC · 2010-08-09T10:05:45.446Z · LW(p) · GW(p)

The illusion of free will is an artifact of the mind's incomplete knowledge of the brain. It is not practical for an organism to evolve a brain that is aware of its own functioning at the physical level of decision making. An accurate simulation of a mind need not rely on a brain at all. We hope.

While we can say that actions are a result of purely physical processes, it is necessary to create abstract models of other people's projected actions in order to influence them. In recent years we have developed electromagnetic methods of overriding volition, but that approach is surely less efficient than persuasion.

My point is that neurology and psychology are different disciplines, and while they do overlap they do not need to converge but instead to conspire.

As for the moral thread of the post, rationally applied behavior modification ought to take both into account. I am not sure what societal responses the post would suggest be changed, but I do believe that some moral truths are self-evident. {I am strongly opposed to destroying complexity without cause.}

I hope that within my lifetime mind will be more easily separate from brain through simulation and emulation. I suspect that new environments will be created in which action without consequence will be possible. I look forward to new means of communication which will muddle questions of identity and uniqueness (in a good way).

I would rather people look upon their bodies and brains as temporary vehicles for consciousness, and that they would be encouraged to find replacements as soon as possible. The inevitability of death is just a meme.

Replies from: JanetK
comment by JanetK · 2010-08-09T17:41:29.118Z · LW(p) · GW(p)

I assume that neurology and psychology will converge, or one of them will fall by the wayside. I don't see how bodies and brains can be temporary vehicles for consciousness. If you mean that the state of our brains can be transferred to a machine in some sort of readout, and that machine is capable of consciousness – then that machine becomes our brain/body replacement. A disembodied consciousness is something I cannot imagine.

Replies from: wedrifid, KrisC
comment by wedrifid · 2010-08-09T17:59:49.294Z · LW(p) · GW(p)

I assume that neurology and psychology will converge or one of them will be forgotten by the way side.

Could psychology not be found to be still useful when considering human behaviors at a different level of abstraction (or, indeed, with different forms of experimentation)?

Replies from: JanetK
comment by JanetK · 2010-08-10T07:18:42.989Z · LW(p) · GW(p)

Yes, of course, psychology and neurology could exist together using different levels of abstraction and different methods - like physics and chemistry. But if they disagree on fundamentals and cannot converge then I don't think they can stay that way for long.

comment by KrisC · 2010-08-09T18:12:35.164Z · LW(p) · GW(p)

Agreed concerning the need for a processing platform. Not so sure about the convergence of psych and neuro, for the same reason: if the same psych rules can apply to a consciousness regardless of platform, then neurology is not applicable in that case.

comment by sereboi · 2010-08-11T05:30:12.316Z · LW(p) · GW(p)

@orthonormal

You said: "I agree. But I think that there is actually some feature of the (deterministic) act of choosing which leads a person to falsely believe that their choice is nondeterministic, and that by analyzing this we learn something interesting and important about cognition."

Very true. So what do you make of reconciling the two? Do we castigate them both in hopes of finding something out that is hiding in the shadows? The crux of the matter is "belief", and in order to have a sound belief one should know as many facts about the subject as possible. I listened to a long discourse given by Dennett, who is an avid compatibilist; he presented an extremely weak argument with nothing to back up his claims. Now, when I read "The Illusion of Free Will" by Wegner, it's nothing but proof.

Now of course we can poke holes all day in theories derived from test studies. But from what else can we as humans deduce solid reasoning if we don't take what evidence is available to us? To me, discussing this topic is not about fascination; it's about getting at the truth. I might be crazy enough to think it's available.

Replies from: ata, orthonormal
comment by ata · 2010-08-11T05:32:44.236Z · LW(p) · GW(p)

Please post your comments as replies (click "Reply" on the comment you're responding to) instead of posting them as top-level comments to the post.

comment by orthonormal · 2010-08-12T16:20:14.190Z · LW(p) · GW(p)

Hey, I actually meant for the conversation to move to the post you were quoting earlier. Here's my reply to you.

comment by sereboi · 2010-08-11T05:18:54.991Z · LW(p) · GW(p)

@Thomblake: sorry about the message thing. I'm still getting used to how this site works.

You substantiate analogies with proof. Basically I'm saying that your analogies don't hold water; perhaps I'm using confusing vernacular.

Let me say one thing before moving on. I hate debating just to debate; for me, when I involve myself in a debate it is to gain more insight. So I am totally open to your point of view if it sheds some light on this subject. The bottom line is, if someone has a solid angle that I'm missing, then I welcome it.

OK, that being said, it sounds like you're actually mostly agreeing with me.

You do, however, trail off with more questions, like:

"If the things we perceive as 'choices' are 'not really choices', then what is really a choice? What do we mean by 'choice'?" The problem I have is that if you hold a firm position on compatibilism, then you should be able to explain it to a layman by using real proof.

My question to you is: how is proof that our "will" is not really controlled by us as ideal conscious agents irrelevant?

It's absolutely relevant.

"Free will" then becomes some untestable entity that is open to all kinds of conjecture and speculation. Reason and philosophy can only go so far when answering real-life questions. So I stack up the data the best I can and make an intelligent decision based on those facts and my own empirical life evidence that I have lived through, but I will stay out of personalizing the problem.

Please just show me a shred of evidence supporting the fact that we have real control over our subconscious minds in order to make choices freely.

Thanks.

Replies from: ata
comment by ata · 2010-08-11T05:23:57.887Z · LW(p) · GW(p)

Please just show me a shred of evidence supporting the fact that we have real control over our subconscious minds in order to make choices freely.

Under your definition of free will, what observations, if true, would be evidence for its existence? That is, what would free will (as you understand it) actually imply about empirical reality, and what would its absence imply?

Replies from: sereboi
comment by sereboi · 2010-08-11T05:40:52.411Z · LW(p) · GW(p)

Any tangible research showing that an agent can, with little effort, manipulate and control their subconscious mind.

The presence of such evidence would imply a host of things, starting with complete agent responsibility in all areas of life.

The absence of it would imply not only severed liability but also complete meaninglessness.

Most branches of existential philosophy solve meaninglessness by stating that one has control over one's choices and so creates meaning. If one is stripped of that control, then meaninglessness truly abounds.

Of course, that is unless one believes in God.

Replies from: wedrifid
comment by wedrifid · 2010-08-11T05:51:20.208Z · LW(p) · GW(p)

Any tangible research showing that an agent can, with little effort, manipulate and control their subconscious mind.

It's that "little effort" part that makes this an entirely different question. I don't use the term myself but "Free Will" is not always used to imply that things are easy.

Replies from: sereboi
comment by sereboi · 2010-08-11T06:17:26.241Z · LW(p) · GW(p)

The reason I said "little effort" is to clarify that one could possibly, with much concentration, have an effect on the subconscious. However, the kind of effect I'm concerned with is the everyday choices that happen in nanoseconds. I would welcome some data on "much effort" effects as well.

Replies from: wedrifid
comment by wedrifid · 2010-08-11T06:35:28.747Z · LW(p) · GW(p)

I understand what you are trying to do, and suspect I even approximately agree with you regarding predictions about just how relevant our conscious thought is to our decision making. I just note that this is a different question to the one you were arguing against.

I would welcome some data on "much effort" effects as well.

People sign themselves up for rehab. Occasionally it works.

comment by sereboi · 2010-08-10T22:32:07.945Z · LW(p) · GW(p)

OK, I finally get the etiquette thing of this system. :)

Sorry, I am a straight shooter. I will work on my wording; however, I still stand by my claims of conjecture vs. facts.

comment by WrongBot · 2010-08-09T17:05:06.837Z · LW(p) · GW(p)

I think of moral questions as those for which there is no good answer. All courses of action and of inaction are bad in a moral question, often because the possible answers pit the good of the individual against the good of the group, but also because they pit different groups and their interests against each other.

What do groups have to do with anything? They don't make decisions, people do. If a particular individual is a consequentialist, they should take whichever action is expected to produce the most utility. The truth or usefulness of determinism and free will might influence how we think about assigning moral blame or praise, but they don't tell us what we should do.

comment by Peterdjones · 2011-04-22T13:14:12.227Z · LW(p) · GW(p)

There are two false assumptions in the above: 1) that the universe runs on physical laws does not mean it necessarily runs on deterministic laws.

2) Following from that, since laws are not necessarily deterministic, libertarian free will, does not necessarily involve overriding them. Libertarian free will could be found within an indeterministic (but otherwise throughly physical and material) universe.

Replies from: CuSithBell, Manfred
comment by CuSithBell · 2011-04-22T14:02:02.430Z · LW(p) · GW(p)

My understanding is that the standard dilemma for libertarian free will is that your decisions seem to have to ground out in randomness or determinism, so I don't think indeterministic laws save the concept.

Replies from: Peterdjones
comment by Peterdjones · 2011-04-22T14:35:29.549Z · LW(p) · GW(p)

That is the standard objection, and I (unusually) think it can be resisted. To say the least, if you are going to claim to have "the" answer, you have to thoroughly consider all the alternatives.

Replies from: CuSithBell
comment by CuSithBell · 2011-04-22T15:23:06.481Z · LW(p) · GW(p)

I'd think that, given that's the standard objection, and it includes the case of indeterminism, you'd want to say more than just that indeterminism saves libertarian free will.

More to the point - would you mind giving a definition of what it is that you mean by 'libertarian free will'? I've never heard it coherently stated.

Replies from: Peterdjones
comment by Peterdjones · 2011-04-22T15:34:25.308Z · LW(p) · GW(p)

"Free Will is defined as "the power or ability to rationally choose and consciously perform actions, at least some of which are not brought about necessarily and inevitably by external circumstances".

Replies from: CuSithBell
comment by CuSithBell · 2011-04-22T17:01:16.910Z · LW(p) · GW(p)

Oh. Well, that's fine then. I usually think of libertarian free will as including internal circumstances as well.

comment by Manfred · 2011-04-22T13:57:15.899Z · LW(p) · GW(p)

The Copernican principle, "humans are not the center of the universe," does contradict 2, though, if you agree that ordinary randomness, e.g. measuring an electron, does not have free will. And the Copernican principle is just a restatement of Occam's razor when the competing explanations are "there is a universal physical law" and "there is a law that specifically targets humans."

Replies from: Peterdjones
comment by Peterdjones · 2011-04-22T14:38:44.700Z · LW(p) · GW(p)

I do not see what you mean by the Copernican Principle. Perhaps you imagine that someone has said only humans have FW. I have not.

A naturalistic libertarian can concede that indeterministic electrons don't have free will, just as a compatibilist can concede that deterministic electrons don't have FW. Neither thinks (in)determinism is a sufficient condition of FW.

Replies from: Manfred, Manfred
comment by Manfred · 2011-04-22T19:51:52.216Z · LW(p) · GW(p)

True, but I am saying that if randomness is not enough to have free will (does a nondeterministic Chinese room have free will?), then you would either need to replicate a compatibilist argument for how humans have free will, or have some extra laws that specify high-level concepts like free will (a.k.a. "magic").

Replies from: Peterdjones
comment by Peterdjones · 2011-04-22T20:09:00.180Z · LW(p) · GW(p)

No. I need an incompatibilist argument. I need randomness plus something to be necessary for FW, and I need the something extra to be naturalistic. And I have them, too.

A non-deterministic CR, or other AI, could have FW if programmed correctly. That's a consequence of naturalism.

Replies from: Manfred
comment by Manfred · 2011-04-22T21:53:23.331Z · LW(p) · GW(p)

Huh, I accidentally posted this. I thought I'd deleted it as true but irrelevant.

comment by Manfred · 2011-04-22T20:04:07.294Z · LW(p) · GW(p)

Ah, yeah, I was wrong.

comment by Lightwave · 2010-08-09T09:26:40.387Z · LW(p) · GW(p)

We cannot just predict what the outcomes of our decisions will be, we really, really have to go through the whole process of making them. We cannot even pretend that decisions are determined until after we have finish making them.

And what about an AI that can predict its own decisions (because it knows its source code)?

Also, are you a compatibilist?

Replies from: JanetK, Unknowns
comment by JanetK · 2010-08-09T13:16:45.206Z · LW(p) · GW(p)

I believe that a compatibilist can accept both freewill and determinism at the same time. I reject them both as not useful to understanding decisions. I think there is a difference between believing both A and B and believing neither A nor B. It seems to me unlikely that an AI could predict its own decisions by examining its source code without running the code, but I am not sure it is completely impossible just because I cannot see how it would be done. If it were possible, I would be extremely surprised if it was faster or easier than just running the code.

comment by Unknowns · 2010-08-09T09:31:42.545Z · LW(p) · GW(p)

As I've stated before, no AI can predict its own decisions in that sense (i.e. in detail, before it has made them.) Knowing its source code doesn't help; it has to run the code in order to know what result it gets.

Replies from: wedrifid, thomblake
comment by wedrifid · 2010-08-09T14:30:33.914Z · LW(p) · GW(p)

As I've stated before, no AI can predict its own decisions in that sense (i.e. in detail, before it has made them.)

I suggest that it can, but it is totally pointless for it to do so.

Knowing its source code doesn't help; it has to run the code in order to know what result it gets.

Things can be proved from source code without running it. This applies to any source code, including that of oneself. Again, it doesn't seem a particularly useful thing to do in most cases.
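
For a concrete toy version of "proving things from source code without running it", here is a minimal sketch in Python; the decide function and the property being proved are invented for illustration:

import ast

source = '''
def decide(underwater):
    if underwater:
        return "SELF_DESTRUCT"
    return opaque_stochastic_choice()
'''

# Inspect the syntax tree; nothing below ever executes decide().
tree = ast.parse(source)
branch = tree.body[0].body[0]   # the `if underwater:` statement
ret = branch.body[0]            # the lone statement inside that branch
assert isinstance(branch, ast.If)
assert isinstance(ret, ast.Return)
assert isinstance(ret.value, ast.Constant)
# Proved by inspection: whenever `underwater` is truthy, decide()
# returns the constant below.
print("decide() provably returns:", ret.value.value)

Note that opaque_stochastic_choice is never even defined, let alone run; the proof only touches the transparent branch, which is the point.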

Replies from: Emile
comment by Emile · 2010-08-09T14:53:22.901Z · LW(p) · GW(p)

I'm wondering why this got downvoted - it's true!

For example if the top-level decision function of an AI is:

def DecideWhatToDo(self, environment):
    # This branch is transparent: its result can be predicted by inspection alone.
    if environment.IsUnderWater():
        return actions.SELF_DESTRUCT
    else:
        # This branch is opaque: it must be run to know its result.
        return self.EmergentComplexStochasticDecisionFunction(environment)

... and the AI doesn't self-modify, then it can predict that it will decide to self-destruct if it falls in the water, just by analysing the code, without running it (also assuming, of course, that it is good enough at code analysis).

Of course, you can imagine AIs that can't predict any of their decisions, and as wedrifid says, in most non-trivial cases they most probably wouldn't be able to.

(This may be important, because having provable decisions in certain situations could be key to cooperation in prisoner's-dilemma-type situations.)

Replies from: Unknowns
comment by Unknowns · 2010-08-09T15:15:07.244Z · LW(p) · GW(p)

Of course that is predictable, but that code wouldn't exist in any intelligent program, or at least it isn't an intelligent action; predicting it is like predicting that I'll die if my brain is crushed.

Replies from: JoshuaZ, Emile, thomblake
comment by JoshuaZ · 2010-08-09T16:30:15.869Z · LW(p) · GW(p)

Unknowns, we've been over this issue before. You don't need to engage in perfect prediction in order to be able to usefully predict. Moreover, even if you can't predict everything you can still examine and improve specific modules. For example, if an AI has a module for factoring integers using a naive, brute-force factoring algorithm, it could examine that and decide to replace it with a quicker, more efficient module for factoring (that maybe used the number field sieve for example). It can do that even though it can't predict the precise behavior of the module without running it.
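
Something like this sketch, say (Python, with hypothetical names; Pollard's rho stands in for the number field sieve, which is far too long to inline):

import math
import random

def naive_factor(n):
    # Brute-force trial division: correct but slow for large n.
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return d
    return n

def pollard_rho_factor(n):
    # Faster drop-in replacement; assumes n is odd and composite.
    while True:
        x = y = random.randrange(2, n)
        c = random.randrange(1, n)
        d = 1
        while d == 1:
            x = (x * x + c) % n
            y = (y * y + c) % n
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:
            return d

# The self-improvement step: swap the module wholesale. The agent can
# verify that both functions return a nontrivial factor without being
# able to predict which factor the randomized replacement will return.
factor = naive_factor
factor = pollard_rho_factor
print(factor(10403))  # 101 * 103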

Replies from: Unknowns
comment by Unknowns · 2010-08-09T16:31:34.824Z · LW(p) · GW(p)

I certainly agree that an AI can predict some aspects of its behavior.

comment by Emile · 2010-08-09T15:54:28.410Z · LW(p) · GW(p)

That's also because this is a simplified example, merely intended to provide a counter-example to your original assertion.

As I've stated before, no AI can predict its own decisions in that sense (i.e. in detail, before it has made them.) Knowing its source code doesn't help; it has to run the code in order to know what result it gets.

Agreed, it isn't an intelligent action, but if you start saying intelligent agents can only take intelligent decisions, then you're playing No True Scotsman.

I can imagine plenty of situations where someone might want to design an agent that takes certain unintelligent decisions in certain circumstances, or an agent that self-modifies in that way. If an agent can not only make promises, but also formally prove by showing its own source code that those promises are binding and that it can't change them, then it may be at an advantage in negotiations and cooperation over an agent that can't do that.

So "stupid" decisions that can be predicted by reading one's own source code isn't a feature that I consider unlikely in the design-space of AIs.

Replies from: Unknowns
comment by Unknowns · 2010-08-09T16:05:03.077Z · LW(p) · GW(p)

I would agree with that. But I would just say that the AI would experience doing those things (for example keeping such promises) as we experience reflex actions, not as decisions.

comment by thomblake · 2010-08-09T15:56:20.594Z · LW(p) · GW(p)

but that code wouldn't exist in any intelligent program or at least it isn't an intelligent action

Why not?

predicting it is like predicting that I'll die if my brain is crushed.

In what way is it like that, and how is that relevant to the question?

Replies from: Unknowns
comment by Unknowns · 2010-08-09T15:59:00.474Z · LW(p) · GW(p)

It's like that precisely because it is easily predictable; as I said in another reply, an AI will experience its decisions as indeterminate, so anything it knows in advance in such a determinate way will not be understood as a decision, just as I don't decide to die if my brain is crushed, but I know that will happen. In the same way the AI will merely know that it will self-destruct if it is placed under water.

Replies from: thomblake
comment by thomblake · 2010-08-09T16:12:36.062Z · LW(p) · GW(p)

From this, it seems like your argument for why this will not appear in its decision algorithm is simply that you have a specific definition for "decision" that requires the AI to "understand it as a decision". I don't know why the AI has to experience its decisions as indeterminate (indeed, that seems like a flawed design if its decisions are actually determined!).

Rather, any code that leads from inputs to a decision should be called part of the AI's 'decision algorithm' regardless of how it 'feels'. I don't have a problem with an AI 'merely knowing' that it will make a certain decision. (And be careful: 'merely' is an imprecise weasel word.)

Replies from: Unknowns
comment by Unknowns · 2010-08-09T16:26:07.067Z · LW(p) · GW(p)

It isn't a flawed design because when you start running the program, it has to analyze the results of different possible actions. Yes, it is determined objectively, but it has to consider several options as possible actions nonetheless.

comment by thomblake · 2010-08-09T14:49:07.030Z · LW(p) · GW(p)

Knowing its source code doesn't help; it has to run the code in order to know what result it gets.

This is false for some algorithms, and so I imagine it would be false for the entirety of the AI's source code. For example (ANSI C):

int i;
for (i = 0; i < 5; i++) ;  /* empty loop body; just counts i up to 5 */

I know that i is equal to 5 after this code is executed, and I know that without executing the code in any sense.

Replies from: MatthewB, Unknowns
comment by MatthewB · 2010-08-09T15:14:16.189Z · LW(p) · GW(p)

Now, I am not certain about this, but we have to examine that code before we know its outcome.

While this isn't "Running" the code in the traditional sense of computation as we are familiar with it today, it does seem that the code is sort of run by our brains as a simulation as we scan it.

As a sort of meta-process, if you will...

I could be so wrong about that though... eh...

Also, that code is useless really, except maybe as a wait function... It doesn't really do anything. (Not sure why Unknowns gets voted up in the first post above, and down below...)

Also, leaping from some code to the Entirety of an AI's source code seems to be a rather large leap.

Replies from: thomblake
comment by thomblake · 2010-08-09T15:19:50.610Z · LW(p) · GW(p)

Also, leaping from some code to the Entirety of an AI's source code seems to be a rather large leap.

"some code" is part of "the entirety of an AI's source code" - if it doesn't need to execute some part of the code, then it doesn't need to execute the entirety of the code.

comment by Unknowns · 2010-08-09T14:51:44.830Z · LW(p) · GW(p)

That isn't an algorithm for making decisions.

Replies from: wedrifid, thomblake
comment by wedrifid · 2010-08-09T15:17:12.949Z · LW(p) · GW(p)

That isn't an algorithm for making decisions.

No, but note the text:

This is false for some algorithms, and so I imagine it would be false for the entirety of the AI's source code.

It is, incidentally, trivial to alter the code to an algorithm for making decisions, and also simple to make it an algorithm that can predict it's decision before making it.

unsigned long i;
unsigned long j;
do_self_analysis();  /* inspects this very source and predicts the result below */
for (i = 0; i < ULONG_MAX - 1; i++)
    for (j = 0; j < ULONG_MAX - 1; j++)
        ;  /* empty body: a very slow way of driving i up to ULONG_MAX - 1 */
if (i > 2) return ACTION_DEFECT;  /* always taken after the loop */
return ACTION_COOPERATE;  /* unreachable */

The do_self_analysis function (C calls them functions, not methods) can browse the entire source code of the AI, determine that the above piece of code is the algorithm for making the relevant decision, prove that do_self_analysis doesn't change anything or perform any output and does return in finite time, and then go on to predict that the AI will behave like a really inefficient defection rock. Quite a while later it will actually make the decision to defect.

All rather pointless but the concept is proved.

Replies from: Unknowns
comment by Unknowns · 2010-08-09T15:26:37.031Z · LW(p) · GW(p)

When the AI runs the code for predicting it's action, it will have the subjective experience of making the decision. Later "it will actually make the decision to defect" only in the sense that the external result will come at that time. If you ask it when it made it's decision, it will point to the time when it analyzed the code.

Replies from: wedrifid
comment by wedrifid · 2010-08-09T15:31:34.899Z · LW(p) · GW(p)

You are mistaken. I consider the explanations given thus far by myself and others sufficient. (No disrespect intended beyond that implicit in the fact of disagreement itself and I did not vote on the parent.)

Replies from: Unknowns
comment by Unknowns · 2010-08-09T15:33:29.017Z · LW(p) · GW(p)

The explanations given say nothing about the AI's subjective experience, so they can't be sufficient to refute my claim about that.

Replies from: wedrifid
comment by wedrifid · 2010-08-09T15:55:52.034Z · LW(p) · GW(p)

Consider my reply to be to the claim:

If you ask it when it made it's decision, it will point to the time when it analyzed the code.

If you ask the AI when it made its decision it will either point to the time after the analysis or it will be wrong.

I avoided commenting on the 'subjective experience' side of things because I thought it was embodying a whole different kind of confusion. It assumes that the AI executes some kind of 'subjective experience' reasoning that is similar to that of humans (or some subset thereof). This quirk relies on lacking any strong boundaries between thought processes. People usually can't predict their decisions without making them. For both the general case and the specific case of the code I gave, a correctly implemented module that could be given the label 'subjective experience' would see the difference between prediction and analysis.

I upvoted the parent for the use of it's. I usually force myself to write its in that context but cringe while doing so. The syntax of the English language is annoying.

Replies from: thomblake, Unknowns
comment by thomblake · 2010-08-09T16:51:50.518Z · LW(p) · GW(p)

I upvoted the parent for the use of it's. I usually force myself to write its in that context but cringe while doing so. The syntax of the English language is annoying.

Really? Do you also cringe when using theirs, yours, ours, mine, and thine?

Replies from: wedrifid
comment by wedrifid · 2010-08-09T17:16:05.008Z · LW(p) · GW(p)

Mine and thine? They don't belong in the category. The flaw isn't that all words about possession should have an apostrophe. The awkwardness is that the pattern of adding the "s" to the end to indicate ownership is the same from "Fred's" to "its" but arbitrarily not punctuated in the same way. The (somewhat obsolete) "ine" is a distinct mechanism of creating a possessive pronoun which, while adding complexity, at least doesn't add inconsistency.

As for "theirs, yours and ours", they prompt cringes in decreasing order of strength (in fact, it may not be a coincidence that you asked in that order). Prepend "hers" to the list and append "his". "Hers" and "theirs" feel more cringe-worthy, as best as I can judge, because they are closer in usage to "Fred's" while "ours" is at least a step or two away. "His" is a special case in as much as it is a whole different word. It isn't a different mechanism like "thine" or "thy" but it isn't "hes" either. I have never accidentally typed "hi's".

Replies from: thomblake
comment by thomblake · 2010-08-09T17:19:32.007Z · LW(p) · GW(p)

You're just reading the wrong pattern. There are simple, consistent rules:

  1. When making a noun possessive, use the appropriate possessive form with an apostrophe (EDIT: this originally read "add 's")
  2. When making a pronoun possessive, use the appropriate possessive pronoun (none of which have an apostrophe)

EDIT: Leaving out " Jesus' " for the moment...

Replies from: wedrifid, NancyLebovitz
comment by wedrifid · 2010-08-09T17:51:09.260Z · LW(p) · GW(p)

You're just reading the wrong pattern.

No, I'm not reading the wrong pattern. I'm criticising the pattern in terms of the objective and emotional-subjective criteria that I use for evaluating elements of languages and communication patterns in general. I am aware of the rules in question and more than capable of implementing them, along with the hundreds of other rules that go into making our language.

The undesirable aspect of this part of the language is this: It is not even remotely coincidental that we add the "ss" sound to the end of a noun to make it possessive and that most modern possessive pronouns are just the pronoun with a "ss" sound at the end. Nevertheless, the rule is "use the appropriate possessive pronoun"... that's a bleeding lookup table! A lookup table for something that is nearly always an algorithmic modification is not something I like in a language design. More importantly, when it comes to the spoken word the rule for making *nouns possessive is "almost always add 'ss'". 'Always' is better than 'almost always' (but too much to ask). Given 'almost always', the same kind of rule for converting them all to written form would be far superior.

According to subjectively-objective criteria, this feature of English sucks. If nothing else it would be fair to say that my 'subjective' is at least not entirely arbitrary, whether or not you share the same values with respect to language.

Replies from: thomblake
comment by thomblake · 2010-08-09T18:08:15.581Z · LW(p) · GW(p)

Yes, this is definitely a difference in how we perceive the language. I don't see any inherent problem with a lookup table in the language, given that most of the language is already lookup tables in the same sense (what distinguishes 'couch' from 'chair', for instance). And it would not occur to me to have a rule for "*nouns" rather than the actual separate rules for nouns and pronouns. Note also that pronouns have possessive adjective and possessive pronoun forms, while nouns do not. They're an entirely different sort of animal.

So I would not think to write "It's brand is whichever brand is it's" instead of "its brand is whichever brand is its" any more than I would think to write "me's brand is whichever brand is me's" (or whatever) instead of "my brand is whichever brand is mine".

Replies from: wedrifid
comment by wedrifid · 2010-08-09T18:45:23.019Z · LW(p) · GW(p)

Yes, this is definitely a difference in how we perceive the language.

I suspect the difference extends down to the nature of our thought processes. Let me see... using Myers-Briggs terminology and from just this conversation I'm going to guess ?STJ.

Replies from: thomblake
comment by thomblake · 2010-08-09T19:57:51.992Z · LW(p) · GW(p)

I tend to test as INTP/INTJ depending, I think, on whether I've been doing ethics lately. But then, I'm pretty sure it's been shown that inasmuch as that model has any predictive power, it needs to be evaluated in context... so who knows about today.

comment by NancyLebovitz · 2010-08-09T17:21:26.288Z · LW(p) · GW(p)

There's one more rule-- if the noun you're making possessive ends with an s (this applies to both singular and plural nouns), just add an apostrophe.

Replies from: thomblake
comment by thomblake · 2010-08-09T17:22:47.696Z · LW(p) · GW(p)

That's not exactly true, and I didn't think it had terribly much bearing on my point on account of we're talking about pronouns, but I'll amend the parent.

Replies from: dclayh
comment by dclayh · 2010-08-09T20:12:15.885Z · LW(p) · GW(p)

That's not exactly true

Indeed, and while we're on the subject of idiolects: my preference is for the spelling to follow the pronunciation. Hence either "Charles's tie" or "Charles' tie" is correct, depending on how you want it to be pronounced (in this case I usually prefer the latter option, but the meter of the sentence may sometimes make the other a better choice).

comment by Unknowns · 2010-08-09T16:02:50.307Z · LW(p) · GW(p)

"If you ask the AI when it made its decision it will either point to the time after the analysis or it will be wrong."

I use "decision" precisely to refer the experience that we have when we make a decision, and this experience has no mathematical definition. So you may believe yourself right about this, but you don't have (and can't have) any mathematical proof of it.

(I corrected this comment so that it says "mathematical proof" instead of proof in general.)

Replies from: Emile, thomblake, wedrifid, wedrifid
comment by Emile · 2010-08-09T16:14:33.674Z · LW(p) · GW(p)

I think most people on LessWrong are using "decision" in the sense used in Decision Theory.

Making a claim, and then, when given counter-arguments, claiming that one was using an exotic definition seems close to logical rudeness to me.

Replies from: wedrifid, Unknowns
comment by wedrifid · 2010-08-09T16:51:11.812Z · LW(p) · GW(p)

Making a claim, and then, when given counter-arguments, claiming that one was using an exotic definition seems close to logical rudeness to me.

It also does his initial position a disservice. Rereading the original claim with the professed intended meaning changes it from "not quite technically true" to, basically, nonsense (at least insofar as it claims to pertain to AIs).

comment by Unknowns · 2010-08-09T16:22:00.620Z · LW(p) · GW(p)

I don't think my definition is either exotic or inconsistent with the sense used in decision theory.

Replies from: wedrifid
comment by wedrifid · 2010-08-09T16:53:52.418Z · LW(p) · GW(p)

I don't think my definition is ... inconsistent with the sense used in decision theory.

You defined decision as a mathematically undefinable experience and suggested that it cannot be subject to proofs. That isn't even remotely compatible with the sense used in decision theory.

Replies from: Unknowns
comment by Unknowns · 2010-08-09T16:56:06.177Z · LW(p) · GW(p)

It is compatible with it as an addition to it; the mathematics of decision theory does not have decisions happening at particular moments in time, but it is consistent with decision theory to recognize that in real life, decisions do happen at particular moments.

comment by thomblake · 2010-08-09T16:15:21.660Z · LW(p) · GW(p)

So you may believe yourself right about this, but you don't have (and can't have) any proof of it.

If you believe that we can't have any proof of it, then you're wasting our time with arguments.

Replies from: Unknowns
comment by Unknowns · 2010-08-09T16:20:36.497Z · LW(p) · GW(p)

You might have a proof of it, but not a mathematical proof.

Also note that your comment that I would be "wasting our time" implies that you think that you couldn't be wrong.

comment by wedrifid · 2010-08-09T16:38:16.604Z · LW(p) · GW(p)

How many legs does an animal have if I call a tail a leg and believe all animals are quadrupeds?

comment by wedrifid · 2010-08-09T16:23:43.759Z · LW(p) · GW(p)

How many legs does a dog have if I call a tail a leg?

comment by thomblake · 2010-08-09T15:18:50.066Z · LW(p) · GW(p)

No, but surely some chunks of similarly-transparent code would appear in an algorithm for making decisions. And since I can read that code and know what it outputs without executing it, surely a superintelligence could read more complex code and know what it outputs without executing it. So it is patently false that in principle the AI will not be able to know the output of the algorithm without executing it.

Replies from: Unknowns
comment by Unknowns · 2010-08-09T15:27:51.955Z · LW(p) · GW(p)

Any chunk of transparent code won't be the code for making an intelligent decision. And the decision algorithm as a whole won't be transparent to the same intelligence, but perhaps only to something still more intelligent.

Replies from: thomblake
comment by thomblake · 2010-08-09T15:41:40.850Z · LW(p) · GW(p)

Any chunk of transparent code won't be the code for making an intelligent decision.

Do you have a proof of this statement? If so, I will accept that it is not in principle possible for an AI to predict what its decision algorithm will return without executing it.

Of course, logical proof isn't entirely necessary when you're dealing with Bayesians, so I'd also like to see any evidence that you have that favors this statement, even if it doesn't add up to a proof.

Replies from: Unknowns
comment by Unknowns · 2010-08-09T15:54:53.420Z · LW(p) · GW(p)

It's not possible to prove the statement because we have no mathematical definition of intelligence.

Eliezer claims that it is possible to create a superintelligent AI which is not conscious. I disagree with this because it is basically saying that zombies are possible. True, he would say that he only believes that human zombies are impossible, not that zombie intelligences in general are impossible. But in that case he has no idea whatsoever what consciousness corresponds to in the physical world, and in fact has no reason not to accept dualism.

My position is more consistent: all zombies are impossible, and any intelligent being will be conscious. So it will also have the subjective experience of making decisions. But it is essential to this experience that you don't know what you're going to do before you do it; when you experience knowing what you're going to do, you experience deciding to do it.

Therefore any AI that runs code capable of predicting its decisions, will at that very time subjectively experience making those decisions. And on the other hand, given that a block of code will not cause it to feel the sensation of deciding, that block of code must be incapable of predicting its decision algorithm.

You may still disagree, but please note that this is entirely consistent with everything you and wedrifid have argued, so his claim that I have been refuted is invalid.

Replies from: Randaly, LucasSloan, torekp, thomblake
comment by Randaly · 2010-08-09T16:52:35.546Z · LW(p) · GW(p)

As I recall, Eliezer's definition of consciousness is borrowed from GEB: it's when the mind examines itself, essentially. That has very real physical consequences, so the idea of non-conscious AGI doesn't support the idea of zombies, which require consciousness to have no physical effects.

Replies from: Unknowns
comment by Unknowns · 2010-08-09T16:57:54.576Z · LW(p) · GW(p)

Any AGI would be able to examine itself, so if that is the definition of consciousness, every intelligence would be conscious. But Eliezer denies the latter, so he also implicitly denies that definition of consciousness.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-08-09T17:02:20.116Z · LW(p) · GW(p)

Any AGI would be able to examine itself, so if that is the definition of consciousness, every intelligence would be conscious. But Eliezer denies the latter, so he also implicitly denies that definition of consciousness.

I'm not sure I am parsing correctly what you've written. It may rest with your use of the word "intelligence": how are you defining that term?

Replies from: Unknowns
comment by Unknowns · 2010-08-09T17:03:31.046Z · LW(p) · GW(p)

You could replace it with "AI." Any AI can examine itself, so any AI will be conscious, if consciousness is or results from examining itself. I agree with this, but Eliezer does not.

comment by LucasSloan · 2010-08-10T08:49:16.433Z · LW(p) · GW(p)

we have no mathematical definition of intelligence.

Yes we do: the ability to apply optimization pressure in a wide variety of environments, the Platonic ideal of which is AIXI.
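
For reference, and recalling Hutter's definition from memory (so treat the exact notation as an assumption rather than gospel), AIXI's action choice at cycle $k$ with horizon $m$ is written roughly as

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} (r_k + \cdots + r_m) \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

i.e. expected total future reward, averaged over every environment program $q$ consistent with the history on a universal machine $U$, weighted by simplicity $2^{-\ell(q)}$.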

comment by torekp · 2010-08-10T01:43:46.311Z · LW(p) · GW(p)

Eliezer claims that it is possible to create a superintelligent AI which is not conscious.

Can you please provide a link?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-10T03:04:25.595Z · LW(p) · GW(p)

http://lesswrong.com/lw/x5/nonsentient_optimizers/

Replies from: torekp
comment by torekp · 2010-08-22T15:35:56.549Z · LW(p) · GW(p)

Thank you. I agree with Eliezer for reasons touched on in my comments to simplicio's Consciousness of simulations & uploads thread.

comment by thomblake · 2010-08-09T16:07:24.853Z · LW(p) · GW(p)

My position is more consistent: all zombies are impossible, and any intelligent being will be conscious. So it will also have the subjective experience of making decisions. But it is essential to this experience that you don't know what you're going to do before you do it; when you experience knowing what you're going to do, you experience deciding to do it.

Therefore any AI that runs code capable of predicting its decisions, will at that very time subjectively experience making those decisions. And on the other hand, given that a block of code will not cause it to feel the sensation of deciding, that block of code must be incapable of predicting its decision algorithm.

I don't have any problem granting that "any intelligent being will be conscious", nor that "It will have the subjective experience of making decisions", though that might just be because I don't have a formal specification of either of those - we might still be talking past each other there.

But it is essential to this experience that you don't know what you're going to do before you do it

I don't grant this. Can you elaborate?

when you experience knowing what you're going to do, you experience deciding to do it.

I'm not sure that's true, or in what sense it's true. I know that if someone offered me a million dollars for my shoes, I would happily sell them my shoes. Coming to that realization didn't feel to me like the subjective feeling of deciding to sell something to someone at the time, as compared to my recollection of past transactions.

Therefore any AI that runs code capable of predicting its decisions, will at that very time subjectively experience making those decisions.

Okay, that follows from the previous claim.

And on the other hand, given that a block of code will not cause it to feel the sensation of deciding, that block of code must be incapable of predicting its decision algorithm.

If I were moved to accept your previous claim, I would now be skeptical of the claim that "a block of code will not cause it to feel the sensation of deciding". Especially since we've already shown that some blocks of code would be capable of predicting some decision algorithms.

that block of code must be incapable of predicting its decision algorithm.

This follows, but I draw the inference in the opposite direction, as noted above.

Replies from: Unknowns
comment by Unknowns · 2010-08-09T16:19:59.021Z · LW(p) · GW(p)

I would distinguish between "choosing" and "deciding". When we say "I have some decisions to make," we also mean to say that we don't know yet what we're going to do.

On the other hand, it is sometimes possible for you to have several options open to you, and you already know which one you will "choose". Your example of the shoes and the million dollars is one such case; you could choose not to take the million dollars, but you would not, and you know this in advance.

Given this distinction, if you have a decision to make, as soon as you know what you will or would do, you will experience making a decision. For example, presumably there is some amount of money ($5? $20? $50? $100? $300?) that could be offered for your shoes such that you are unclear whether you should take the offer. As soon as you know what you would do, you will feel yourself "deciding" that "if I was offered this amount, I would take it." It isn't a decision to do something concretely, but it is still a decision.

comment by sereboi · 2010-08-10T19:01:03.407Z · LW(p) · GW(p)

Talk about creating a "strawman"!? This has got to be one of the worst articles ever written on the subject of determinism vs. free will. You postulate conjecture more freely than anyone I have ever seen. There is absolutely nothing here to back up your claims; sincerely, is this a joke?

You want some scientific proof for determinism? Start with a book called "The Illusion Of Conscious Will" by Wegner. It's a 400-page book that is cover-to-cover scientific study of the fact that we don't really act freely. Test studies, in labs!

You are a compatibilist.

And even though I agree the verdict is still out on hard determinism, compatibilism along with free will is just straight-up ludicrous. There is no real proof of either, and whenever they are juxtaposed, determinism wins over compatibilism every time by a landslide. People who are compatibilists are too smart to concede to having free will alone, but too scared to accept the hard truth that we are not truly in control. They want it both ways and will invent all kinds of ridiculous theories to support their belief.

Replies from: thomblake, orthonormal
comment by thomblake · 2010-08-10T19:32:33.990Z · LW(p) · GW(p)

This has got to be one of the worst articles ever written on the subject of determinism vs. free will.

Needless hyperbole. A 10-second Google search turned up worse, so I can't imagine you had any basis for this claim. Example. Saying false things to be insulting is not well-liked around here.

You want some scientific proof for determinism? Start with a book called "The Illusion Of Conscious Will" by Wegner. It's a 400-page book that is cover-to-cover scientific study of the fact that we don't really act freely. Test studies, in labs!

Sadly, experimental studies are not likely the right place to answer this question. Once you have a clear enough idea of what you're talking about, the question goes away. We're "really in control" of our actions at least as much as the thermostat is "really in control" of the temperature, and if you don't think that counts, then you're using "control" in a very alien sense.

Perhaps try starting at the wiki entry on free will
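
To unpack the thermostat sense of "control", here is a minimal deterministic feedback loop (a Python sketch with made-up constants):

def thermostat_step(temp, setpoint=20.0):
    # The entire "decision": heater on iff the room is below the setpoint.
    heater_on = temp < setpoint
    return temp + (0.5 if heater_on else -0.3)

temp = 15.0
for _ in range(40):
    temp = thermostat_step(temp)
print(round(temp, 1))  # settles into a narrow band around 20.0

Every step is fully determined, yet the loop controls the temperature in the counterfactual sense that matters: if the room were colder, the heater would run.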

comment by orthonormal · 2010-08-10T19:09:04.949Z · LW(p) · GW(p)

Sereboi, welcome to Less Wrong! Be sure and hit the welcome thread soon.

By the way, I notice you're getting downvotes on the comment (not by me); it's probably the tone rather than the content. Like any community, we have our norms of etiquette, and they're usually signaled by voting.

More substantively, I think you're answering the question without properly dissolving it, which is necessary in the case of something like free will that genuinely confuses many intelligent people. (I'm not supporting the current post against this comment, just pointing out that your comment by itself won't dispel people's confusion over free will.)

Replies from: sereboi, sereboi
comment by sereboi · 2010-08-10T21:57:52.790Z · LW(p) · GW(p)

Well, this is a lame system if I have ever seen one; all it takes is a few people to rally and, boom, you're out. Sounds like intellectual snobbery to me.

No matter; nothing will dispel people's beliefs about whatever. I simply stated the case that this blog/article is nothing but conjecture, with no hard evidence to back up what is being said. If you're going to form an intelligent opinion about something, shouldn't you have some evidence to back it up?

I stated my position and gave information on where to get substantiated evidence on the subject, and I'm voted out?

Ludicrous.

comment by sereboi · 2010-08-10T22:12:29.395Z · LW(p) · GW(p)

This sounds like intellectual snobbery to me.

No matter; nothing will dispel people's beliefs. I simply stated the case that this blog/article is nothing but conjecture, with no hard evidence to back up what is being said. If you're going to form an intelligent opinion about something, shouldn't you have some evidence to back it up?

I stated my position and gave information on where to get substantiated evidence on the subject, and I'm voted out?

As for the article on "dissolving the problem" rather than trying to answer it, this is the very pompous cop-out. By calling something meaningless and defusing it, you can do that with the very nature of philosophy, i.e., "Reason and rational thought are subjective and meaningless, and I challenge anyone to truly define them."

Do you see how absurd that is?

I agree that in a complex debate like this it is ignorant to claim that you have the ultimate truth, and I by no means do. In fact I said "the verdict is still out." But holding a position by not holding a position just seems like a trick. If you really want to get to the bottom of something, you dig up facts, not conjecture.

I'm sorry, I won't do a pompous intellectual two-step with everyone; when discussing matters like this I prefer to get down to business, OPEN MINDED, but discussing facts if there are any.

Replies from: orthonormal
comment by orthonormal · 2010-08-10T22:30:36.502Z · LW(p) · GW(p)

Calm down! I'm not saying what you think I'm saying.

First off, I suggested you go and introduce yourself on the welcome thread in part because that's usually good for a few upvotes, and that avoids the annoying feature where people with negative karma can't comment more than once every 10 minutes or so. I think there should be a buffer, because getting a comment downvoted isn't such a rare or awful thing on LW, but in lieu of that it's worth it to make an effort to get some karma at first.

Again, the downvotes are more about style than substance. We don't run our arguments here the way they happen elsewhere on the Internet, and because of that we have fewer flamewars and more real arguments. If you want to discuss things with us, you might have to adopt a different style than usual. I know that's an asshole thing of me to say, but that's how it is.

Finally, I'm not at all arguing against determinism, and I'm not defending the current post. Determinism holds without exception, and the idea of free will as most people think of it is incoherent. However, just stopping there isn't actually sufficient: it remains to ask why we feel that we have free will— are those feelings an illusion, and if so, why do we have them in the first place, or are they a reflection of something that actually goes on within the deterministic mind, and if so, why do they feel to us like something incompatible with determinism?

Those questions are a lot more interesting, and don't just go away once one realizes that the universe is deterministic in nature.

Replies from: sereboi
comment by sereboi · 2010-08-10T23:02:29.930Z · LW(p) · GW(p)

Thanks for the information. Honestly, I got to this site because I get e-mail alerts from Google for anything about determinism. So when I read the article I thought it was some regular commentator; I had no idea that it was written by someone in a smaller community. That is why I was so harsh in my opening line...

Well, now that I have a better understanding of what this site is about, if I make any more comments I will word them a bit differently. Thanks again.

Replies from: prase
comment by prase · 2011-04-22T16:29:52.888Z · LW(p) · GW(p)

e-mail alerts from Google for anything about determinism

Now I am curious. First, I didn't know that it was possible to be alerted this way, but mainly: why have you adopted this feature? There is surely a lot of determinism-themed crap around; you must get tons of e-mail every day. Or am I mistaken about how it works?

Replies from: kpreid
comment by kpreid · 2011-04-22T19:32:18.244Z · LW(p) · GW(p)

First: Google Alerts.