Thou Art Physics
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-06-06T06:37:01.000Z · LW · GW · Legacy · 88 comments
Three months ago [LW · GW]—jeebers, has it really been that long?—I posed the following homework assignment: Do a stack trace of the human cognitive algorithms that produce debates about “free will.” Note that this task is strongly distinguished from arguing that free will does or does not exist.
Now, as expected, people are asking, “If the future is determined, how can our choices control it?” The wise reader can guess that it all adds up to normality [LW · GW]; but this leaves the question of how.
People hear: “The universe runs like clockwork; physics is deterministic; the future is fixed.” And their minds form a causal network that looks like this:
Here we see the causes “Me” and “Physics,” competing to determine the state of the “Future” effect. If the “Future” is fully determined by “Physics,” then obviously there is no room for it to be affected by “Me.”
This causal network is not an explicit philosophical belief. It’s implicit—a background representation of the brain, controlling which philosophical arguments seem “reasonable.” It just seems like the way things are.
Every now and then, another neuroscience press release appears, claiming that, because researchers used an fMRI to spot the brain doing something-or-other during a decision process, it’s not you who chooses, it’s your brain.
Likewise that old chestnut, “Reductionism undermines rationality itself. Because then, every time you said something, it wouldn’t be the result of reasoning about the evidence—it would be merely quarks bopping around.”
Of course the actual diagram should be:
Or better yet:
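The post's diagrams were images that do not survive in this text. As a rough reconstruction from the surrounding prose (my sketch, not the original figures), the three causal networks can be written down as parent lists: the naive picture with "Me" and "Physics" as rival causes of "Future", the actual picture where physics determines me and I determine the future, and the better picture where "Me" is simply part of physics.

```python
# Illustrative reconstruction of the causal networks described in the text,
# encoded as {node: list_of_direct_causes}. Not from the original post.

# The mistaken picture: "Me" and "Physics" competing as separate causes.
naive = {
    "Future": ["Me", "Physics"],
    "Me": [],
    "Physics": [],
}

# The actual picture: physics determines me, and I determine the future.
actual = {
    "Future": ["Me"],
    "Me": ["Physics"],
    "Physics": [],
}

# Better yet: "Me" is not a separate node at all, but part of physics.
better = {
    "Future": ["Physics (which includes Me)"],
    "Physics (which includes Me)": [],
}

def causes_of(graph, node):
    """Return every ancestor (direct or indirect cause) of `node`."""
    seen = set()
    stack = list(graph.get(node, []))
    while stack:
        parent = stack.pop()
        if parent not in seen:
            seen.add(parent)
            stack.extend(graph.get(parent, []))
    return seen
```

In the naive graph, "Me" and "Physics" are independent root causes fighting over the future; in the actual graph, "Physics" is still an ancestor of "Future", but only by way of "Me".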
Why is this not obvious? Because there are many levels of organization [LW · GW] that separate our models of our thoughts—our emotions, our beliefs, our agonizing indecisions, and our final choices—from our models of electrons and quarks.
We can intuitively visualize that a hand is made of fingers (and thumb and palm). To ask whether it’s really our hand that picks something up, or merely our fingers, thumb, and palm, is transparently a wrong question [LW · GW].
But the gap between physics and cognition [LW · GW] cannot be crossed by direct visualization. No one can visualize atoms making up a person, the way they can see fingers making up a hand.
And so it requires constant vigilance to maintain your perception of yourself as an entity within physics.
This vigilance is one of the great keys to philosophy, like the Mind Projection Fallacy [LW · GW]. You will recall that it is this point which I nominated [LW · GW] as having tripped up the quantum physicists who failed to imagine macroscopic decoherence; they did not think to apply the laws to themselves.
Beliefs, desires, emotions, morals, goals, imaginations, anticipations, sensory perceptions, fleeting wishes, ideals, temptations… You might call this the “surface layer” of the mind, the parts-of-self that people can see even without science. If I say, “It is not you who determines the future, it is your desires, plans, and actions that determine the future,” you can readily see the part-whole relations. It is immediately visible, like fingers making up a hand. There are other part-whole relations all the way down to physics, but they are not immediately visible.
“Compatibilism” is the philosophical position that “free will” can be intuitively and satisfyingly defined in such a way as to be compatible with deterministic physics. “Incompatibilism” is the position that free will and determinism are incompatible.
My position might perhaps be called “Requiredism.” When agency, choice, control, and moral responsibility are cashed out in a sensible way, they require determinism—at least some patches of determinism within the universe. If you choose, and plan, and act, and bring some future into being, in accordance with your desire, then all this requires a lawful sort of reality; you cannot do it amid utter chaos. There must be order over at least those parts of reality that are being controlled by you. You are within physics, and so you/physics have determined the future. If it were not determined by physics, it could not be determined by you.
Or perhaps I should say, “If the future were not determined by reality, it could not be determined by you,” or “If the future were not determined by something, it could not be determined by you.” You don’t need neuroscience or physics to push naive definitions of free will into incoherence. If the mind were not embodied in the brain, it would be embodied in something else; there would be some real thing that was a mind. If the future were not determined by physics, it would be determined by something, some law, some order, some grand reality that included you within it.
But if the laws of physics control us, then how can we be said to control ourselves?
Turn it around: If the laws of physics did not control us, how could we possibly control ourselves?
How could thoughts judge other thoughts, how could emotions conflict with each other, how could one course of action appear best, how could we pass from uncertainty to certainty about our own plans, in the midst of utter chaos?
If we were not in reality, where could we be?
The future is determined by physics. What kind of physics? The kind of physics that includes the actions of human beings.
People’s choices are determined by physics. What kind of physics? The kind of physics that includes weighing decisions, considering possible outcomes, judging them, being tempted, following morals, rationalizing transgressions, trying to do better…
There is no point where a quark swoops in from Pluto and overrides all this.
The thoughts of your decision process are all real, they are all something. But a thought is too big and complicated to be an atom. So thoughts are made of smaller things [LW · GW], and our name for the stuff that stuff is made of is “physics.”
Physics underlies our decisions and includes our decisions. It does not explain them away [LW · GW].
Remember, physics adds up to normality [LW · GW]; it’s your cognitive algorithms that generate confusion [LW · GW].
88 comments
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by Caledonian2 · 2008-06-06T06:51:58.000Z · LW(p) · GW(p)
I'm not going to give away the whole answer in today's post,
When will you post the cognitive algorithms that cause you to believe that you know the whole answer?
comment by Marshall_Bolton · 2008-06-06T08:19:15.000Z · LW(p) · GW(p)
Caledonian: What about the Principle of Charity - everybody is necessarily mostly right. This means, of course, that we also are sometimes wrong.
comment by a._y._mous · 2008-06-06T09:10:23.000Z · LW(p) · GW(p)
Somehow, I get this nagging feeling that you use 'wrong question' as a, in your own words, 'stop word'. This series is leading up to 'can't say' and 'no answer' for any and all questions that you have explicitly started out to answer.
comment by Shane_Legg · 2008-06-06T11:10:10.000Z · LW(p) · GW(p)
@ a. y. mous
I think the whole "wrong question" business is easier to understand in a situation that is already well understood.
Ok, so we live in a flat world. What supports the world? A turtle! Ok, but now here's my question:
What supports the turtle?
The way to "answer" this question is not to try to answer it directly, e.g. another turtle. And it's not that the answer is unknowable or beyond human understanding. With a better understanding of reality the question itself goes away.
comment by spindizzy2 · 2008-06-06T11:30:02.000Z · LW(p) · GW(p)
I suggest there are 4 stages in the life-cycle of a didact:
(1) The belief that one's intellectual opponents can be won over by rationality.
(2) The belief that one's intellectual opponents can be won over by rationality and emotional reassurance.
(3) The belief that one's intellectual opponents can be won over without rationality.
(4) The belief that one's intellectual opponents do not need to be won over.
I am not suggesting that any stage is superior to any other.
Eliezer, I declare that you are currently at stage (2), commonly known as the "Dawkins phase". :)
comment by a._y._mous · 2008-06-06T11:48:00.000Z · LW(p) · GW(p)
Shane, true. "What supports the world" (and the consequent support mechanisms for the turtles) is the 'wrong question', so to speak. But the question we set out to answer was "What shape is the world?". Not even "Is the world flat?". Glossing over the myriad of Eliezer's posts (I swear, I read them diligently! Though I am neither a physicist, nor for that matter a reductionist), they add up to a pyramid of straw men.
Take this particular point of 'Me' vs. 'Physics'. The answer one takes away from this post is that I am a subset of Physics. Fair enough. Unfortunately, the question asked is not 'Me' vs. 'Physics'. The question is on free will. The answer to that, at least from what little I can glean from Eliezer's insightful writings, is supposed to be, "Free will? That is the wrong question. Since reality is physically and fundamentally probabilistic, there is a multitude of options in a given range whose endpoints are bound by the laws of Physics. And further, since you are a part of that very reality and since you are also governed by the same probabilistic amplitudes, there is a computable and assignable number for the probability that each of the above options can be realised," which paraphrased means "no answer," or paraphrased differently can mean "Yup. You got a choice to take a chance. Odds ain't good though." Absolutely true, no doubt. But like I said, straw men. More importantly, it does not decidedly negate or affirm the question "do I have free will".
comment by Hopefully_Anonymous · 2008-06-06T12:06:16.000Z · LW(p) · GW(p)
clever, spindizzy.
comment by RobinHanson · 2008-06-06T12:25:48.000Z · LW(p) · GW(p)
I'm always bothered by the phrase "physical reality", as if there were some other reality. Minds are complex and so are made of smaller things. Things interact which causes them to change. When you decide to cause something to happen that must be via your parts interacting differently with the parts of those things. All this should be obvious. The trouble starts when people see it as obvious that mental things are not made of physical things.
comment by Shane_Legg · 2008-06-06T12:34:19.000Z · LW(p) · GW(p)
@ a. y. mous.
I don't see the straw man. In the classical sense "free will" means that there is something outside of the system that is free to make decisions (at least this is my understanding of it). If you see yourself, your will, your decision making process and everything as all existing within the system and thus governed by physics, then that answers your question: in a classical sense the answer is no. There are many other ways to define "free will", however, and under some of these definitions the answer to the question will be "yes". Thus, rather than focusing on whether the answer is "yes" or "no", you should first worry about what the question really means. Once you have straightened that out, your answer could be "yes", "no", or that your question no longer makes any sense, i.e. it is a "wrong question".
comment by Ben_Jones · 2008-06-06T12:57:00.000Z · LW(p) · GW(p)
Shane - really good demonstration of how wrong a question really is.
The answer to that, at least from what little I can glean from Eliezer's insightful writings, is supposed to be, "Free will? That is the wrong question."
Well...yeah. It is a wrong question. I'd go so far as to call it the original wrong question. To borrow from Eliezer's language, if you know everything there is to know about physics and the brain, but you can still imagine asking 'but do I have free will?', you're onto a loser. 'Do I have free will' is not only wrong, it's wrong-headed and pretty much unanswerable for any useful purpose.
Neither is it a straw man - almost everyone I know would beat themselves up when confronted with the fallacious 'me vs. determinism' problem.
I'm sure Eliezer's continued writings on will and control in a timeless universe will open new boxes for me, and I look forward to that, but I seem to be one of the happy few that really doesn't see an issue when it comes to this. Brain gets input, brain does physics, brain gives output. Deterministic? Certainly. Is it 'me' making the decisions? Yeah, sure, why not?
comment by gordon_wrigley · 2008-06-06T13:15:53.000Z · LW(p) · GW(p)
I think the core problem when talking about free will is that at some level the notion of free will just by definition requires a system where the mind exists outside of physics and manipulates it. It seems like that's what people really mean when they say free will.
I'm not sure of a good way to explain my thoughts on this. Let's try it this way: imagine you had an AI computer program. And it really was genuine strong AI, and you were quite happy to assert that it was intelligent, self-aware and sentient. It thinks, it learns, it loves, it hates, it has doubts and fears; it is a full and complete artificial personality. Now, given that, would you even think to ask if it had free will? It seems to me that you wouldn't; instead you'd say "it's a machine, of course it doesn't have free will, we can dig up the code that makes all of that stuff happen".
Now what is the difference between the machine and the person? Well, really all you're left with is that the person has free will and the machine doesn't. So free will is that which makes us more than just really spectacularly complex organic machines. And people who think that need to take a long hard look at their predecessors who asserted that the earth is the center of the universe and man is not a type of animal.
comment by Tracy_reader · 2011-06-20T11:52:02.256Z · LW(p) · GW(p)
Um, why would you say so certainly that the computer doesn't have free will? After all, if we're talking about a computer that learns and is intelligent, then we can't dig up the code that makes all of that stuff happen directly - some of the code must trigger the learning; that learning then changes something (as a technical matter I can think of several ways to store the changes). I don't think I'd definitely say that the computer has free will, but I don't think I'd definitely say that the computer doesn't, either. Especially as we don't have a clear definition of free will.
comment by MixedNuts · 2011-06-20T12:03:23.865Z · LW(p) · GW(p)
Yes, naive libertarian free will is silly, but that doesn't explain why people go around saying "free will".
comment by Richard_Kennaway · 2011-06-20T12:41:48.054Z · LW(p) · GW(p)
Because that -- "the mind exists outside of physics and manipulates it" -- is what it feels like.
comment by cousin_it · 2011-06-20T13:10:07.119Z · LW(p) · GW(p)
From a certain point of view that may even be true :-)
If you're okay with thinking of your mind as an algorithm, then note that any algorithm exists "outside of physics", having instantiations in many different physical worlds and outputting bits into all of them. As Wei Dai once said, "there are copies of me all over math". This idea is controversial, but not obviously false.
Also, Nesov has suggested that physics might arise anthropically from the makeup of our minds ("laws of physics are as complex as minds, but complex details have too little measure to matter"). This idea is even more controversial, but also not obviously false.
None of that has any bearing on libertarian free will, though.
comment by Dr_Manhattan · 2011-07-07T12:22:59.766Z · LW(p) · GW(p)
Also, Nesov has suggested that physics might arise anthropically from the makeup of our minds
Interesting; do you have a reference link?
comment by Peterdjones · 2011-07-07T13:08:03.327Z · LW(p) · GW(p)
Nope. It doesn't feel like we can generate an antigravity field and fly around. I do feel as though I can make choices. What has that got to do with being "outside physics"? The issues of physical (in)determinism and their relation to FW are technical and complex, and not something that can be intuited. We can't have an intuition of being outside physics because we can't have an intuition of physics that is worth anything.
comment by Peterdjones · 2011-06-20T13:10:16.992Z · LW(p) · GW(p)
We wouldn't be able to predict a machine that tapped into some source of genuine indeterminism. Maybe we feel we have free will because we are complex indeterministic machines. Maybe we can have a non-naive naturalistic libertarianism. That's a possibility unaddressed by EY's solution.
comment by linkhyrule5 · 2013-09-20T01:48:29.899Z · LW(p) · GW(p)
... At most, you'd end up being unable to control yourself. That's what true randomness means, you know.
comment by a._y._mous · 2008-06-06T13:29:05.000Z · LW(p) · GW(p)
Gordon, no. That's not the problem. The problem is with reconciling determinism with probability distribution. The inherent uncertainty is what "free will" is all about.
That I can choose is at the crux of free will. Eliezer goes on about not having the choice not to choose, and therefore it is deterministic (or whatever QM-equivalent term he wants to use; you get the picture). And then you get into definitional issues.
There is still a segue missing between this thought and his earlier comments on macro-level decoherence and its "collapse into reality". I am looking forward to his building that bridge.
comment by Shane_Legg · 2008-06-06T13:46:04.000Z · LW(p) · GW(p)
@ a. y. mous
Randomness doesn't give you any free will. Imagine that every time you had to make a decision you flipped a coin and went with the coin's decision. Your behaviour would follow a probability distribution and wouldn't be deterministic, however you still wouldn't have any free will. You'd be a slave to the outcomes of the coin tosses.
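Shane's coin-flip point can be made concrete with a toy simulation (an illustrative sketch of mine, not from the thread): an agent that delegates every decision to coin flips produces a perfectly respectable probability distribution over behavior, yet nothing about the agent itself determines the outcome, whereas a deterministic agent's choice is fixed by its own preferences.

```python
import random

def coin_flip_agent(options, rng):
    """An 'agent' that decides by coin toss: the choice is random and
    owes nothing to the agent's own preferences."""
    return rng.choice(options)

def deterministic_agent(options, preferences):
    """An agent whose choice is fully determined by its own preferences."""
    return max(options, key=lambda o: preferences[o])

rng = random.Random(0)  # fixed seed so the run is reproducible
options = ["tea", "coffee"]
preferences = {"tea": 0.9, "coffee": 0.1}

# The coin-flip agent's behavior is indeterministic (roughly a 50/50
# distribution over many trials), but its preferences play no role.
flips = [coin_flip_agent(options, rng) for _ in range(1000)]

# The deterministic agent's choice is fixed in advance -- by *its own*
# preferences, which is the kind of determination choosing requires.
choice = deterministic_agent(options, preferences)
```

The point of the sketch is that swapping determinism for randomness does not add any "will": it merely replaces determination-by-you with determination-by-coin.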
comment by Caledonian2 · 2008-06-06T14:31:45.000Z · LW(p) · GW(p)
More importantly, it does not decidedly negate or affirm the question "do I have free will".
That isn't the question, really. The question is: Can we articulate an explicit definition for the phrase 'free will' as it is used?
Once we've done that, we can attempt to determine how the concept applies to various aspects of existence. Without answering that question, responding to any other relevant question is useless.
If I tell you that you possess the property of glixnatech, how does that change your functional understanding of yourself and the universe?
comment by Everett (Stephen_Weeks) · 2008-06-06T14:54:48.000Z · LW(p) · GW(p)
So, can someone please explain just exactly what "free will" is such that the question of whether I have it or not has meaning? Every time I see people asking this question, it's presented as some intuitive, inherently obvious property, but I actually can't see how the world would be different if I do have free will or if I don't. I really don't quite understand what the discussion is about.
comment by Ben_Jones · 2008-06-06T15:03:57.000Z · LW(p) · GW(p)
Stephen, hence the term 'wrong question'. a.y., please restate the question you want confirmed or denied. On your taboo card for this round:
Choice, Free Will, Deterministic, Decide, I
Caledonian, strikingly prescient and relevant comment. Did you have something different for breakfast?
comment by Crush_on_Lyle · 2008-06-06T15:21:18.000Z · LW(p) · GW(p)
Stephen: I think it's just an inability to see around an illusion. People think, "It seems like I have free will; I decide to do something and then I do it. How can that not be the case?" But this is just one of many illusions built into our brains, and it's hard to let go of like so many other illusions about time and space and so on that have come up in this series. As far as I can tell, and I've gotten into this discussion with many people, the sticking point is always that illusion.
The other thing that happens is people start to worry about how we can enforce laws/punish criminals and so forth if there's no free will, which just shows how entrenched the illusion is--these people are still imagining that we can somehow step outside the system to make that decision--like criminals don't have free will, but the government still does? They're somehow missing that no-free-will goes all the way up.
comment by Sam_B · 2008-06-06T15:37:55.000Z · LW(p) · GW(p)
The question "do we have free will", which as I understand it is more precisely described as "does the fact that you only ever get to make one choice and experience one outcome make choice an illusion", has two important properties. One, it's completely unanswerable, there being no imaginable evidence that would shift your belief one way or the other. And two, whether your belief is right or wrong has no direct consequences, positive or negative.
A rationalist might see this as a bad thing - a "wrong question" - and so ignore it. But a philosopher might look on this as a biscuit tin that never runs out of biscuits.
comment by spindizzy2 · 2008-06-06T15:58:18.000Z · LW(p) · GW(p)
"people start to worry about how we can enforce laws/punish criminals and so forth if there's no free will"
Interesting observation. Also note how society differentiates between violent criminals and the violent mentally ill.
comment by poke · 2008-06-06T16:00:23.000Z · LW(p) · GW(p)
"Free will" is one of those concepts in philosophy where I have absolutely no idea what it's supposed to be about. I've read a few works on the subject and they all assure me that everyone is convinced they have it. I think the lesson to be learned is that words and concepts have histories of their own and frequently fall out of touch with reality completely. I think "free will" is like that.
comment by Q_the_Enchanter · 2008-06-06T16:16:26.000Z · LW(p) · GW(p)
"Because there are many levels of organization that separate our models of our thoughts - our emotions, our beliefs, our agonizing indecisions, and our final choices - from our models of electrons and quarks."
That's really elegant. Very nice.
What you describe as "requiredism" is pretty much the sort of "compatibilism" espoused by Dennett (among many others -- I'd say the idea traces back to Locke). In any case, I'd agree that a different word for this idea would be useful, one that connotes the rejection of the useless, loaded concept of free will. 'Requiredism' is kind of ugly, though. How about 'conationism'? 'Conative realism'?
comment by Unknown · 2008-06-06T18:07:14.000Z · LW(p) · GW(p)
If free will is defined (I don't see that anyone did it yet here), it is easy to see that it is consistent with many-worlds. Ordinarily free will has a simple definition: if a person is thinking about what to do, there is more than one thing that he can conclude and do.
According to many-worlds, there are many things that he does conclude, and does do. If there are many that he does do, then there are many that he can do. So by this definition of free will, he has free will.
comment by Caledonian2 · 2008-06-06T18:12:13.000Z · LW(p) · GW(p)
Caledonian, strikingly prescient and relevant comment. Did you have something different for breakfast?
Have you considered that perhaps the difference is in your perception and not in my content?
Perhaps you are exceptionally glixnatechos at the present time.
comment by LazyDave · 2008-06-06T18:27:54.000Z · LW(p) · GW(p)
I know this is just re-iterating what Caledonian and Ben Jones said, but to have a meaningful discussion on this subject you have to taboo "free will" and come up with a specific description of what you are trying to figure out. The most basic concept of free will is "being able to do what you desire to do," and that is not affected one whit by determinism, or MWI, or God knowing what you are going to do in advance, etc. I know there are a lot of other more sophisticated-sounding discussions regarding this ("ah, but can you choose to desire something else", etc.) but I have yet to hear of a meaningful definition of "free will" that is affected at all by such things as MWI.
BTW, it drives me nuts when people say "well if we do not have free will, why punish criminals?" (or "we pretend free will exists so that we can punish criminals", etc). We punish criminals so that fewer crimes happen. Whether you think those criminals have "free will" has nothing to do with the results we get by punishing them.
comment by Will_Pearson · 2008-06-06T18:30:55.000Z · LW(p) · GW(p)
Personally I think people are barking up the wrong tree. "Persons" are causally epiphenomenal at the level of physics. So mixing up people and physics is going to get wrong results.
That is, my model is just the universe (when discussing physics) with nothing inside. I like parsimony for physics; for other things it is not so good, for example normality™.
comment by Matthew_C.2 · 2008-06-06T18:34:39.000Z · LW(p) · GW(p)
In order for you to have free will, there has to be a "you" entity in the first place. . .
comment by Aaron_Boyden · 2008-06-06T18:47:05.000Z · LW(p) · GW(p)
I don't see the need for this new category of "requiredism;" most philosophical compatibilists have thought that free will required determinism. Van Inwagen calls the argument that free will requires determinism the "mind argument" (since there are apparently several papers in Mind from the mid 20th century all making versions of the argument), but it is quite clearly stated as early as Hume.
comment by poke · 2008-06-06T19:20:59.000Z · LW(p) · GW(p)
The "Why punish criminals?" question has a long history. The idea is that if your actions are determined by prior causes then you're no longer blameworthy. I think for most people deterrence would be morally unacceptable if they did not also consider criminals blameworthy. Why not punish their friends and families if that would also act as an effective deterrent? Actually this question - how can we delimit external and internal causes - is more interesting to me than general concepts of free will (short answer: we can't). If you want a nice example of bullet-biting in this area check out Pereboom's Living without Free Will. He argues that we should reject blameworthiness and praiseworthiness and considers it a good thing.
comment by Crush_on_Lyle · 2008-06-06T19:39:10.000Z · LW(p) · GW(p)
But "if your actions are determined by prior causes" then whether or not you think those actions are blameworthy is determined by prior causes too. The act of punishing criminals is subject to the same physics that crime is. So is talking about the act of punishing criminals. And so on.
comment by Caledonian2 · 2008-06-06T19:42:32.000Z · LW(p) · GW(p)
We still haven't been given a clear definition of what the concept consists of, and yet people are already breaking out things philosophers have said about the name.
Where's that definition, folks?
comment by Pablo_Stafforini · 2008-06-06T19:42:35.000Z · LW(p) · GW(p)
People hear: "The universe runs like clockwork; physics is deterministic; the future is fixed."
The question of whether the future is "fixed" is unimportant, and irrelevant to the debate over free will and determinism. The future--what will happen--is necessarily "fixed". To say that it isn't implies that what will happen may not happen, which is logically impossible. The interesting question is not about whether the future is fixed, but rather about what fixes the future.
comment by Caledonian2 · 2008-06-06T20:58:00.000Z · LW(p) · GW(p)
If not glixnatech, why sleebn?
comment by poke · 2008-06-06T21:03:17.000Z · LW(p) · GW(p)
Crush on Lyle,
But "if your actions are determined by prior causes" then whether or not you think those actions are blameworthy is determined by prior causes too. The act of punishing criminals is subject to the same physics that crime is. So is talking about the act of punishing criminals. And so on.
I agree. But no philosopher is going to bite that bullet. They'd be out of a job.
comment by Marshall · 2008-06-06T21:03:40.000Z · LW(p) · GW(p)
How can I think a thought? The river that flows without a drop.
Am I thinking the next thought? Chemicals, doing what they ought.
With time an illusion. The I that says it's me, is a figment too.
I struggle to choose to do what must be done.
Don't ask who I am. But observe what was done.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-06-06T21:19:26.000Z · LW(p) · GW(p)
Mous:
Somehow, I get this nagging feeling that you use 'wrong question' as a, in your own words, 'stop word'. This series is leading upto 'can't say' and 'no answer' for any and all questions that you have explicitly started out to answer.
Sam B:
One, it's completely unanswerable, there being no imaginable evidence that would shift your belief one way or the other. And two, whether your belief is right or wrong has no direct consequences, positive or negative. A rationalist might see this as a bad thing - a "wrong question" - and so ignore it.
Consider rereading Dissolving the Question. You do not ignore "wrong questions". You take a step back and ask the question of cognitive science, "Why is my brain generating the appearance of a question here?" And then you don't come up with some evolutionary argument for why it would be advantageous, because that's not the 'why' you want; you want a detailed walkthrough of the malfunctioning cognitive algorithm.
Here, for example, I have endeavored to show an intuitive, non-explicit internal causal network that will generate wrong questions about conflicts between self and physics.
I liked Sam's biscuit tin analogy, though.
Shane:
In the classical sense "freewill" means that there is something outside of the system that is free to make decisions (at least this is my understanding of it).
But then why not just create a Grand System that includes the free thingy plus the system? Oh noes! Now the Grand System is determined!
comment by TGGP4 · 2008-06-06T21:54:18.000Z · LW(p) · GW(p)
Yet another opportunity for me to plug "For the law, neuroscience changes nothing and everything".
comment by Shane_Legg · 2008-06-06T21:58:33.000Z · LW(p) · GW(p)
@ Eliezer:
I don't understand your comment. In case it wasn't clear: I don't believe in the existence of free will in the classical sense.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-06-06T22:03:13.000Z · LW(p) · GW(p)
Shane, the problem is the concept "outside the system". Outside what system? The system that includes your free will? That's going to be kinda difficult...
comment by Shane_Legg · 2008-06-06T22:21:38.000Z · LW(p) · GW(p)
@ Eliezer:
... which is why I don't believe that I have classical free will.
comment by Pedro · 2008-06-06T22:43:31.000Z · LW(p) · GW(p)
"If not glixnatech, why sleebn?"
If a body's physical/biological state at any given moment is sufficient to determine its state, or behavior, at a future moment, where this body is a closed system until that future moment, then why would the body have a first-person ontology at all?
comment by Patrick_(orthonormal) · 2008-06-06T22:45:22.000Z · LW(p) · GW(p)
...I actually can't see how the world would be different if I do have free will or if I don't. (Stephen Weeks)
In order for you to have free will, there has to be a "you" entity in the first place. . . (Matthew C.)
I have an idea where Eliezer is going with this, and I think the above comments are helpful in it.
Seems to me that the reason people intuitively feel there must be some such thing as free will is that there's a basic notion of free vs. constrained in social life, and that we project physical causality of our thoughts to be of the same form.
That is, we tend to think of physical determinism (or probabilistic determinism if we understand it) as if it were the same sort of thing as the way American law constrains our actions, or the way a psychopath holding a gun to our head would do the same. In either case, we can separate the self from the external constraint, and we directly feel that constraint. The fact that our thought processes don't feel constrained by an external agent, then, seems to indicate that they are free from any (deterministic or even probabilistic) necessity.
The falsehood here, as I see it, is that there is no "I" separate from the thoughts, emotions, actions, etc. that are all subject to the physical evolution of my brain; there's no separate thing which is "forced" to go along for the ride. But until we begin to really grasp that (and realize that Descartes was simply wrong in what he thought "Cogito, ergo sum" meant for the self), we have the false dilemma of "free will" versus "physics made me do it".
comment by Patrick_(orthonormal) · 2008-06-06T22:47:12.000Z · LW(p) · GW(p)
If that was ambiguous, I meant that the falsehood was the positing of an "I" separate from the patterns of physical evolution of the brain.
comment by mtraven · 2008-06-07T00:21:02.000Z · LW(p) · GW(p)
Thou art not physics, although thou art made from physics. There's a difference.
The real diagram is something like:
me_now ------> me_future
   ^               ^
   |               |
physics_now --> physics_future
The above will look like crap due to variable-width fonts, but you get the idea.
The evolution of me has its own rules, which do not violate physics but may be said to transcend them, sort of.
Instead of a person, imagine it's a computer we are talking about. All that circuitry is obeying the laws of physics, no doubt, but the evolution of the state of the processor from one cycle to the next is not well-described by physics, but by the abstract formalism the computer was designed to implement. You can talk about a computer in terms of physics, theoretically, but it doesn't get you very far.
What is even more confusing is that the computation is the same whether the computer is made out of silicon or tinkertoys. So it doesn't appear to have much to do with physics, does it? Considering that transhumanists seem to think they can upload their selves onto a different physical substrate, they must not consider themselves to be made up of physics, but Something Else.
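The substrate-independence point above can be made concrete with a minimal sketch (an editorial illustration, not part of the original comment; the 3-bit counter and the "tinkertoy rod" encoding are arbitrary choices for the example). The same abstract computation runs on two physically unlike representations, and the abstract state evolution is identical:

```python
# Sketch: the same abstract computation -- a 3-bit binary counter --
# run on two different "substrates". The physics of the two differ
# entirely; the computation does not.

def step(bits):
    """One tick of a 3-bit counter, defined purely abstractly."""
    value = (bits[0] * 4 + bits[1] * 2 + bits[2] + 1) % 8
    return [value // 4, (value // 2) % 2, value % 2]

# Substrate A: a list of ints standing in for silicon voltage levels.
state_a = [0, 0, 0]

# Substrate B: a dict of named "tinkertoy rods", each up or down.
state_b = {"rod1": "down", "rod2": "down", "rod3": "down"}

def b_to_bits(s):
    return [1 if s[k] == "up" else 0 for k in ("rod1", "rod2", "rod3")]

def bits_to_b(bits):
    return {k: ("up" if b else "down")
            for k, b in zip(("rod1", "rod2", "rod3"), bits)}

for _ in range(5):
    state_a = step(state_a)
    state_b = bits_to_b(step(b_to_bits(state_b)))

# Both substrates now encode the same abstract state: the count 5.
print(state_a)             # [1, 0, 1]
print(b_to_bits(state_b))  # [1, 0, 1]
```

Described at the level of voltages or rods, the two systems have nothing in common; described at the level of the counter, they are the same machine.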
comment by Caledonian2 · 2008-06-07T02:12:22.000Z · LW(p) · GW(p)
The evolution of me has its own rules, which do not violate physics but may be said to transcend them, sort of.
Considering that transhumanists seem to think they can upload their selves onto a different physical substrate, they must not consider themselves to be made up of physics, but Something Else.
[NEEDLESS JAB DELETED BY EDITOR.]
The high-level principles that appear to govern the "evolution of you" do not 'transcend' physics, they are rough approximations of the full detail of physics.
As for advocates of uploading, they believe the definitions of their selves are certain properties of the relationships between things, and said properties can be duplicated and transferred between different sets of things. At no point are they 'Something Else'.
You are confusing two distinct meanings; you are associating 'made of physics' with 'made of substance', instead of seeing physics as the ruleset by which we are made.
comment by Utilitarian · 2008-06-07T02:45:25.000Z · LW(p) · GW(p)
The future--what will happen--is necessarily "fixed". To say that it isn't implies that what will happen may not happen, which is logically impossible.
Pablo, I think the debate is over whether there is such a thing as "what will happen"; maybe that question doesn't yet have an answer. In fact, I think any good definition of libertarian free will would require that it not have an answer yet.
So, can someone please explain just exactly what "free will" is such that the question of whether I have it or not has meaning?
As I see it, the real issue is whether it's possible to "have an impact on the way the world turns out." For example, imagine that God is deciding whether or not to punish you in hell. "Free will" is the hope that "there's still a chance for me to affect God's decision" before it happens. If, say, he's already written down the answer on a piece of paper, there's nothing to be done to change your fate.
What I said above shouldn't be taken too literally--I was trying to convey an intuition for a concept that can't really be described well in words. 'Having your fate written down on a piece of paper' is somewhat misleading if interpreted to imply that 'since the answer has been decided, I can now do anything and my fate won't change.' In the scenario where we lack free will, the physical actions taking place right now in our heads and the world around us are the writing down of the answer on the paper, because those are precisely what produce the results that happen.
"Free will" is the idea that there's some sort of "us" whose choices could make it the case that the question of "What will happen?" doesn't yet have an answer (even in a Platonic realm of 'truth') and that this choice is somehow nonarbitrary. I actually have no idea how this could work, or what this even really means, but I maintain some probability that I'm simply not smart enough to understand it.
I do know that if the future is determined, then whether I believe the right answer about free will (or, perhaps, whether I accede to an incoherent concept people call "free will") is fixed, in the sense of being 'already written down' in some realm of Platonic knowledge. But if not, might there be something I can do (where the 'I' refers to something whose actions aren't yet decided even in a Platonic realm) to improve the truth / coherence of my beliefs?
comment by Unknown · 2008-06-07T04:57:57.000Z · LW(p) · GW(p)
Again, if free will requires that the future not be fixed, then many-worlds implies that free will can exist. According to many-worlds it is impossible to predict the result of a quantum mechanical experiment, precisely because both results must happen to different versions of you. So before you do the experiment, it is completely indeterminate what "you" are going to see.
comment by Ian_C. · 2008-06-07T04:58:21.000Z · LW(p) · GW(p)
Don't forget what the laws of physics are: they are not something out there controlling things; rather, we observe patterns and then make up laws to match.
So to those who insist men are governed by such a law, which individual men did you observe and successfully predict the actions of? Do you have their names, or if they wanted to be anonymous, do you at least have a citation for the experiment? Thanks.
comment by a._y._mous · 2008-06-07T05:09:55.000Z · LW(p) · GW(p)
I can't define Free Will. As I said in my earlier disclaimer, I am not an academician in any of the relevant streams. Heck! I am not an academician at all!
"Free will", at least insofar as I am concerned, is exemplified by experiencing a choice and enjoying the resultant consequences of making that choice. True, my experiences, my enjoyment, me, etc. are all governed by the quasi-deterministic-probabilistic-functional-formalisations of physics and/or reality. That does not deny that
a) The agency denoted by "I" has a choice among a multitude of options.
b) Those choices are manifestly external to the said agency, though the full set is subsumed within physics with a capital P.
c) The set of changes implemented over (traditional, granted. No time. No me. No free will. Agreed. But indulge me.) time by the motion of the different agents involved results in differently probable but provably different configurations of physical reality.
d) The fact that I know the above does not change the fact that I still have a choice, and making a different choice leads to different results.
Meaning to say, "youse doose the crime, youse doose the time", and me not doing the crime would leave the whole world with a different configuration of reality in which I am not locked up in a room behind bars. And that choice of either doing or not doing the crime is "Free Will".
comment by Nick_Tarleton · 2008-06-07T05:22:00.000Z · LW(p) · GW(p)
Unknown, randomness gives no more 'free will' than determinism - which just shows further how incoherent the idea is.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-06-07T05:27:00.000Z · LW(p) · GW(p)
Utilitarian:
I do know that if the future is determined, then then whether I believe the right answer about free will (or, perhaps, whether I accede to an incoherent concept people call "free will") is fixed, in the sense of being 'already written down' in some realm of Platonic knowledge. But if not, might there be something I can do (where the 'I' refers to something whose actions aren't yet decided even in a Platonic realm) to improve the truth / coherence of my beliefs?
You can improve your beliefs over time. Your choice determines how truly you will believe in a single moment of time. See today's "Timeless Control".
Your future belief is fixed, but it is fixed by your current choice whether to think rationally, not by quarks zipping in from Pluto.
comment by Hopefully_Anonymous3 · 2008-06-07T06:20:00.000Z · LW(p) · GW(p)
a. y. mous, in my estimation you're on surest ground describing free will as an "experience". Given all the ways we've already discovered that the experience seems to be illusory, it seems to me quite likely that free will is in every way illusory. You also use the word "enjoying", which I like. I consider the enjoyment of a free will experience to be a luxury to indulge in to the degree that it maximizes my persistence odds (given how unfriendly reality seems to be to my long-term persistence). Beyond that, scientific inquiry into the free-will experience does seem important to me, because it is such a fundamental element of the general human subjective conscious experience. It would be wisely conservative, in my opinion, to place priority on preserving that part of the bundle of human subjective conscious experience as we seek various solutions to the mortality challenges we face.
comment by Hopefully_Anonymous3 · 2008-06-07T06:23:00.000Z · LW(p) · GW(p)
"Your future belief is fixed, but it is fixed by your current choice whether to think rationally, not by quarks zipping in from Pluto."
You sound sure about that (the belief that people have a choice whether to think rationally). I'm curious what you base your sureness on? I'm not sure that any person or entity has a "choice" in that matter, but I'm interested in the best evidence/arguments to the contrary.
comment by a._y._mous · 2008-06-07T08:35:00.000Z · LW(p) · GW(p)
@ Hopefully Anonymous
>> a. y. mous, in my estimation you're on surest ground describing free will as an "experience".
And that is where I plan to stand at all times! Perhaps a maxim that I have followed over the years would help you in understanding where I come from:
Science without morality is knowledge
Knowledge without application is technology
Technology without context is data
Data without perception does not exist.
comment by Caledonian2 · 2008-06-07T13:30:00.000Z · LW(p) · GW(p)
[NEEDLESS JAB DELETED BY EDITOR.]Um... that was a thesis.
As in, I state the position that I intend to offer arguments for, and then I offer arguments and explain how they lead to that position.
So - let me get this straight - you have no problem with my showing how someone else's claims are wrong, I just can't say that they're wrong? Because that's a 'needless jab'?
comment by Pablo_Stafforini_duplicate0.27024432527832687 · 2008-06-07T16:25:00.000Z · LW(p) · GW(p)
I think the debate is over whether there is such a thing as "what will happen"; maybe that question doesn't yet have an answer. In fact, I think any good definition of libertarian free will would require that it not have an answer yet.
Utilitarian, if it is now raining in Oxford, how could the sentence 'It will rain in Oxford tomorrow' have failed to have been true yesterday?
comment by Unknown3 · 2008-06-07T18:07:00.000Z · LW(p) · GW(p)
Pablo, according to many worlds, even if it is now raining in Oxford, yesterday "it will rain in Oxford tomorrow" and "it will not rain in Oxford tomorrow" were both equally true, or both equally false, or whatever. In any case, according to many worlds, there is no such thing as "what will happen", if this is meant to pick some particular possibility like rain in Oxford.
comment by mtraven · 2008-06-07T18:13:00.000Z · LW(p) · GW(p)
Caledonian, it would improve discussion if you would make an effort to try to understand what I'm saying rather than flatly declaring "you're wrong". That being said, I'm not sure why you were redacted, that didn't make a lot of sense.
As for advocates of uploading, they believe the definitions of their selves are certain properties of the relationships between things, and said properties can be duplicated and transferred between different sets of things. At no point are they 'Something Else'.
"Properties of the relationship between things" is not a physical concept, so it indeed appears to be "something else".
Take the idea of the letter "A". It is composed of parts in certain relations -- three lines in a configuration. It's the same letter whether the lines are made up of pixels on a screen or ink on a page. Interestingly, it's the same letter even if some of the lines are curved slightly, or thickened, or enhanced with serifs -- Doug Hofstadter has written about this particular example. All of these cases are composed of physics, and no violation of physical law is going on, but the physics in the various cases have nothing in common. So whatever makes A-ness would appear to be "something else".
comment by Caledonian2 · 2008-06-07T18:50:00.000Z · LW(p) · GW(p)
Caledonian, it would improve discussion if you would make an effort to try to understand what I'm saying rather than flatly declaring "you're wrong".

I've already understood what you're saying - far better than you do - and you've just ignored my explanation of why you're wrong.
"Properties of the relationship between things" is not a physical concept, so it indeed appears to be "something else".Everything we consider 'physical objects' are complex sets of relationships between things. Is your body physical? What about the computer keyboard you're typing on?
They're collections of components, arranged in particular ways. It is entirely possible to change the components without changing the arrangement in any way relevant to our categorical perceptions. Whether anything has changed depends entirely on what level you look at. Is it the same river when the flowing water has been completely replaced? That depends on what is meant by 'same', and that depends on what level you're talking about.
There is no "something else".
comment by poke · 2008-06-07T19:03:00.000Z · LW(p) · GW(p)
mtraven, I think your example demonstrates well why computationalism rests on a basic error. The type-token relationship between A-ness and instances of the letter "A" is easily explained: what constitutes A-ness is a social convention and the various diverse instances of "A" are produced as human artifacts with reference to that convention. They all exhibit A-ness because we made them that way. Computers are like this too. Computers can be made from different substrates because they only have to conform to our conventions of how a computer should operate.
The brain is not a computer. Nothing that is not an artifact can possibly be a computer in any meaningful sense (just like a bunch of stones that fall into a pattern resembling the letter "A" aren't the letter "A" in any meaningful sense). It's completely meaningless to call something a "computer" in the way computationalists do. It would make as much sense for me to call the coffee cup resting on my desk an "equation" as it does to call a brain a computer. The coffee cup can be described by an equation. If I throw the coffee cup, for example, I can describe its motion using the standard equations of rigid body dynamics. But the equations I wrote out would not be a coffee cup. The equations are just marks that by convention stand for the motion of a coffee cup.
For some reason, which can probably only be explained through some mix of historical contingency and malicious intentions, people have come up with the idea that when I take that equation and use numerical methods to step through it in a computer program it suddenly becomes the thing it describes. This is rather like thinking a drawing becomes the object it depicts if I turn it into a flip book. Actually, this analogy is very accurate, because a computer program is essentially an equation in flip book form. Anything that can be said about a computer program can also be said of an equation scrawled on a napkin. So, no, you're not a computer or a computation or an equation, you're a physical object.
comment by mtraven · 2008-06-07T19:06:00.000Z · LW(p) · GW(p)
Caledonian, you are a bore. You don't understand what I'm saying, and you are so convinced of your rightness that you can't even be bothered to try -- it's amusing that you claim to know what I mean "far better" than I do myself.
The relationship between higher-level entities, properties, symbols, and relations and their underlying physical substrates is an interesting and problematic area, but I guess we're not going to get any new insights into it here. Pity.
comment by mtraven · 2008-06-07T19:34:00.000Z · LW(p) · GW(p)
poke, thanks for the serious reply. You've redeemed the conversation for me.
Here's my view of computationalism: the computer is a highly imperfect model of human thought. If you look at the historical development of the computer, it evolved as an attempt to mechanize thought. Despite its imperfections, it's the best model we have, and it helps us understand real brains. Various insoluble philosophical problems appear in the computer as engineering problems, which does not exactly solve the real problems but helps get a better handle on them.
For instance, the old problem of mind/body dualism was recreated in the computer and appears as the less mysterious hardware/software dualism. Suddenly we have a model for how physical systems and symbolic systems can depend on and interact with each other. That's very powerful. But I don't believe (as some of the more callow reductionists do) that we have thereby completely solved or gotten rid of the original question.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-06-07T20:14:00.000Z · LW(p) · GW(p)
Caledonian:
So - let me get this straight - you have no problem with my showing how someone else's claims are wrong, I just can't say that they're wrong? Because that's a 'needless jab'?
That is exactly right. The part where you explain why someone else's claims are wrong is the conversation. The part where you say "No. You're wrong." on a separate line, occupies a continuous spectrum with "You're an idiot.", which you also say every now and then; it does not occupy a continuous spectrum with those actual arguments that you make.
comment by Caledonian2 · 2008-06-07T20:36:00.000Z · LW(p) · GW(p)
Nothing that is not an artifact can possibly be a computer in any meaningful sense

No, a computer is a thing which computes - a general-purpose logical operations performer.
Which is why the individuals in WWII who performed extended mathematical calculations (mostly women) were called 'computers'. The term began being applied to devices in the vernacular only when the Electronic Age resulted in many electronic computers being built.
The part where you say "No. You're wrong." on a separate line,

So your whole complaint is about a formatting issue? If the line break were removed, you'd cease perceiving the comment as an insult? So why didn't you remove the line break instead of deleting the text?
comment by poke · 2008-06-07T23:48:00.000Z · LW(p) · GW(p)
mtraven, The computer started as an attempt to mechanize calculation. There's a tradition in mathematics, going back to the Greeks and popular with mathematicians, that mathematics is exemplary reasoning. It's likely that identifying computation and thought builds off that. If calculation/mathematics is exemplary thought and computers mechanize calculation then computers mechanize thought.
I would argue instead that mathematics is actually exemplary (albeit creative) tool-use. This is especially stark if you look at the original human computers Caledonian mentioned: they worked from rules and lacked knowledge of the overall calculation they were taking part in. I think computers mechanized precisely what they mechanized and nothing more: the calculation and not the person performing it.
I disagree that it's our best model; I find it too misleading. I think you identify why it's popular though: computationalism lets us sneak dualism through the back door. Supposedly one can now be a materialist and hold that the mind is software instantiated on the hardware of the brain. That's an extremely useful premise if you're a philosopher or a psychologist who doesn't want to crack open a biology textbook. Also, the evidence that the brain engages in symbol processing is very weak, so I don't think it's necessary to invoke computationalism there.
I don't mean to imply that computer science only applies to computers though. We can apply the tools of computer science to the real world. We can talk about the computational limits of physical systems and so forth.
comment by mtraven · 2008-06-08T03:09:00.000Z · LW(p) · GW(p)
I disagree that it's our best model; I find it too misleading.
Got a better one?
Also, the evidence that the brain engages in symbol processing is very weak...
Presumably your brain is processing symbols right now, as you read this.
You are probably questioning whether there is symbol processing going on underneath the obvious top level -- somewhere between quantum physics and the chemistry of neurons, and thinking. The answer appears to be that the brain uses a whole variety of representations, which vary in how much they look like symbols or like other things, such as image maps.
comment by Caledonian2 · 2008-06-08T03:20:00.000Z · LW(p) · GW(p)
Supposedly one can now be a materialist and hold that the mind is software instantiated on the hardware of the brain. That's an extremely useful premise if you're a philosopher or a psychologist who doesn't want to crack open a biology textbook.

But computer programmers don't need to understand the hardware, either. Do you think they crack open metallurgy, electronics, and applied physics textbooks to accomplish their goals?
If you don't need to understand every level of hardware to manipulate electronic computational devices, why do you think anyone would need to understand the physics all the way down to deal with the mind?
It's not dualism. It's just a distinction between levels of implementation.
comment by Michiel_Trimpe · 2008-06-08T17:07:00.000Z · LW(p) · GW(p)
Eliezer, have you ever read the paper 'Consciousness: A Hyperspace View' by Saul-Paul Sirag? I don't have enough fundamental knowledge to determine whether the paper is theoretical crackpottery or serious physics, but to me its description of consciousness as an extra dimension certainly sounded just right.
The theory basically proposed (supported by equations which were far beyond my current grasp) that if one keeps following the 'observers observing the observers' chain to its inevitable conclusion, an extra dimension of 'consciousness' could be added to string theory; which would mean that consciousness is an inherent property of the universe and that we as humans are merely slowly gaining more and more access to it.
comment by Psy-Kosh · 2008-06-08T17:26:00.000Z · LW(p) · GW(p)
Michiel: Does it explain (or even in a sensible way explain away) the key issue of consciousness? That is, the whole business about subjective experience? The whole "there's something that it's like to be me" thing? If not, it's not actually explaining the actual question.
For that matter... what is it about human braincells that allow them to interact with this extra dimension in a nontrivial way?
comment by poke · 2008-06-08T17:37:00.000Z · LW(p) · GW(p)
mtraven,
Got a better one?
Biology and physics. Google Tim Van Gelder for a philosophical perspective on the benefits of using dynamics to explain cognition. I think he has papers online.
Presumably your brain is processing symbols right now, as your read this.
I think there's an important distinction between being able to manipulate symbols and engaging in symbol processing. After all, I can use a hammer, but nobody thinks there's hammers in my brain.
Caledonian,
But computer programmers don't need to understand the hardware, either. Do you think they crack open metallurgy, electronics, and applied physics textbooks to accomplish their goals?
Computers are specifically designed so that we don't have to understand the hardware. That's why I said it's spurious to call anything but an artifact a computer. You don't need to understand the underlying physics because engineers have carefully designed the system that way. You don't have to understand how your washing machine or your VCR works either.
If you don't need to understand every level of hardware to manipulate electronic computational devices, why do you think anyone would need to understand the physics all the way down to deal with the mind?
I don't think we need to understand the physics all the way down in a practical sense. We've already built our way up from physics through chemistry to molecular biology and the behavior of the cell. We can talk about the behavior of networks of cells too. The difference is that it's the underlying physical properties that make this abstraction possible whereas, in a computer, the system has been specifically designed to have implementation layers with reference to a set of conventions. In a loose sense, it's accurate to say we understand the physics all the way down in a biological system, because the fact of abstraction is a part of the system (i.e., the molecules interact in a way that allows us to treat them statistically).
comment by Caledonian2 · 2008-06-08T19:06:00.000Z · LW(p) · GW(p)
Computers are specifically designed so that we don't have to understand the hardware.
No, poke. It's true of any information-processing device.
That's why I said it's spurious to call anything but an artifact a computer. You don't need to understand the underlying physics because engineers have carefully designed the system that way.
NO, poke. It doesn't matter how the computation is carried out, as long as it is. The specific design is irrelevant.
comment by mtraven · 2008-06-08T22:49:00.000Z · LW(p) · GW(p)
poke -- by a weird coincidence I was just looking at the software from van Gelder's company, Austhink (they make systems for argument mapping). I never read his dynamic cognition papers, but it seems to be rather similar to the critiques of GOFAI (good old-fashioned AI) that were made in the 90s by neural-net people and the situated action people. There is some validity to these critiques, a lot actually, but in a sense they are attacking a strawman. Nobody really believes the brain is a classic Turing machine; even if it is doing symbol processing it is doing it in a massively parallel, associative style. But it is doing some sort of computation (a variety of sorts, actually), and nobody has come up with a better way of theorizing about what it is doing than computationalism.
Computers are specifically designed so that we don't have to understand the hardware. That's why I said it's spurious to call anything but an artifact a computer.

Practically, programming computers usually requires an understanding one or two levels below the level you would like to work at. If I'm coding something, I would like to think in terms of pure algorithms but end up having to think about clock speeds, memory locality, and (if you are Google) heating and electrical supply issues. Computers do a better job of separating out levels than biology, because they are designed that way, but in both cases you have different levels of operation built out of underlying levels.
To return to the original issue: what is the ontological status of entities and processes that exist at the higher levels of this stack? They are certainly made of physics, but are they physics? This is a hard question that refuses to go away, except by declaring it so, as some reductionists would like to do.
comment by Gene · 2009-02-22T02:46:00.000Z · LW(p) · GW(p)
So then a Turing Test is passed when the physics governing the relationship between you and the AI decides that the AI is now just I. Sounds like you found a correlation between chaos theory and physics within human cognition. Or better yet, it sounds like physics just looked in the mirror.
comment by KIWIJARED · 2009-07-22T10:04:58.177Z · LW(p) · GW(p)
Well, it all seems to be cause and effect, and until effects overlap/intersect, we remain unaware and unaffected. Afterwards, in retrospect we can deem things this or that.
comment by Jerry_ · 2010-01-19T14:34:54.251Z · LW(p) · GW(p)
"In a long essay called 'What is Life', the great physicist Erwin Schrödinger comes up with the following argument:
Given that i) my body functions as a pure mechanism according to laws of nature,
and that ii) I know by direct experience that I am directing the motions of my body,
it follows that iii) I am the one who directs the atoms of the world in their motions.
Schrödinger remarks, '...it is daring to give to this conclusion the simple wording that it requires. In Christian terminology to say, "Hence I am God Almighty" sounds both blasphemous and lunatic.'"
I believe this summary is Rudy Rucker's from his book "Seek!". It does seem to wrap up consciousness/God/Thou Art Physics into a neat little bundle. A bundle of crazy? Perhaps, but a neat and little one.
comment by AGirlAlone · 2012-02-08T16:06:20.355Z · LW(p) · GW(p)
Similar ideas as Eliezer can occur to people without proper physics, experimental spirit or understanding of the brain (but I am not sure I can say "without rationality", as the Art may not be what I think it to be). I mean,some Indian spiritual traditions have explicitly stated that although you feel and believe that you have a real self, although you feel your existence as an entity strongly, this is not acceptable evidence for the existence of your "self". This is their key to selflessness. In other words, you may feel your existence outside of physics or whatever reality you believe in, and yet you should not trust this feeling. This sounds rational to me, but is further complicated by the fact that their tenets call for the abandonment of self, and thus the conclusion was not drawn on a fair ground. Also, the follow-up question of life-choices and meaning is dissolved by obligations that mainly consists of living an intellectual life as prescribed. I do not recommend reading this kind of material, it can hurt. I'm just making a point, that even without a scientific method, even while thinking your attitudes can control your afterlife, you can start having these meta thoughts and actually be somewhat right. Maybe this fact is relevent to, um, AI theory?