Righting a Wrong Question
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-03-09T13:00:00.000Z · LW · GW · Legacy · 111 comments
When you are faced with an unanswerable question—a question to which it seems impossible to even imagine an answer—there is a simple trick which can turn the question solvable.
Compare:
- "Why do I have free will?"
- "Why do I think I have free will?"
The nice thing about the second question is that it is guaranteed to have a real answer, whether or not there is any such thing as free will. Asking "Why do I have free will?" or "Do I have free will?" sends you off thinking about tiny details of the laws of physics, so distant from the macroscopic level that you couldn't begin to see them with the naked eye. And you're asking "Why is X the case?" where X may not be coherent, let alone the case.
"Why do I think I have free will?", in contrast, is guaranteed answerable. You do, in fact, believe you have free will. This belief seems far more solid and graspable than the ephemerality of free will. And there is, in fact, some nice solid chain of cognitive cause and effect leading up to this belief.
If you've already outgrown free will, choose one of these substitutes:
- "Why does time move forward instead of backward?" versus "Why do I think time moves forward instead of backward?"
- "Why was I born as myself rather than someone else?" versus "Why do I think I was born as myself rather than someone else?"
- "Why am I conscious?" versus "Why do I think I'm conscious?"
- "Why does reality exist?" versus "Why do I think reality exists?"
The beauty of this method is that it works whether or not the question is confused. As I type this, I am wearing socks. I could ask "Why am I wearing socks?" or "Why do I believe I'm wearing socks?" Let's say I ask the second question. Tracing back the chain of causality, I find:
- I believe I'm wearing socks, because I can see socks on my feet.
- I see socks on my feet, because my retina is sending sock signals to my visual cortex.
- My retina is sending sock signals, because sock-shaped light is impinging on my retina.
- Sock-shaped light impinges on my retina, because it reflects from the socks I'm wearing.
- It reflects from the socks I'm wearing, because I'm wearing socks.
- I'm wearing socks because I put them on.
- I put socks on because I believed that otherwise my feet would get cold.
- &c.
Tracing back the chain of causality, step by step, I discover that my belief that I'm wearing socks is fully explained by the fact that I'm wearing socks. This is right and proper, as you cannot gain information about something without interacting with it.
On the other hand, if I see a mirage of a lake in a desert, the correct causal explanation of my vision does not involve the fact of any actual lake in the desert. In this case, my belief in the lake is not just explained, but explained away.
But either way, the belief itself is a real phenomenon taking place in the real universe—psychological events are events—and its causal history can be traced back.
"Why is there a lake in the middle of the desert?" may fail if there is no lake to be explained. But "Why do I perceive a lake in the middle of the desert?" always has a causal explanation, one way or the other.
Perhaps someone will see an opportunity to be clever, and say: "Okay. I believe in free will because I have free will. There, I'm done." Of course it's not that easy.
My perception of socks on my feet is an event in the visual cortex. The workings of the visual cortex can be investigated by cognitive science, should they be confusing.
My retina receiving light is not a mystical sensing procedure, a magical sock detector that lights in the presence of socks for no explicable reason; there are mechanisms that can be understood in terms of biology. The photons entering the retina can be understood in terms of optics. The sock's surface reflectance can be understood in terms of electromagnetism and chemistry. My feet getting cold can be understood in terms of thermodynamics.
So it's not as easy as saying, "I believe I have free will because I have it—there, I'm done!" You have to be able to break the causal chain into smaller steps, and explain the steps in terms of elements not themselves confusing.
The mechanical interaction of my retina with my socks is quite clear, and can be described in terms of non-confusing components like photons and electrons. Where's the free-will-sensor in your brain, and how does it detect the presence or absence of free will? How does the sensor interact with the sensed event, and what are the mechanical details of the interaction?
If your belief does derive from valid observation of a real phenomenon, we will eventually reach that fact, if we start tracing the causal chain backward from your belief.
If what you are really seeing is your own confusion, tracing back the chain of causality will find an algorithm that runs skew to reality.
Either way, the question is guaranteed to have an answer. You even have a nice, concrete place to begin tracing—your belief, sitting there solidly in your mind.
Cognitive science may not seem so lofty and glorious as metaphysics. But at least questions of cognitive science are solvable. Finding an answer may not be easy, but at least an answer exists.
Oh, and also: the idea that cognitive science is not so lofty and glorious as metaphysics is simply wrong. Some readers are beginning to notice this, I hope.
111 comments
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by Henrik_Jonsson · 2008-03-09T13:51:14.000Z · LW(p) · GW(p)
This is one of my all-time favourite posts of yours, Eliezer. I can recognize elements of what you're describing here in my own thinking over the last year or so, but you've made the processes so much more clear.
As I'm writing this, just a few minutes after finishing the post, it's increasingly difficult not to think of this as "obvious all along", and it's getting harder to pin down exactly what in the post caused me to smile in recognition more than once.
Much of it may have been obvious to me before reading this post as well, but now the verbal imagery needed to clearly explain these things to myself (and hopefully to others) is available. Thank you for these new tools.
comment by Will_Pearson · 2008-03-09T15:10:52.000Z · LW(p) · GW(p)
I'm sure the meta-physicists will suggest something like the following. How do you know the causal chain you trace is meaningful? That is, you are resting our ability to see things on physics, and our ability to have a valid physics on being able to see things in the world. It is self-reinforcing, but requires axioms taken on faith or blind chance to start things off. So it is not really the same thing as metaphysics.
My reply would be to say, "Well, it works so far." And then get on with my life, and not worry about it.
comment by Ron_Hardin · 2008-03-09T15:44:50.000Z · LW(p) · GW(p)
"Why do I think I can avoid literary effects and reason directly instead?"
comment by Caledonian2 · 2008-03-09T16:01:39.000Z · LW(p) · GW(p)
"Why do I think it is guaranteed that I think things for a reason, instead of for no reason at all?"
↑ comment by Kingreaper · 2010-11-26T00:54:34.051Z · LW(p) · GW(p)
If I thought things for no reason at all my thoughts and feelings would be unconnected to any efforts or lack thereof on my part.
This scenario provides no preferred course of action, and can thus be safely discarded from this and all future considerations. Indeed, if it is correct, I cannot discard it, so I am safe in discarding it even if my sole aim is truth, rather than preferred courses of action.
↑ comment by DanielLC · 2011-11-27T21:42:26.630Z · LW(p) · GW(p)
Occam's razor. There are patterns in your thoughts that are very unlikely to exist by coincidence. It's more likely that the pattern is a result of an underlying process. At least, that's why I think that I think things for a reason.
↑ comment by jwoodward48 · 2017-03-02T23:40:57.324Z · LW(p) · GW(p)
What is a "reason"? Nothing but a cause (that is meaningfully, reasonably, and predictably tied to the effect, perhaps). The only cases in which a mind has a spontaneous thought (that is, one with no reason for them), are "brain static" and Boltzmann brains. So your question is essentially reducible to the question of "Why am I not a Boltzmann brain?"
Edit: I'm not really sure that "reason" is equivalent to "cause", on further reflection. There needs to be a deeper connection between A and B, if A is said to be the reason and not just the cause for B. So if the cause for "thinking that one has free will" is simply "that is how brain architecture works", and not some previously-unknown phenomenon, that might not be seen as a reason for the illusion of free will.
comment by PK · 2008-03-09T17:12:21.000Z · LW(p) · GW(p)
OK, time to play:
Q: Why am I confused by the question "Do you have free will?"?
A: Because I don't know what "free will" really means.
Q: Why don't I know what "free will" means?
A: Because there is no clear explanation of it using words. It's an intuitive concept. It's a feeling. When I try to think of the details of it, it is like I'm trying to grab slime which slides through my fingers.
Q: What is the feeling of "free will"?
A: When people talk of "free will" they usually put it thusly: if one has "free will", he is in control of his own actions. If one doesn't have "free will", then it means outside forces like the laws of physics control his actions. Having "free will" feels good because being in control feels better than being controlled. On the other hand, those who have an appreciation for the absolute power of the laws of physics feel the need to bow down to them and acknowledge their status as the ones truly in control. The whole thing is very tribal, really.
Q: Who is in control, me or the laws of physics?
A: Since currently saying [I] is equivalent to saying [a specific PK-shaped collection of atoms operating on the laws of physics], saying "I am in control" is equivalent to saying "a specific PK-shaped collection of atoms operating on the laws of physics is in control". The laws of physics are not an outside force apart from me; they are inside me too.
Q: Why do people have a tendency to believe their minds are somehow separate from the rest of the universe?
A: Ugghhh... I don't know the details well enough to answer that.
↑ comment by jwoodward48 · 2017-03-02T23:42:53.605Z · LW(p) · GW(p)
"Why do people have a tendency to believe that their minds are somehow separate from the rest of the universe?"
Because the concept of self as distinct from one's surroundings is part of subjective experience. Heck, I'd consider it to be one of the defining qualities of a person/mind.
comment by Psy-Kosh · 2008-03-09T17:57:12.000Z · LW(p) · GW(p)
Q: Why do I think there is something instead of nothing?
A: Because I think I'm experiencing, well, something.
Q: Why do I think I'm experiencing something?
A: Uh... dang, the urge is overwhelming for me to say "Because I actually am experiencing something. That's the plainest fact of all, even though evidence in favor of it seems to be at the moment the least communicable sort of evidence of them all."
argh!
So, I see at least two possibilities here:
Either I'm profoundly confused about something, causing me to seem to think that I can't possibly be experiencing the thought of thinking I'm conscious without, well... experiencing it. (I think I experience the thought that I'm conscious? But it sure seems like I'm experiencing that thought... argh...) So either way there's some profound confusion going on in my head.
Or I'm confused partly because I'm trying to think of what sort of state of affairs could result in me seeming to think I'm conscious without actually being so (I'm not talking about philosophical zombies here, I mean from the inside), and am confused because it may really be as incoherent an idea as it seems to me.
The question of free will at least "feels" solvable: that it can be broken down into more basic things. These two (why is there something instead of nothing, and what's the nature of consciousness (as in "feels like from the inside"/qualia/etc.)) are the Langford philosophical basilisk questions. It may not have anything to do with the nature of the question itself, but it seems to fry my brain any way I bang my head at it. :)
↑ comment by Bruno Mailly (bruno-mailly) · 2018-08-13T08:36:07.206Z · LW(p) · GW(p)
why is there something instead of nothing
Don't forget the third alternative [LW · GW]: why is there something instead of something else?
One idea is that there are unlimited potential universes, each running on different fundamental laws, most being poor and sterile. But because of survivor (existence?) bias, intelligent forms can only observe a universe rich enough to hold them.
Scientists went this way and imagined other laws in order to prove that ours are the only ones possible. Instead, they found that some alternative algebras, geometries, etc. do make sense.
This neither answers nor dissolves the question [LW · GW], but it does hint to look elsewhere.
what's the nature of consciousness
Children undergo a fundamental mind-building step when they realize they are not the universe. That there are things out there that don't follow their thoughts, and (the horror!) don't even know about them. That the self is separate from everything else. Thus, becoming aware of themselves, and their place in the world.
Feeling conscious seems the way we do that.
↑ comment by TAG · 2018-08-13T17:34:56.974Z · LW(p) · GW(p)
That's starting at the finishing line. The hard problem of consciousness is about why there should be feelings at all, not about why we feel particular things.
↑ comment by Bruno Mailly (bruno-mailly) · 2018-10-02T11:50:46.921Z · LW(p) · GW(p)
Okay. Q: Why do I think I am conscious?
A: Because I feel conscious.
Q: Why?
A: Like all feelings, it was selected by evolution to signal an important situation and trigger appropriate behavior.
Q: What situation? What behavior?
A: Modeling oneself. Paying extra attention.
Q: And how?
A: I expect a kluge befitting the blind idiot god, like detecting when proprioception matches and/or drives agent modeling, probably with feedback loops. This would lower environment perception, inhibit attention zapping, etc., leading to how consciousness feels.
It's a far cry from a proper explanation, yet it already makes so much sense.
Asking the right questions did dispel much of the mystery.
↑ comment by Said Achmiz (SaidAchmiz) · 2018-10-02T14:22:52.984Z · LW(p) · GW(p)
Q: Why do I think I am conscious?
A: Because I feel conscious.
Q: Why?
A: Like all feelings, it was selected by evolution to signal an important situation and trigger appropriate behavior.
This is a design-stance explanation, which, firstly, is inherently problematic when applied to evolution (as opposed to a human designer), and, more importantly, doesn’t actually explain anything.
The Hard Problem of Consciousness is the problem of giving a functional (physical-stance, more or less—modulo the possibility of lossless abstraction away from “implementation details” of functional units) explanation of why we “feel conscious” (and just what exactly that alleged “feeling” consists of).
What’s more, even if we accept the rest of your (evolutionary) explanation, notice that it doesn’t actually answer the question, since everything you said about selection for certain functional properties, etc., would remain true even in the absence of phenomenal, a.k.a. subjective, consciousness (i.e., “what it is like to be” you).
You have, in short, managed to solve everything but the Hard Problem!
↑ comment by Bruno Mailly (bruno-mailly) · 2018-10-08T11:33:21.751Z · LW(p) · GW(p)
This is a design-stance explanation...
I worded it poorly, but evolution does produce such apparent results.
The Hard Problem of Consciousness
Is way out of my league; I did not pretend to solve it: "It's a far cry from a proper explanation".
But pondering it led to another find: "Feeling conscious" looks like an incentive to better model oneself, by thinking oneself special, as having something to preserve... which looks a lot like the soul.
A simple, plausible explanation that dissolves a mystery works for me! (until better is offered)
That line of thinking goes places, but here is not the place to develop it.
↑ comment by TAG · 2018-10-03T10:27:54.782Z · LW(p) · GW(p)
A: Like all feelings, it was selected by evolution to signal an important situation and trigger appropriate behavior.
Again, you are assuming there is no big deal about
why do I feel (anything at all),
and therefore the only issue is
why do I feel conscious
↑ comment by TAG · 2018-10-03T10:29:09.828Z · LW(p) · GW(p)
Try taking a step back and wondering why consciousness is considered mysterious when it has such a simple explanation.
↑ comment by Vladimir_Nesov · 2018-10-03T12:07:03.226Z · LW(p) · GW(p)
(That may be a useful clue for identifying the meaning of the question, as understood by the people pursuing it, but not necessarily a good reason to agree that it currently should be considered mysterious or that it's a sensible question to pursue.)
comment by Silas · 2008-03-09T18:13:12.000Z · LW(p) · GW(p)
Eliezer Yudkowsky (can we drop the underscores now?): You did not break the "perception of wearing socks" into understandable steps, as you demanded for the perception of free will. You certainly explained non-confusingly some of the steps, but you left out a very critical step, which is the recognition of socks within the visual input that you receive. That is a very mysterious step indeed, since your cognitive architecture is capable of recognizing socks within an image, even against an arbitrary set of transformations: rotation, blurring, holes in the socks, coloration, etc.
And I know you didn't simply leave out an explanation that exists somewhere, because such understanding would probably mean a solution for the captcha problem. So I would have to say that you made the same unacceptable leap that you attacked in the free will example.
comment by Caledonian2 · 2008-03-09T18:31:14.000Z · LW(p) · GW(p)
What is the phrase 'free will' used to refer to? We cannot even start worrying about whether we need to answer or abolish the question until we understand what the question signifies.
We could ask ourselves "what happens when an immovable object meets an irresistible force?", and recognize that this question must be unasked. The reason why it's not a valid question is that the definitions of those two things turn out to be mutually contradictory once we analyze them down to their constituent parts.
comment by James_Blair · 2008-03-09T18:36:42.000Z · LW(p) · GW(p)
And I know you didn't simply leave out an explanation that exists somewhere, because such understanding would probably mean a solution for the captcha problem.
Dileep, George, and Hawkins, Jeff. 2005. "A Hierarchical Bayesian Model of Invariant Pattern Recognition in the Visual Cortex." Available from CiteSeer (direct download PDF) (accessed November 9, 2011).
↑ comment by Celer · 2011-11-08T17:09:28.394Z · LW(p) · GW(p)
Your link is broken, as is the one on the Wikipedia page. http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CCMQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.132.6744%26rep%3Drep1%26type%3Dpdf&ei=PGG5TovEKsj4rQfGlaHGBg&usg=AFQjCNFMrZJFKOBU6M_ItHfkT4YB6gL8aQ&sig2=jI0CAN1iSRisyrwH4hIdaQ works.
Replies from: James_Blair↑ comment by James_Blair · 2011-11-09T01:41:53.460Z · LW(p) · GW(p)
Linkrot corrected. Thanks for the catch.
Historical notes: Eliezer disapproves of this reference; the original comment was posted on Overcoming Bias, which didn't allow nested replies; Frank Hirsch had some comments as well [1] [2].
comment by Tiiba2 · 2008-03-09T19:23:10.000Z · LW(p) · GW(p)
I think there is a real something for which free will seems like a good word. No, it's not the one true free will, but it's a useful concept. It carves reality at its joints.
Basically, I started thinking about a criminal, say, a thief. He's on trial for stealing a diamond. The prosecutor thinks that he did it of his own free will, and thus should be punished. The defender thinks that he's a pathological kleptomaniac and can't help it. But as most know, people punish crimes mostly to keep them from happening again. So the real debate is whether imprisoning the thief will discourage him.
I realized that when people think of the free will of others, they don't ask whether this person could act differently if he wanted. That's a Wrong Question. The real question is, "Could he act differently if I wanted it? Can he be convinced to do something else, with reason, or threats, or incentives?"
From your own point of view, anything that stands between you and being able to rationally respond to new knowledge makes you less free. This includes shackles, threats, bias, or stupidity. Wealth, health, and knowledge make you more free. So for yourself, you can determine how much free will you have by looking at your will and seeing how free it is. Can you, as Eliezer put it, "win"?
I define free will by combining these two definitions. A kleptomaniac is a prisoner of his own body. A man who can be scared into not stealing is free to a degree. A man who can swiftly and perfectly adapt to any situation, whether it prohibits stealing, requires it, or allows it, is almost free. A man becomes truly free when he retains the former abilities, and is allowed to steal, AND has the power to change the situation any way he wants.
Quantum magic isn't free will, it's magic.
↑ comment by cousin_it · 2018-01-27T09:36:08.536Z · LW(p) · GW(p)
What a beautiful comment!
Every once in a while I wonder if something like Eliezer's Lawful Creativity is true - that creativity can be reduced to following rules. And then I come across something like your comment, where a non-obvious "jump" leads to a clearly true conclusion. For humans trying to create new stuff, practicing such "jumps" is at least as important as learning the rules.
comment by Unknown · 2008-03-09T19:33:15.000Z · LW(p) · GW(p)
"The nice thing about the second question is that it is guaranteed to have a real answer, whether or not there is any such thing as free will."
Who guaranteed this?
The claim that every fact, such as someone's belief, has a definite cause, is a very metaphysical claim that Eliezer has not yet established.
comment by randomwalker · 2008-03-09T20:52:13.000Z · LW(p) · GW(p)
The problem with this blog is that you occasionally say amazingly insightful things but the majority of your posts, like this one, say something blindingly obvious in a painfully verbose way. But then it could be that some of the things that are amazingly insightful to me are blindingly obvious to someone else, and vice versa. Oh well.
↑ comment by jwoodward48 · 2017-03-02T23:45:11.558Z · LW(p) · GW(p)
The real problem is that these things are not blindingly obvious to everyone. LW is a means of fixing this, at least for its target audience.
comment by DonGeddis · 2008-03-09T21:31:04.000Z · LW(p) · GW(p)
I'll give (a few of them) a shot.
"Why do I think I have free will?" There seem to be two categories of things out there in the world: things whose behavior is easily modeled and thus predictable; and things whose internal structure is opaque (to pre-scientific people) and are best predicted by taking an "intensional stance" (beliefs, desires, goals, etc.). So I build a bridge, and put a weight on it, and wonder whether the bridge will fall down. It's pretty clearly the case that there's some limit of weight, and if I'm below that weight -- whether I use feathers or rocks -- the bridge will stay up; otherwise it will collapse. Very simple model, reasonably accurate.
In contrast, if I ask my officemate to borrow his pen, he may or may not give it to me. Trying to predict whether he will is impossible to do precisely, but responds best (for laypeople) to a model with beliefs, goals, memories, etc. Maybe he's usually helpful, and so will give me the pen. Maybe I made fun of his shirt color yesterday, and he remembers, and is angry with me, and so won't.
This "intensional stance" model requires some homunculus in there to "make a decision". It can decide to take whatever action it wants. I can't make it do anything (in constrast to a bridge, which doesn't "want" anything, and responds to my desires).
This is the theory element that gets labeled as "free will". It's that intensional actors appears to be able to do any action that they "want" or "decide" to do. That's part of the theory of predicting their future actions.
So, why do humans have free will but computers don't? Because most computers have behavior that is far easier to understand than human behavior, and no predictive value is gained by adopting the intensional stance towards them.
comment by DonGeddis · 2008-03-09T21:46:54.000Z · LW(p) · GW(p)
"Why do I think time moves forward instead of backward?"
Basically, because of entropy.
There are actually two questions here: first, why does time (appear to) flow at all? And second, why does it flow only forwards?
If the whole universe were composed only of a single particle, say a photon, you couldn't even notice time passing. Every moment would be identical to every other moment. Time wouldn't even flow.
So first you need multiple entities, in order to have change. So now let's say you had the same single photon, bouncing forever between two parallel mirrors. Now time would flow (you could watch a movie of the photon, and notice changes from frame to frame). But it wouldn't particularly flow forwards or backwards. If someone gave you a movie of the bouncing photon, but it wasn't labeled which side was the start and which the end, you'd have no way to tell. There isn't really a "forward" or "backward" in time in that situation.
So what it takes is a complex universe, with order and chaos. And then it's just a matter of probabilities. Eggs are vastly more likely to scramble than to descramble; shattered cups rarely bounce off the floor and spontaneously reassemble; etc. The laws of physics don't prevent these things. They're just exceedingly unlikely. So if you had an unlabeled film, you could tell which side was the "past" and which the "future", since in one direction all the action was extremely probable, while in the other direction every action is exceedingly unlikely.
So, in our normal, macroscopic world, we imagine an arrow of time, a past we can never change, and a future that can be altered by our free will.
Even though relativity tells us that the REAL universe doesn't have absolute reference frames, that time passes differently in different frames, that it doesn't even make sense to ask whether two events separated in space are simultaneous or not, that time doesn't really mean anything "before" the big bang or inside a black hole, that really the whole evolution of the universe is a single fixed state vector of space-time, and time never flows at all.
But a (false) concept of linear time with a fixed past and a changeable future helps us quickly make useful decisions in our typical lives.
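As a toy way to see that probabilistic asymmetry concretely, here is a minimal Python sketch (standard library only; the particle count and step count are arbitrary illustrative choices) of particles hopping at random between the two halves of a box:

```python
import random

# Toy illustration: start with all 20 "gas particles" on the left side of a box.
# Each step, one randomly chosen particle hops to the other side. The arrangement
# spreads out almost immediately, and only very rarely wanders back to the
# ordered "all on the left" state.

random.seed(0)
n_particles = 20
n_steps = 100_000

left = [True] * n_particles   # True means the particle is on the left side
returns_to_start = 0

for step in range(n_steps):
    i = random.randrange(n_particles)
    left[i] = not left[i]     # one particle hops across the partition
    if all(left):
        returns_to_start += 1

print(f"Steps where all {n_particles} particles were back on the left: "
      f"{returns_to_start} out of {n_steps}")
# There are 2**20 (about a million) possible arrangements and only one of them
# is the fully ordered starting state, so "descrambling" is seen rarely if at
# all; with 10**23 particles, effectively never. That lopsidedness is the
# statistical asymmetry the film-labeling argument relies on.
```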
↑ comment by jwoodward48 · 2017-03-02T23:45:58.810Z · LW(p) · GW(p)
Not entropy, but rather causation; time does not flow backwards because what I do tomorrow will not affect what I did yesterday.
comment by DonGeddis · 2008-03-09T22:02:48.000Z · LW(p) · GW(p)
"Why do I think I was born as myself rather than someone else?"
So we adopt the intentional stance towards other humans. We imagine they have some "deciding" homunculus inside them, that makes choices. We don't know how it works or any of its internal structures, but it is influenced by beliefs, desires, memories, etc.
We know that much can change about the "mere body", while the homunculus seems the same. We age over decades. We lose a limb or eyesight in an accident. We get a heart transplant from a cadaver. We learn to drive a car, and "become one" with the vehicle. It appears that much can change with these external things, but we see both in ourselves (and our memories of our younger selves), and in the self-reported feelings of others, that despite drastic changes the core "us" is relatively unchanged. We see lifelong criminals attempt to "turn over a new leaf", and yet at some point their core surfaces again, apparently never having changed at all; only the surface facade was different.
So it's not at all unreasonable (or so it seems) to suppose that the mysterious and apparently-constant "I" (perhaps a soul of some kind) could have arrived much the same, but in drastically different circumstances. Perhaps 1000 years ago. Or black instead of white. Or a king instead of a peasant.
Hence it MUST have been just a random lottery that caused me to be "born as myself, rather than someone else."
(The truth, of course, is that we're "nothing more" than the product of our genetics plus our environmental history. If our circumstances had been different, we wouldn't have been the same person. For that matter, you aren't the same person today that you were a decade ago, despite your illusion that there is some core "I" that is conserved.)
↑ comment by jwoodward48 · 2017-03-02T23:46:58.026Z · LW(p) · GW(p)
So in a nutshell, you aren't someone else because then you wouldn't be you, correct? :P
comment by DonGeddis · 2008-03-09T22:08:23.000Z · LW(p) · GW(p)
"Why do I think reality exists?"
We could well be in a matrix world, with everything an illusion. Or perhaps we arrived just a moment ago, but intact with false implanted memories. (Sort of like the creationist explanation of evidence for evolution.)
The assumption that "reality exists" is mere convenience. It's helpful in order to predict my future observations (or so my current memory suggests to me). Even if this is a matrix world, there is still the EXACT SAME theory of "reality", which would then be used to predict the future illusions that I'll notice.
↑ comment by rasthedestroyer100 · 2016-06-30T11:49:28.367Z · LW(p) · GW(p)
The existence of 'reality' is just a logically circular argument of the form: 'what is real exists because it really exists.' There is no reason to prove the existence of reality; we prove or disprove the existence (or non-existence) of things in reality. We are able to falsify the existence of things by this method due to the satisfied precondition of a reality in which things exist.
Of course we could be experiencing some manufactured illusion, but this still necessarily implies some reality in which this illusion can be constructed. Our ability to experience this illusion would suffice to prove that we are real, since all experience is that of an experiencing being or object. This objective experiencing being must exist in relation to some other second existent object of experience. But then we must posit a third object in relation to which both of these two - the object of experience and the experiencing object - are experienced in turn, and so on. The notion of a purely subjective idea-being without an objective reality or existence is absurd - an idea or sensation not inhering in any object in reality has no body in which the subjective being of even an illusion or misapprehension arises. To be aware is to be aware of something.
In fact, the very notion that we could or are living in a false reality to which our minds are inextricably enslaved is more or less religious superstition: all existence as we experience it would require a designer or predetermined purpose who constructs this illusion, all of the sensory objects in this subterfuge, and all of the laws of nature to which the non-existent hallucinations are consistently obedient that this designer must have arbitrarily laid out in advance.
comment by Kaj_Sotala · 2008-03-09T22:29:40.000Z · LW(p) · GW(p)
The beauty of this method is that it works whether or not the question is confused.
I have to admit, to me the "Why do I think I was born as myself rather than someone else" example seems so confused that I'm having difficulty even parsing the question well enough to apply the method.
comment by Tiiba2 · 2008-03-09T23:05:11.000Z · LW(p) · GW(p)
"Why do I think I was born as myself rather than someone else?"
Because a=a?
↑ comment by rasthedestroyer100 · 2016-06-30T11:50:27.843Z · LW(p) · GW(p)
Why does a = a?
↑ comment by jwoodward48 · 2017-03-02T23:48:14.148Z · LW(p) · GW(p)
Is that a serious question? It's a basic axiom of mathematics, and part of the standard definition of "equals". (And if a = b and b = c, then a = c, for example.)
comment by TGGP4 · 2008-03-09T23:33:14.000Z · LW(p) · GW(p)
Tiiba, you might be interested in For the law, neuroscience changes nothing and everything.
comment by Ron_Hardin · 2008-03-10T01:14:45.000Z · LW(p) · GW(p)
A=A is not a tautology.
Usually the first A is taken broadly and the second A narrowly.
The second, as they say, carries a pregnancy.
↑ comment by RickJS · 2009-09-12T02:36:37.189Z · LW(p) · GW(p)
META: thread parser failed?
It sounds like these posts should have been a sub-thread instead of all being attached to the original article?:
09 March 2008 11:05:11PM
09 March 2008 11:33:14PM
10 March 2008 01:14:45AM
Also, see the mitchell porter2 - Z. M. Davis - Frank Hirsch - James Blair - Unknown discussion below.
comment by Tom_Breton · 2008-03-10T01:42:25.000Z · LW(p) · GW(p)
This seems to me a special case of asking "What actually is the phenomenon to be explained?" In the case of free will, or should I say in the case of the free will question, the phenomenon is the perception or the impression of having it. (Other phenomena may be relevant too, like observations of other people making choices between alternatives).
In the case of the socks, the phenomenon to be explained can be safely taken to be the sock-wearing state itself. Though as Eliezer correctly points out, you can start farther back, that is, you can start with the phenomenon that you think you're wearing socks and ask about it and work your way towards the other.
comment by mitchell_porter2 · 2008-03-10T02:16:11.000Z · LW(p) · GW(p)
It looks like the basic recipe for complacency being offered here is:
Something mysterious = Thoughts about something mysterious = Thoughts = Computation = Matter doing stuff = Something we know how to understand.
But if you really follow this procedure, you will eventually end up having to relate a subjective fact like "being a self" or "seeing blue" to a physical fact like "having a brain" or "signalling my visual cortex".
It seems that most materialists about the mind have a personal system of associations, between mental states and physical states, which they are happy to treat as identities (e.g. mental process X is physical process X', "from the inside"), and which are employed when they need to be able to interpret their own experience and their own thinking in material terms.
If you keep asking why, you will need to justify these alleged identities. In fact, if you really keep asking why, in my experience the identities appear untenable and based on a crude and radically incomplete description of the subjective facts, and you end up being interested in metaphysics, from both sides, material and mental.
comment by Z._M._Davis · 2008-03-10T04:16:50.000Z · LW(p) · GW(p)
Mitchell, what reason is there to think that materialism is false, other than our not-understanding exactly how mental events arise from physical ones? A lot of science has been done about the brain; we know that at least there is a very, very intimate connection between mental events and physical brain events. To me, it seems much more parsimonious to suppose that there really is a (not yet fully understood) identity between mental process X and physical process X, than to say that mental process X is actually occurring in some extraphysical realm even though it always syncs up in realtime with physical process X.
comment by mitchell_porter2 · 2008-03-10T06:27:29.000Z · LW(p) · GW(p)
Z. M., let me answer you indirectly. The working hypothesis I arrived at, after a long period of time, was a sort of monadology. Most monads have simple states, but there is (one hypothesizes) a physics of monadic interaction which can bring a monad into a highly complex state. From the perspective of our current physics, an individual monad is something like an irreducible tensor factor in an entangled quantum state. The conscious self is a single monad; conscious experience is showing us something of its actual nature; any purely mathematical description, such as physics presently provides, is just formal and falls short of the truth.
Now all that may or may not be true. As far as I am concerned, thinking in terms of monads has one enormous advantage, and that is that there is no need to falsify one's own phenomenology in order to fit it to a neurophysical apriori the way that, say, Dennett does. Dennett dismisses phenomenal color and the subjective unity of experience as "figment" and "the Cartesian theater", respectively, and I'm sure he does so because there is indeed no color in a billiard-ball materialism, and no Cartesian theater in a connectionist network. But for the neo-monadologist, because consciousness is being mapped onto the state of a single monad, the ontological mismatch does not arise. We will have a formal physics of monads, described mathematically, and then the fully enriched ontology of the individual monad, to be inferred from conscious phenomenology, and there is no need to convince yourself that you are actually a collection of atoms or a collection of neurons.
The downside is that there had better be a very high-dimensional coherent quantum subsystem of the brain which is physically and functionally situated so as to play the role of Cartesian theater, or else it's back to the theoretical drawing board.
But having dreamed up all of that, what do I see when I look at current attempts to understand the mind? The subjective facts are only crudely understood; and then they are further falsified and dumbed-down to fit the neurophysical apriori; but people believe this because they think the only alternative is superstition and dualism. It's certainly a lot easier to see it so starkly, when you have an alternative, but nonetheless it is possible to sense that something is going wrong even when you don't have the alternative. And that is why I object to this happy process of dissolving one's metaphysical questions in cognitive materialism. It is simply an invitation to deceive oneself in all those areas where physics-as-we-know-it is inherently incapable of giving an answer. Better to maintain the tension of not knowing, and maybe think of something new as a result.
comment by Frank_Hirsch · 2008-03-10T07:17:04.000Z · LW(p) · GW(p)
James Blair: I've read JH's "On Intelligence" and find him overrated. He happens to be well known, but I have yet to see his results beating other people's results. Pretty theories are fine with me, but ultimately results must count.
comment by Z._M._Davis · 2008-03-10T07:49:49.000Z · LW(p) · GW(p)
Mitchell, I think it's far too early to give up on the materialist program, which has so far been a smashing success. Consciousness is (as it is said) a hard problem, but even if no one ever finds a solution, one might at least first give solemn consideration to the possibility (I forget exactly where I read it proposed--Hofstadter?) that humans are just too stupid to figure out the answer, before vindicating Leibniz.
comment by James_Blair · 2008-03-10T09:00:04.000Z · LW(p) · GW(p)
Frank, what does that have to do with the quality of the paper I linked?
comment by mitchell_porter2 · 2008-03-10T10:37:40.000Z · LW(p) · GW(p)
Z. M., "my" monads aren't much like Leibniz's. For one thing, they interact. It could even be called a psychophysical identity theory, it's just that the mind is identified with a single elementary entity (one monad with many degrees of freedom) rather than with a spatial aggregate of elementary entities (a monadic self will still have "parts" in some sense, but they won't be spatial parts). I suppose my insistence that physical ontology should be derived from phenomenological ontology, rather than vice versa, might also seem anti-materialist. (What I mean by this: In fundamental physics, the states of things are known by sets of numerical labels whose meaning is totally relative. All we know from the equation is that cause X turns state A into state B. It tells us nothing about state A in itself. But phenomenology offers us a direct glimpse of something, as Psy-Kosh struggles to express, a few comments back. At some level, it is what we have to work with and it is all we have to work with.) But the main thing is to get away from the assumptions of the neurophysical apriori, because they are inhibiting and distorting what passes for phenomenology today. The description of consciousness is probably best pursued in the almost-solipsistic frame of mind described by Husserl, in which one suspends the question of whether things actually exist, and focuses on the states of consciousness which somehow constitute their appearance. Being able to entertain the possibility of idealism is very conducive to this.
If (let us say) the brain really does have a functionally consequential coherent quantum subsystem, a sharply defined physical entity which really-and-truly is the self, and whose states are literally our states of consciousness, I would expect materialistically pursued neuroscience to eventually figure it out, because neuroscience does include the search for correlations between subjective experience and the physical reality. (Though if it were true, it might save a few years to have the hypothesis already out there in the literature, rather than having to wait for it to become screamingly obvious.) The same may go for whatever other unorthodox possibilities I haven't thought of. It is true that I am ready to give up right now on all existing materialist theories of consciousness; they are manifestly unable to explain even what color is.
So scientifically, I make a noise in favor of metaphysics because I think that will get us to the truth faster. Unfortunately, I doubt I can do the argument justice in off-the-cuff blog comments. I will just have to make an effort and write something longer. The other thing that worries me is the conjunction of information technology with antimetaphysical theories of the mind. There's even less of a reality check there than in neuroscience, when it comes to the attribution of mental properties. But that's a whole other topic.
comment by Unknown · 2008-03-10T19:14:03.000Z · LW(p) · GW(p)
If Eliezer has his way, consciousness is not a "hard problem" at all, since asking why people are conscious is the same as asking "why do people think they are conscious," while "thinking one is conscious" is identified with a physical state of one's brain.
The reason Eliezer cannot have his way is that the identity or non-identity of physical and mental reality is irrelevant to explanation. For example, presumably light of different colors is identical to light of different wavelengths. But if I ask, "why does that light look red," it is NOT a sufficient explanation to say that the light has a certain wavelength, NOR to say that my brain reacts to this wavelength in such and such a physical way. It is easy to see that the explanation is insufficient because given the explanation (info about wavelengths and brain states), one would not be able to draw the conclusion that the light would look red, unless one is given the info that a certain brain state is equivalent to seeing red. But this is the point: why is this brain state equivalent to seeing red? The question, "Why do I think that this brain state is equivalent to seeing red," is now not helpful at all, because presumably the reason I think they are identical is because they are identical. But why are they identical? This is just what has not been explained, and cannot be explained. So there is an actually unanswerable question (at least as far as anyone knows, by any concepts anyone has yet conceived of), and it is not a meaningless question.
↑ comment by rasthedestroyer100 · 2016-06-30T11:54:16.912Z · LW(p) · GW(p)
"For example, presumably light of different colors is identical to light of different wavelengths."
More specifically, lights of identical wavelengths have identical colors, and vice versa. Clearly, "waves = colors" is not a valid statement of equality ('color' is an epiphenomenon of wavelengths arising as a percept in a sensory being, while the wavelengths the mind converts into colors exist independently of any observers). A wave is a wave, and a color is a color, and these two properties have a direct relationship upon which the equality or inequality of these properties in some group of objects can be ascertained.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-03-10T20:08:35.000Z · LW(p) · GW(p)
Mitchell, Unknown, I worry you may have misunderstood the point.
The question "Why am I conscious?" is not meant to be isomorphic to the question "Why do I think I'm conscious?" It's just that the latter question is guaranteed to be answerable, whether or not the first question contains an inherent confusion; and that the second question, if fully answered, is guaranteed to contain whatever information you were hoping to get out of the first question.
"Explain" is a recursive option - whenever you find an answer, you can hit "Explain" again, unless you hit "Worship" or "Ignore" instead. If the answer to "Why do I think I'm conscious?" is "Because I'm conscious"; and you can show that this is true evidence (that is, you would not think you were conscious if you were not conscious); and you carry out this demonstration without reference to any mysterious concepts (i.e., "Because I directly experience qualia!" contains four mysterious concepts, not counting "Because"); then you could hit the "Explain" button again regarding "Because I'm conscious."
The point is that by starting with a belief, you start with an unconfused thing - the belief may be about something confused, but the belief itself is just a cognitive object sitting there in your mind. Even if its meaning is self-contradictory, the representation is just a representation. "This sentence is false" is paradoxical when you try to interpret it, but there is nothing paradoxical about writing four English words between quote marks, it happens all the time.
If you're asking "Why is the sentence 'This sentence is false' both true and false?" you'll end up confused, because you dereferenced it in the question, and the referent is self-contradictory. Ask "Why do I think the sentence 'This sentence is false' is both true and false?" and you'll be able to see how your mind, as an interpreter, goes into an infinite loop - suggesting that not every syntactical English sentence refers to a proposition.
By starting with a belief, un-dereferenced, inside quote marks, you start with an unconfused thing - a cognitive representation. Then you keep tracing back the chain of causality until you arrive at something confusing. Then you unconfuse it. Then you keep tracing.
It really does help to start with something unconfused.
Unknown said: So there is an actually unanswerable question (at least as far as anyone knows, by any concepts anyone has yet conceived of), and it is not a meaningless question.
1) No one knows what science doesn't know.
2) Perhaps you should ask "Why do I think this question is unanswerable?" rather than "Why is this question unanswerable?"
↑ comment by rasthedestroyer100 · 2016-06-30T12:02:57.219Z · LW(p) · GW(p)
"No one knows what science doesn't know."
This sort of anthropomorphic bias leads to conceptual errors. 'Science' is the method of acquiring knowledge and the collection of acquired knowledge to which the method is rigorously applied. It is incapable of knowing anything independently of what individuals know; in fact, it can't know anything at all without some knowing individual to practice it. And to be sure, we can know things 'science doesn't know': we know we are in love, that we are happy or sad, that we played baseball for the first time when we were 6 years old at the park in Glens Falls, etc.
comment by sonic2 · 2008-03-10T20:16:32.000Z · LW(p) · GW(p)
"Why do I think I have free will?" "Because I do," is a perfectly good answer (assuming free will). Trying to trace that back to anything is question begging. "Why do I think I need to deny my free will?" Wouldn't I need to ask that question to be sure that my original answer isn't based on bias?
↑ comment by rasthedestroyer100 · 2016-06-30T12:06:27.954Z · LW(p) · GW(p)
""Why do I think I have free will?" "Because I do," is a perfectly good answer (assuming free will)."
Disagree. Notice how this answer is only 'good' assuming free will. But our assumption of free will is exactly what we are seeking to understand the cause of. We can assume free will is correct and that this is adequate to justify our answer ('because I do'), but then we have only re-posited the assumption in the consequent.
comment by Nick_Tarleton · 2008-03-10T20:46:56.000Z · LW(p) · GW(p)
Since our introspection ability is so limited, this method sounds like it could easily end up resulting in, not the correct explanation of the belief and explanation-away of the phenomenon, but a just-so story that claims to explain away something that might actually exist. This is not a Fully General Counterargument; a well-supported explanation of the belief is probably right, but more support is needed than the conjecture alone. Look how many candidate explanations have been offered for belief in free will.
comment by mitchell_porter2 · 2008-03-11T02:58:33.000Z · LW(p) · GW(p)
Eliezer, in the last few posts you have proposed a method for determining whether a question is confused (namely, ask why you're asking it), and then a method for getting over any sense of confusion which may linger even after a question is exposed as confused ("understand in detail how your brain generates the feeling of the question"). The first step is reasonable, though I'd think that part of its utility is merely that it encourages you to analyse your concepts for consistency. As for the second step, I do not recall experiencing this particular form of residual confusion; if I'm analysing a question and I still feel confused, I would think it was because I was not finished with the analysis.
So what's my problem? The issue is whether this procedure helps in any way with the answering of philosophical or metaphysical questions. I can see the first step leading to (1) a double-check that your concepts make sense (2) attention to epistemic issues. (1) is OK. But (2) is certainly a place where presuppositions can insert themselves. Suppose I'm asking myself "Why are there no positive integers a, b, c, n, with n > 2, such that a^n + b^n = c^n?" If I go reflexive and instead ask "Why do I think there is no such set of integers?", I might notice that this is merely an inductive generalization on my part, from the observed fact that no-one has ever found such a set of integers. And then, if I have a particular epistemology, I might say "But inductive generalizations can never be proved, and so my original question is pointless, because I will never know if there are indeed no such sets, short of being lucky enough to find a counterexample!" And maybe I'll throw in a personal confusionectomy just to finish the job; and the result would be that I never get to discover Wiles's proof of the theorem.
It is a somewhat silly example, but I would think that it illustrates a real hazard, namely the use of this procedure to rationalize rather than to explain.
comment by Ben_Jones · 2008-03-11T10:17:35.000Z · LW(p) · GW(p)
I think the confusion may have arisen from the incongruous title to this post. Inserting 'Why do I believe...' before your query is an excellent heuristic, but you can't right a wrong question. You can only get better at recognising them.
comment by Frank_Hirsch · 2008-03-11T13:20:53.000Z · LW(p) · GW(p)
Frank, what does that have to do with the quality of the paper I linked?
James, everything. The paper looks very much like the book in a nutshell plus an actual experiment. What does the paper have to do with "And I know you didn't simply leave out an explanation that exists somewhere, because such understanding would probably mean a solution for the captcha problem."? I find these 13 and 12 year old papers more exciting. And here is some practical image recognition (although no general captcha) stuff.
comment by Amanojack · 2010-03-12T22:58:05.882Z · LW(p) · GW(p)
Nice post, and great method.
On free will, I'd like to pose a question to anyone interested: What do you think it would feel like not to have free will?
(Or, what do you think it would feel like to not think you have free will?)
↑ comment by Rain · 2010-03-12T23:05:58.254Z · LW(p) · GW(p)
The only consistent way I can think of existing in a form without free will would be as a "prisoner" in my body: a mind that is capable of thinking and learning from the information presented to it by the senses, but unable to alter it in any way, the arms and body moving without the consent of the conscious mind.
↑ comment by JGWeissman · 2010-03-12T23:23:28.850Z · LW(p) · GW(p)
If my actions were not correlated to my desires and my earlier resolutions, this would feel like not having free will. Weak correlation would feel like diminished free will.
↑ comment by Amanojack · 2010-03-14T02:34:15.982Z · LW(p) · GW(p)
Weak correlation sounds like akrasia. In this interpretation of free will, the difference between wanting and liking might then say that 100% free will is impossible.
↑ comment by JGWeissman · 2010-03-15T04:21:24.422Z · LW(p) · GW(p)
Here is an example of what I am talking about that happened yesterday. I was staying with friends, and in the morning I went to take a shower. So I gathered the clothes I would put on afterwards, and my towel. But when I got into the bathroom, I found that instead of the towel, I had my sweater, which had been on the shelf above where the towel was hanging, and which I apparently grabbed instead. This felt like not having free will.
↑ comment by Amanojack · 2010-03-16T05:15:34.047Z · LW(p) · GW(p)
It sounds like you trusted the judgment of your earlier self (or a subconscious subroutine) to have grabbed the right item, but there was a glitch. This reminds me of those dreams where it's a given that "you" have already made a major decision in the dream, but it happened in the past (before you entered the dream world) so you had no control over it. That's one terrible feeling, if the decision was a bad one.
↑ comment by FAWS · 2010-03-13T00:44:36.592Z · LW(p) · GW(p)
I don't think I ever had this confused concept of free will. That is, thinking that the future of my actions is undetermined until I make a decision, or that my actions are governed by anything other than normal physics, never made any sense to me at all.
To me possessing a free will means being in principle capable of being the causal bottleneck of my decisions other than through pure chance.
Making a decision means caching the result of a mental calculation about whether to take a certain course of action (which in humans has the strong psychological consequence of affirming that result).
Being the causal bottleneck is much more difficult to define than I thought when I started this post, but it involves comparing what sort of change to me would result in a different decision to what sort of changes to the rest of the world would result in the same.
The only ways I could see not having a free will would be either not being able to make decisions at all, or not being able to make decisions unless under the influence of something else that is itself the causal bottleneck of the decision, and which is not part of me. I can't see how the second could be the case without some sort of puppet master (and there has to be some reason against concluding that this puppet master is the real me), but it's not obvious why being under the control of the puppet master would feel any different.
Replies from: Amanojack↑ comment by Amanojack · 2010-03-14T02:46:10.663Z · LW(p) · GW(p)
it's not obvious why being under the control of the puppet master would feel any different.
This is essentially why I posed the question. Anyone who believes they do have free will, or is disturbed by the idea that they don't, ought to be able to say what (at least they think) would feel different without it.
I posit that if such a person tries to describe how they think "lack of free will" would feel, either they won't be able to do it, or what they describe will be something obviously different from human experience (thereby implicitly redefining "free will" as something non-controversial).
Replies from: FAWS↑ comment by FAWS · 2010-03-14T02:52:52.318Z · LW(p) · GW(p)
I think Occam's razor is reason enough to disbelieve the puppet master scenario. I'd readily admit that my idea of free will might be something entirely non-controversial. And I don't have any problem with the idea that some currently existing machines might already have free will according to my definition (and that for others the puppet master scenario is essentially true).
Replies from: Amanojack↑ comment by AndyCossyleon · 2010-08-04T18:10:11.712Z · LW(p) · GW(p)
.
↑ comment by TheOtherDave · 2010-11-04T04:05:52.715Z · LW(p) · GW(p)
During the first month or so after my stroke, while my nervous system was busily rewiring itself, I experienced all sorts of transient proprioceptive illusions.
One of them amounted to the absence of the feeling of free will... I experienced my arm as doing things that seemed purposeful from the outside, but for which I was aware of no corresponding purpose.
For example, I ate breakfast one morning without experiencing control over my arm. It fed me, just like it always had, but I didn't feel like I was in control of it.
To give you an idea of how odd this was: at one point my arm put down the food item it was holding to my mouth, and I lay there somewhat puzzled... why wasn't my arm letting me finish it? Then it picked up a juice carton and brought it to my mouth, and I thought "Oh! It wants me to drink something... yeah, that makes sense."
It was a creepy experience, somewhat ameliorated by the fact that I could "take control" if I chose to... letting my arm feed me breakfast was a deliberate choice, I was curious about what would happen.
I think that's what it feels like to not experience myself as having free will, which is I think close enough to your second question.
As for your first question... I think it would feel very much like the way I feel right now.
Replies from: lukeprog, Amanojack, Anubhav↑ comment by lukeprog · 2011-02-02T06:33:00.425Z · LW(p) · GW(p)
That is creepy as hell.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-07-05T06:07:35.234Z · LW(p) · GW(p)
Heh. You're telling me? ;-)
↑ comment by Amanojack · 2011-04-27T17:49:55.869Z · LW(p) · GW(p)
Fascinating!
It felt like you couldn't control yourself, but which one of you (two) was really "yourself"? English usually refers to people and minds in the singular, but my mind feels more like a committee. Maybe the stroke drove more of a wedge between the committee members than usual.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-04-27T17:59:57.088Z · LW(p) · GW(p)
In this particular case, I don't think so.
I mean, we can go down the rabbit hole about what constitutes a "self," but in pragmatic terms, everything involved in making decisions seemed to be more or less aligned and coordinating as well as it ever does... what was missing was that I didn't have any awareness of it as coordinated.
In other words, it wasn't like my arm was going off and doing stuff that I had no idea why it was doing; rather, it was doing exactly what I would have made it do in the first place... I just didn't have any awareness of actually making it do so.
That said, the more extremely disjointed version does happen... google "alien hand syndrome."
Replies from: shminux↑ comment by Shmi (shminux) · 2012-02-22T06:06:34.449Z · LW(p) · GW(p)
I'd say that you felt that you had free will, along with more severe problems expressing it than usual. I'm guessing that paranoid schizophrenics obeying voices telling them to do things is a better example of a feeling of not having free will.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-02-22T16:02:52.708Z · LW(p) · GW(p)
Not to mention ordinary people who happen to have guns pointed to their heads.
↑ comment by Anubhav · 2012-02-22T08:01:17.632Z · LW(p) · GW(p)
Sounds to me like the left-brain interpreter experiencing lag.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-02-22T16:00:35.705Z · LW(p) · GW(p)
Yeah, that's more or less how I interpreted it... not so much lag, precisely, as a failure to synchronize. There were lots of weird neural effects that turned up during that time which, on consideration, seemed to basically be timing/synchronization failures, which makes a lot of sense if various parts of my brain were changing the speed with which they did things as the brain damage healed and the swelling went down.
Of course, it's one thing to know intellectually that my superficially coherent worldview is the result of careful stitching together of outputs from independent modules operating at different rates on different inputs; it's quite another thing to actually experience that coherency breaking down.
comment by Perplexed · 2010-07-29T23:26:44.624Z · LW(p) · GW(p)
"Why do I think I have free will?"
One answer might go like this: "But I don't think that. If I use W to denote the proposition that I have free will, I can think of no experiments whose results might provide evidence for or against W. I don't assign a high subjective probability to W. For any other proposition Y, I don't see any difference between P(Y|W) and P(Y|~W)."
"Nevertheless I choose to assume W because I often find it easier to estimate P(Y|W) than to directly estimate P(Y), especially when I can influence P(Y) by an act of 'will'."
A belief doesn't have to be useful to be valid; an assumption doesn't have to be true to be useful.
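A minimal numerical sketch of that stance (a toy illustration with made-up probabilities): if P(Y|W) equals P(Y|~W) for every Y, then W makes no predictions at all, yet nothing stops you from adopting it as a working assumption.

```python
# Toy sketch: W is independent of Y by construction, so learning W tells you nothing about Y.
p_w = 0.5               # assumed prior on W (any value works)
p_y_given_w = 0.7       # assumed
p_y_given_not_w = 0.7   # equal by construction

p_y = p_y_given_w * p_w + p_y_given_not_w * (1 - p_w)   # law of total probability
assert abs(p_y - p_y_given_w) < 1e-12                   # P(Y) == P(Y|W): W is evidence-free
print(p_y)
```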
comment by orthonormal · 2010-08-12T16:19:13.802Z · LW(p) · GW(p)
(Note: this comment is a reply to this comment. Sorry for any confusion.)
Sereboi, I think once again we're miscommunicating. You seem to think I'm looking for a compromise between free will and determinism, no matter how much I deny this. Let me try an analogy (stolen from Good and Real).
When you look in a mirror, it appears to swap left and right, but not up and down; yet the equations that govern reflection are entirely symmetric: there shouldn't be a distinction.
Now, you can simply make that second point, but then a person looking at a mirror remains confused, because it obviously is swapping left and right rather than up and down. You can say that's just an illusion, but that doesn't bring any further enlightenment.
But if you actually ask the question "Why does a mirror appear to switch left and right, by human perception?" then you can make some progress. Eventually you come to the idea that the mirror actually reverses front and back, that the brain still tries to interpret the reflected image as a physical object, and that the way it finds to do this is by imagining stepping into the mirror and then turning around, at which point left and right are reversed. But it's just as valid to step into the mirror and do a handstand, at which point top and bottom are reversed; it's just that human beings are much more nearly symmetric left-right than top-bottom, so this version doesn't occur to us.
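A minimal sketch of that geometry (the axis conventions here are just illustrative): represent left/right, up/down, and front/back as coordinates, and the mirror, the turn-around, and the handstand as diagonal sign-flip matrices.

```python
import numpy as np

mirror = np.diag([1, 1, -1])        # a mirror reverses only the front/back axis
turn_around = np.diag([-1, 1, -1])  # 180-degree rotation about the vertical axis
handstand = np.diag([1, -1, -1])    # 180-degree rotation about the left/right axis

# Read the reflection as "me, having stepped through and turned around":
print(turn_around @ mirror)  # diag(-1, 1, 1) -> a residual left/right flip
# Read it instead as "me, having stepped through and done a handstand":
print(handstand @ mirror)    # diag(1, -1, 1) -> a residual up/down flip
```

Both products describe the same physical reflection; only the imagined rotation differs.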
Anyway, the point is that you learn more deeply by confronting this question than by just stopping at "oh, it's an illusion", but that the mathematical principle is in no way undermined by the solution.
The argument I'm making is that the same thing carries through in the free will and determinism confusion. By looking at why it feels like we have choices between several actions, any of which it feels like we could do, we learn about what it means for a deterministic algorithm to make choices.
I don't know whether this question interests you at all, but I hope you'll accept that I'm not trying to weaken determinism!
Replies from: sereboi↑ comment by sereboi · 2010-08-13T05:30:39.132Z · LW(p) · GW(p)
This makes sense, somewhat, and now that I realize you're not trying to defend compatibilism I can shift gears a bit. I really think that the whole situation might just be a veridical paradox, both being true equally. So in a way I would like to concede to compatibilism; however, compatibilist attempts at solving the paradox are pathetic. Not sure if you have heard of Dialetheism; it's a growing Western philosophy that recognizes true contradictions. If compatibilism is a true contradiction, then there will never be an explanation for how it works. It will just have to be accepted as such. The problem for most rationalists is that it takes the wind out of their sails. Also, who decides something is a veridical paradox? Graham Priest has several books on the topic, which challenge Aristotle's Law of Non-Contradiction, which is what we base most Western debate on. Perhaps it is time to start rethinking the wheel on some rational solutions.
Here is a Wikipedia link to read more on dialetheism:
http://en.wikipedia.org/wiki/Dialetheism
Replies from: orthonormal, Leonhart↑ comment by orthonormal · 2010-08-13T15:44:52.472Z · LW(p) · GW(p)
Well, I wouldn't give up that easily! The default assumption should be that there's an underlying consistent reality, that paradoxes are in the map, not the territory (as was the case with the simple "mirror paradox" above). Assuming that an apparent contradiction is fundamental ought to be the last resort.
Think about free will for a while—focusing on what the act of choosing feels like, and also on what it might actually consist of—and then check Eliezer's proffered resolution. It's much less naive than you're expecting.
Replies from: sereboi↑ comment by sereboi · 2010-08-13T22:03:48.951Z · LW(p) · GW(p)
So I am finally starting to get the dogma of this community; correct me if I'm wrong, but this is basically a reductionist site, right?
Eliezer said: "Since free will is about as easy as a philosophical problem in reductionism can get"
Reductionism does not make sense for solving ALL problems; perhaps I'm too dumb to get it. The problem of free will vs. determinism has baffled philosophers for a long time. Calling it a veridical paradox might seem like a capitulation, but it's about the only thing that makes any kind of real sense. The problem is that most rationalists can't accept that, as if paradoxes have to be solved.
I also get the feeling that this community enjoys talking in circles and never really getting anywhere, like the whole fun of it is just discussing forever and presenting endless scenarios. That's not my bag. I'm NOT saying I'm right, but I'm definitely not into intellectual masturbation.
I have asked repeatedly for substantial evidence and have only gotten subjective reasoning delivered in analogies.
Thanks to everyone for your time responding to my questions. Believe me, my intent is not to bash you guys. It's just not for me.
-10 for me. I know, I know.
Ciao.
Replies from: orthonormal↑ comment by orthonormal · 2010-08-13T22:26:02.829Z · LW(p) · GW(p)
Well, you may or may not be interested in the site; that's up to you. I do want to point out that the reason I haven't tried to explain except by analogy is that a good explanation of a slippery problem (like a reductionistic account of choice) takes a while to read, and longer to write. I did link it for you if you're curious.
comment by Ronny Fernandez (ronny-fernandez) · 2011-09-20T19:10:17.194Z · LW(p) · GW(p)
If we ask "Why does reality exist instead of not existing?", it's like asking "Why does existence exist instead of not existing?" Well, that's because it's existence. That which is reality, or is a part of reality, is what exists. Being reality, or a part of reality, is a sufficient condition for existing. So of course reality exists; it's the base case.
A more complicated question is "Why is existence like this as opposed to some other way?" That's the business of physicists, and I don't have an answer.
comment by DanielLC · 2011-11-27T21:40:40.689Z · LW(p) · GW(p)
This reminds me of "Why do I have qualia?" I've also asked "Why do I think I have qualia?" I then realized that that's still not quite enough. The right question (or at least one I have to answer first) is "What do I think 'qualia' are?" I'm still thoroughly confused by this question. You could try that with free will too.
comment by hannahelisabeth · 2012-11-18T11:05:49.304Z · LW(p) · GW(p)
"Why does reality exist?"
I think the problem with this question is the use of the word "why." It is generally either a quest for intentionality (e.g., "Why did you do that?") or for earlier steps in a causal chain (e.g., "Why is the sky blue?"). So the only type of answer that could properly answer this question is one that introduced a first cause (which is, of course, a concept rife with problems) or one that supposed intentionality in the universe (like the universe deciding to exist as it is, or something equally nonsensical). This is probably (part of) why answering this question with the non-explanation "God did it" feels so satisfying to some--it supposes intentionality and creates a first cause. It makes you feel sated without ever having explained anything, but the question was a wrong one in the first place, because any answer would necessarily lead to another question, since the crux of the question is that of a causal chain.
I think a better question would be "How does reality exist?" as that seems a lot more likely to be answerable.
Replies from: Roho↑ comment by Roho · 2014-06-06T08:06:57.071Z · LW(p) · GW(p)
"Why does reality exist?"
I think the problem with this question is the use of the word "why."
Yes, I think with the question "Why does anything exist at all?", the technique would not go "Why do I think anything exists at all?", but rather: "Why do I think there is a reason for anything to exist at all?"
comment by A1987dM (army1987) · 2013-09-11T10:20:30.439Z · LW(p) · GW(p)
I believe I'm wearing socks, because I can see socks on my feet.
As for me, I mainly believe I'm wearing socks because I can feel socks on my feet. :-)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-09-11T16:44:26.295Z · LW(p) · GW(p)
As near as I can tell, I typically believe I'm wearing socks when I am because I can see and feel I'm wearing shoes, and I'm almost always wearing socks if I'm wearing shoes, and rarely otherwise.
It's hard to say, though. If I pay enough attention I can tell that I'm wearing socks as well (by feel, as you say)... I'm just not sure how often I pay that much attention. (My husband regularly expresses amazement that I'm not more irritated by holes, threadbare patches, etc. in my socks than I report being.)
That said, once I've drilled this far down the rabbit hole of exploring my beliefs with precision, I typically get distracted by the fact that the vast majority of the time I experience no beliefs whatsoever about my socks. It would be more precise to say that I reliably construct the belief I'm wearing socks when my attention is drawn to my feet if I'm wearing shoes at the time, based not so much on any direct evidence of my current sock-wearing as on habits conditioned by previous sock-wearing.
But we frequently use "believe" to refer to that sort of reliably-constructed-on-demand relationship to a proposition.
comment by themusicgod1 · 2015-08-16T06:13:21.911Z · LW(p) · GW(p)
Either way, the question is guaranteed to have an answer. You even have a nice, concrete place to begin tracing—your belief, sitting there solidly in your mind.
In retrospect this seems like an obvious implication of belief in belief. I would have probably never figured it out on my own, but now that I've seen both, I can't unsee the connection.
comment by rasthedestroyer100 · 2016-06-30T11:28:05.728Z · LW(p) · GW(p)
"Tracing back the chain of causality, step by step, I discover that my belief that I'm wearing socks is fully explained by the fact that I'm wearing socks. This is right and proper, as you cannot gain information about something without interacting with it."
Maybe I'm being pedantic on this point, but doesn't the interaction with the socks consist of the act of putting them on, which is what actually fully explains that you're wearing them? Of course, you can go back further along the causal chain to the reason you put them on - perhaps the room was cold, or you had to put on boots to go outside. Regardless, the further we regress backwards along this chain, the farther we find ourselves from the change in state from Socks_off to Socks_on in the causal sequence of events leading up to your wearing them.
That being said, wouldn't the circular explanation - I'm wearing socks because I'm wearing socks - actually be the explanation arrived at as we approach the causal chain's resulting state of affairs, where we're wearing socks instead of not wearing socks? The "new information" is actually gained by the interaction with the socks preceding our wearing them, or by our resulting need to explain this phenomenon.
comment by Gram_Stone · 2017-02-28T18:00:55.391Z · LW(p) · GW(p)
"Why was I born as myself rather than someone else?" versus "Why do I think I was born as myself rather than someone else?"
This never got solved in the comments.
I was sitting in microeconomics class in twelfth grade when I asked myself, "Why am I me? Why am I not Kelsey or David or who-have-you?" Then I remembered that there are no souls, that 'I' was a product of my brain, and thus that the existence of my mind necessitates the existence of my body (or something that serves a similar function). Seeing the contradiction, I concluded that I had reasoned, incoherently, as if 'I' were an ontologically fundamental mental entity with some probability of finding itself living some particular lives. That's unsurprising, because as a great deal of cognitive science and Eliezer's free will solution has demonstrated, humans often intuitively evaluate 'possibility' and plausibility by evaluating how easy it is to conceive of something, as a proxy. "I can conceive of 'being someone else,' thus there must be some probability that I 'could have been someone else', so what is the distribution, what is its origin, and what probability does it assign to me being me?"
Replies from: tristanm↑ comment by tristanm · 2017-03-01T03:29:36.164Z · LW(p) · GW(p)
There are also the many bizarre conclusions you can draw from the assumption that the mind you find yourself as was drawn from a probability distribution, such as the doomsday argument.
Replies from: Gram_Stone, jwoodward48↑ comment by Gram_Stone · 2017-03-01T05:02:14.912Z · LW(p) · GW(p)
Can you break that down to the extent that I broke down my confusion above? I'm having a hard time seeing deep similarities between these problems.
Replies from: tristanm↑ comment by tristanm · 2017-03-01T23:08:38.509Z · LW(p) · GW(p)
Like you said, it is conceivable that we could have been someone else, thus it is natural to at least flesh out the possible conclusions that can be reached from that assumption.
If "which mind you find yourself as" was indeed drawn from a probability distribution, then it is natural to believe that our observations about our consciousness are not too far from the mode, and are unlikely to be outliers. And yet, something that I have found surprising since childhood, I seem to find myself as a human mind, in a world where human minds seem to be the most intelligent and "most conscious" out of all the types of minds we find on Earth. This would seem tremendously lucky if it really were possible that we could have been born as something else. Humans are far from the most numerically abundant type of animal.
And so perhaps you would speculate that it could have only been possible to be another human mind, as these minds are the easiest to conceive of being. If you were born as a random human out of all humans that have ever and will ever exist, assuming a uniform distribution, then there is an X% chance you are in the last X% of humans who will ever live. This is a fairly disturbing thought. If you are roughly the 60 billionth human, there is a 50% chance that there will only be ~60 billion more humans. This is the "doomsday paradox." Even if you allocate some probability mass to minds that are not human, you still run into variations of doomsday paradoxes. If the universe will last for trillions of years, then it should be fairly disconcerting that we find ourselves towards the beginning of it, and not during some flourishing interstellar empire with trillions of intelligent minds.
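As a rough sketch of that arithmetic (the birth rank and the 50% confidence level are just the numbers assumed above):

```python
# Doomsday-style estimate: if your birth rank n is uniform over all N humans who will ever live,
# then with probability c you fall in the last c fraction of them, which implies N < n / (1 - c).
n = 60e9   # assumed birth rank: roughly the 60 billionth human
c = 0.5    # confidence level assumed above ("50% chance")

total_upper = n / (1 - c)   # with probability c, the total number of humans N is below this
print(total_upper - n)      # -> 6e10: at most ~60 billion more humans, at 50% confidence
```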
Another possibility is that the distribution is not over time, but only a function of time. In that case, we still have to explain why our experience is likely. Maybe the probability mass is not uniform over all minds, but more mass is allocated to minds that are capable of a greater "amount" of conscious experience. In that case, we would have to conclude that human minds probably have the greatest "capacity" for consciousness, of all minds that currently exist. If that is the case, then we would be surprised to observe any superintelligent aliens, for example (and so far we haven't).
I think it is interesting that the assumption that we could have been a different mind seems to allow us to constrain our expectations about what we observe. I don't particularly hold such a viewpoint, but it is worth considering the logical conclusions in my opinion.
↑ comment by jwoodward48 · 2017-03-03T00:09:37.026Z · LW(p) · GW(p)
Well, the problem with the Doomsday Argument, as I see it, is not the probability distribution but the assumption that we are "typical humans" with a typical perspective. If you think that the most likely cause of the end of humanity would be predictable and known for millennia, for example, then the assumption does not hold, as we currently do not see a for-sure end of humanity in our future.