The Ultimate Source

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-06-15T09:01:41.000Z · LW · GW · Legacy · 80 comments

This post is part of the Solution to "Free Will".
Followup to: Timeless Control, Possibility and Could-ness

Faced with a burning orphanage, you ponder your next action for long agonizing moments, uncertain of what you will do.  Finally, the thought of a burning child overcomes your fear of fire, and you run into the building and haul out a toddler.

There's a strain of philosophy which says that this scenario is not sufficient for what they call "free will".  It's not enough for your thoughts, your agonizing, your fear and your empathy, to finally give rise to a judgment.  It's not enough to be the source of your decisions.

No, you have to be the ultimate source of your decisions.  If anything else in your past, such as the initial condition of your brain, fully determined your decision, then clearly you did not.

But we already drew this diagram:

[Figure omitted: the two causal diagrams from Timeless Control; in the left-hand structure the Future is computed from the Present, and the Present from the Past.]

As previously discussed, the left-hand structure is preferred, even given deterministic physics, because it is more local; and because it is not possible to compute the Future without computing the Present as an intermediate.

So it is proper to say, "If-counterfactual the past changed and the present remained the same, the future would remain the same," but not to say, "If the past remained the same and the present changed, the future would remain the same."
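
Here is a minimal sketch of that asymmetry (an illustrative toy model, not part of the original post), with a made-up deterministic update rule standing in for physics; the Future is computed from the Present alone, so the Past matters only by way of the Present it produced:

```python
# Toy model (illustrative only): deterministic "physics" in which each state
# is computed solely from the state one tick earlier, so the causal structure
# is the chain Past -> Present -> Future.

def step(state):
    """One tick of toy deterministic physics (made-up update rule)."""
    return 3 * state + 1

actual_past = 2
actual_present = step(actual_past)     # 7
actual_future = step(actual_present)   # 22

# Counterfactual surgery 1: a different past, but the present held fixed.
# The future is unchanged, because computing it never consults the past.
surgical_past = 1000                   # deliberately unused by the physics below
assert step(actual_present) == actual_future

# Counterfactual surgery 2: the same past, but a different present.
# Now the future does change.
surgical_present = actual_present + 1
assert step(surgical_present) != actual_future
```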

Are you the true source of your decision to run into the burning orphanage?  What if your parents once told you that it was right for people to help one another?  What if it were the case that, if your parents hadn't told you so, you wouldn't have run into the burning orphanage?  Doesn't that mean that your parents made the decision for you to run into the burning orphanage, rather than you?

On several grounds, no:

If it were counterfactually the case that your parents hadn't raised you to be good, then it would counterfactually be the case that a different person would stand in front of the burning orphanage.  It would be a different person who arrived at a different decision.  And how can you be anyone other than yourself?  Your parents may have helped pluck you out of Platonic person-space to stand in front of the orphanage, but is that the same as controlling the decision of your point in Platonic person-space?

Or:  If we imagine that your parents had raised you differently, and yet somehow, exactly the same brain had ended up standing in front of the orphanage, then the same action would have resulted.  Your present self and brain screen off the influence of your parents - this is true even if the past fully determines the future.

But above all:  There is no single true cause of an event.  Causality proceeds in directed acyclic networks.  I see no good way, within the modern understanding of causality, to translate the idea that an event must have a single cause.  Every asteroid large enough to reach Earth's surface could have prevented the assassination of John F. Kennedy, if it had been in the right place to strike Lee Harvey Oswald.  There can be any number of prior events which, if they had counterfactually occurred differently, would have changed the present.  After spending even a small amount of time working with the directed acyclic graphs of causality, the idea that a decision can only have a single true source sounds just plain odd.

So there is no contradiction between "My decision caused me to run into the burning orphanage", "My upbringing caused me to run into the burning orphanage", "Natural selection built me in such fashion that I ran into the burning orphanage", and so on.  Events have long causal histories, not single true causes.
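
A tiny sketch of that point (the variables and numbers are invented for illustration, not taken from the post): an outcome can depend counterfactually on several ancestors at once, so none of them is privileged as "the" cause:

```python
# Illustrative toy causal model.  Several prior events each feed into the
# decision; flipping any one of them, with the others held fixed, changes
# the outcome.  Each is "a" cause; none is the single true source.

def runs_into_orphanage(parents_taught_helping, read_hero_comics,
                        empathy_with_children, fear_of_fire):
    courage = parents_taught_helping + read_hero_comics + empathy_with_children
    return courage > fear_of_fire

actual = dict(parents_taught_helping=1, read_hero_comics=1,
              empathy_with_children=2, fear_of_fire=3)
assert runs_into_orphanage(**actual)   # you do run in

for name in ("parents_taught_helping", "read_hero_comics",
             "empathy_with_children"):
    counterfactual = dict(actual, **{name: 0})
    # In each counterfactual the decision flips, so every one of these prior
    # events is part of the causal history of the rescue.
    assert not runs_into_orphanage(**counterfactual)
```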

Knowing the intuitions behind "free will", we can construct other intuition pumps.  The feeling of freedom comes from the combination of not knowing which decision you'll make, and of having the options labeled as primitively reachable in your planning algorithm.  So if we wanted to pump someone's intuition against the argument "Reading superhero comics as a child is the true source of your decision to rescue those toddlers", we reply:

"But even if you visualize Batman running into the burning building, you might not immediately know which choice you'll make (standard source of feeling free); and you could still take either action if you wanted to (note correctly phrased counterfactual and appeal to primitive reachability).  The comic-book authors didn't visualize this exact scenario or its exact consequences; they didn't agonize about it (they didn't run the decision algorithm you're running).  So the comic-book authors did not make this decision for you.  Though they may have contributed to it being you who stands before the burning orphanage and chooses, rather than someone else."

How could anyone possibly believe that they are the ultimate and only source of their actions?  Do they think they have no past?

If we, for a moment, forget that we know all this that we know, we can see what a believer in "ultimate free will" might say to the comic-book argument:  "Yes, I read comic books as a kid, but the comic books didn't reach into my brain and force me to run into the orphanage.  Other people read comic books and don't become more heroic.  I chose it."

Let's say that you're confronting some complicated moral dilemma that, unlike a burning orphanage, gives you some time to agonize - say, thirty minutes; that ought to be enough time.

You might find, looking over each factor one by one, that none of them seem perfectly decisive - to force a decision entirely on their own.

You might incorrectly conclude that if no one factor is decisive, all of them together can't be decisive, and that there's some extra perfectly decisive thing that is your free will.

Looking back on your decision to run into a burning orphanage, you might reason, "But I could have stayed out of that orphanage, if I'd needed to run into the building next door in order to prevent a nuclear war.  Clearly, burning orphanages don't compel me to enter them.  Therefore, I must have made an extra choice to allow my empathy with children to govern my actions.  My nature does not command me, unless I choose to let it do so."

Well, yes, your empathy with children could have been overridden by your desire to prevent nuclear war, if (counterfactual) that had been at stake.

This is actually a hand-vs.-fingers confusion; all of the factors in your decision, plus the dynamics governing their combination, are your will.  But if you don't realize this, then it will seem like no individual part of yourself has "control" of you, from which you will incorrectly conclude that there is something beyond their sum that is the ultimate source of control.

But this is like reasoning that if no single neuron in your brain could control your choice in spite of every other neuron, then all your neurons together must not control your choice either.
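
Here is a sketch of why that inference fails (an illustrative toy "brain" of seven voting units, invented for this example): no single unit can swing the outcome against all the others, yet the outcome is still nothing over and above what all of them jointly compute:

```python
# Illustrative toy "brain": the decision is a majority vote over units.
# No single unit controls the outcome against every other unit, yet the
# units together fully determine the choice - no extra decider is needed.

neurons = [1, 1, 0, 1, 0, 1, 1]   # invented firing pattern; 1 = "run in"

def choice(firing):
    return sum(firing) > len(firing) / 2

original = choice(neurons)        # True

# Flip each unit in turn, holding the others fixed: here, no single flip
# changes the decision, so no individual part looks like it is "in control".
pivotal = [i for i in range(len(neurons))
           if choice(neurons[:i] + [1 - neurons[i]] + neurons[i + 1:]) != original]
assert original and pivotal == []
```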

Whenever you reflect, and focus your whole attention down upon a single part of yourself, it will seem that the part does not make your decision, that it is not you, because the you-that-sees could choose to override it (it is a primitively reachable option).  But when all of the parts of yourself that you see, and all the parts that you do not see, are added up together, they are you; they are even that which reflects upon itself.

So now we have the intuitions that:

- you don't know your own decision in advance, and each option feels primitively reachable;
- no single part of yourself, examined on its own, determines your decision, since the rest of you could override it;
- and to have "free will", you must be the ultimate source of your decisions, with nothing in your past fully determining them.

The combination of these intuitions has led philosophy into strange veins indeed.

I once saw one such vein described neatly in terms of "Author" control and "Author*" control, though I can't seem to find or look up the paper.

Consider the control that an Author has over the characters in their books.  Say, the sort of control that I have over Brennan.

By an act of will, I can make Brennan decide to step off a cliff.  I can also, by an act of will, control Brennan's inner nature; I can make him more or less heroic, empathic, kindly, wise, angry, or sorrowful.  I can even make Brennan stupider, or smarter up to the limits of my own intelligence.  I am entirely responsible for Brennan's past, both the good parts and the bad parts; I decided everything that would happen to him, over the course of his whole life.

So you might think that having Author-like control over ourselves - which we obviously don't have - would at least be sufficient for free will.

But wait!  Why did I decide that Brennan would decide to join the Bayesian Conspiracy?  Well, it is in character for Brennan to do so, at that stage of his life.  But if this had not been true of Brennan, I would have chosen a different character that would join the Bayesian Conspiracy, because I wanted to write about the beisutsukai.  Could I have chosen not to want to write about the Bayesian Conspiracy?

To have Author* self-control is not only to have control over your entire existence and past, but to have initially written your entire existence and past, without having been previously influenced by it - the way that I invented Brennan's life without having previously lived it.  To choose yourself into existence this way would be Author* control.  (If I remember the paper correctly.)

Paradoxical?  Yes, of course.  The point of the paper was that Author* control is what would be required to be the "ultimate source of your own actions", the way some philosophers seemed to define it.

I don't see how you could manage Author* self-control even with a time machine.

I could write a story in which Jane went back in time and created herself from raw atoms using her knowledge of Artificial Intelligence, and then Jane oversaw and orchestrated her own entire childhood up to the point she went back in time.  Within the story, Jane would have control over her existence and past - but not without having been "previously" influenced by them.  And I, as an outside author, would have chosen which Jane went back in time and recreated herself.  If I needed Jane to be a bartender, she would be one.

Even in the unlikely event that, in real life, it is possible to create closed timelike curves, and we find that a self-recreating Jane emerges from the time machine without benefit of human intervention, that Jane still would not have Author* control.  She would not have written her own life without having been "previously" influenced by it.  She might preserve her personality; but would she have originally created it?  And you could stand outside time and look at the cycle, and ask, "Why is this cycle here?"  The answer to that would presumably lie within the laws of physics, rather than Jane having written the laws of physics to create herself.

And you run into exactly the same trouble, if you try to have yourself be the sole ultimate Author* source of even a single particular decision made by you - which is to say it was decided by your beliefs, inculcated morals, evolved emotions, etc. - which is to say your brain calculated it - which is to say physics determined it.  You can't have Author* control over one single decision, even with a time machine.

So a philosopher would say:  Either we don't have free will, or free will doesn't require being the sole ultimate Author* source of your own decisions, QED.

I have a somewhat different perspective, and say:  Your sensation of freely choosing clearly does not provide you with trustworthy information to the effect that you are the 'ultimate and only source' of your own actions.  This being the case, why attempt to interpret the sensation as having such a meaning, and then say that the sensation is false?

Surely, if we want to know which meaning to attach to a confusing sensation, we should ask why the sensation is there, and under what conditions it is present or absent.

Then I could say something like:  "This sensation of freedom occurs when I believe that I can carry out, without interference, each of multiple actions, such that I do not yet know which of them I will take, but I am in the process of judging their consequences according to my emotions and morals."

This is a condition that can fail in the presence of jail cells, or a decision so overwhelmingly forced that I never perceived any uncertainty about it.

There - now my sensation of freedom indicates something coherent; and most of the time, I will have no reason to doubt the sensation's veracity.  I have no problems about saying that I have "free will" appropriately defined; so long as I am out of jail, uncertain of my own future decision, and living in a lawful universe that gave me emotions and morals whose interaction determines my choices.

Certainly I do not "lack free will" if that means I am in jail, or never uncertain of my future decisions, or in a brain-state where my emotions and morals fail to determine my actions in the usual way.
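
A rough formalisation of that reinterpretation, offered as an illustrative sketch only (the names and conditions below are assumptions, not taken from the post):

```python
# Sketch of the reinterpreted "sensation of freedom": it is present when I
# believe I could carry out each of several actions without interference,
# and I do not yet know which of them I will take.

from dataclasses import dataclass

@dataclass
class Situation:
    reachable_actions: list       # actions I believe I can carry out
    interference: bool            # jail cell, physical restraint, ...
    decision_already_known: bool  # a choice so forced there was never doubt

def feels_free(s: Situation) -> bool:
    return (len(s.reachable_actions) >= 2
            and not s.interference
            and not s.decision_already_known)

print(feels_free(Situation(["run in", "stay out"], False, False)))  # True
print(feels_free(Situation(["stay put"], True, False)))             # False: jail
print(feels_free(Situation(["run in", "stay out"], False, True)))   # False: forced
```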

Usually I don't talk about "free will" at all, of course!  That would be asking for trouble - no, begging for trouble - since the other person doesn't know about my redefinition.  The phrase means far too many things to far too many people, and you could make a good case for tossing it out the window.

But I generally prefer to reinterpret my sensations sensibly, as opposed to refuting a confused interpretation and then calling the sensation "false".

80 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Ian_C. · 2008-06-15T10:01:52.000Z · LW(p) · GW(p)

If you model causality as existing not between two events, but between an object and its actions, then you explain the regularity of the universe while also allowing for self-directed entities (i.e. causal chains only have to go back as far as the originating entity instead of the Big Bang).

comment by Matthew_C.2 · 2008-06-15T13:43:03.000Z · LW(p) · GW(p)

No, you have to be the ultimate source of your decisions. If anything else in your past, such as the initial condition of your brain, fully determined your decision, then clearly you did not.

Words like "you" are far more problematic than words like "consciousness" that you eschew.

After all, even a young infant shows unmistakable signs of awareness, while the "I" self-concept doesn't arise until the middle of the toddler stage. The problem with free will is that there is no actual "you" entity to have it. The "you" is simply a conceptual place-holder built up from ideas of an individual body and its sensations.

comment by Hopefully_Anonymous · 2008-06-15T13:47:11.000Z · LW(p) · GW(p)

"There - now my sensation of freedom indicates something coherent; and most of the time, I will have no reason to doubt the sensation's veracity. I have no problems about saying that I have "free will" appropriately defined; so long as I am out of jail, uncertain of my own future decision, and living in a lawful universe that gave me emotions and morals whose interaction determines my choices."

  1. If "most of the time, [you] will have no reason to doubt the sensation's veracity", then I encourage you to read more neuroscience. Most of the time, you should have plenty of reason to doubt the veracity of your sensations, including regarding "free will".
  2. I think this post largely argues against strawmen - there are better arguments for you to address right in the comments of your recent blog posts.
  3. I'd posit that you keep returning to the saving-the-orphan example as a bias-seeking way to support your argument, rather than as a bias-overcoming approach, for reasons I already explained a couple times in the comment sections of your recent posts.

comment by Unknown · 2008-06-15T13:56:28.000Z · LW(p) · GW(p)

HA, both here and in your comments on the previous posts, you have continuously given the impression that you don't know what Eliezer is talking about.

comment by Hopefully_Anonymous · 2008-06-15T15:11:55.000Z · LW(p) · GW(p)

Unknown, I think we'll have to leave that to the judgement of the audience of this blog. Personally, I think Eliezer presented what he's talking about pretty clearly, and at this stage I don't think it does much good to repeat my criticisms of his conclusions, beyond encouraging people to read not just the argument-from-quantum-physics-plus-one's-own-sensory-impressions, but the best neuroscience research results on the biology behind the sensations of choice and "free will".

comment by Kip_Werking · 2008-06-15T15:21:14.000Z · LW(p) · GW(p)

Eliezer,

You may be referring to my draft paper "THE VIEW FROM NOWHERE THROUGH A DISTORTED LENS: THE EVOLUTION OF COGNITIVE BIASES FAVORING BELIEF IN FREE WILL". I don't think I've bothered to keep the paper online, but I remember you having read at least part of it, and the latest draft distinguishes between "actual control" and "novelist control". I believe earlier drafts referred to "control" and "control*".

I'm really glad to see someone as bright as you discussing free will. Here are some comments on this post:

  1. Like you, I think "The phrase means far too many things to far too many people, and you could make a good case for tossing it out the window." And like you, I nevertheless find myself strongly pulled towards one view in the debate, and writing page after page defending it. Maybe saying that free will is poorly defined just isn't enough fun to satisfy me.

On the Garden of Forking Paths I said something to the effect (can't find the post now): Mathematicians, because they strictly define their terms, have no difficulty admitting when a problem is too vague to have a solution. Just look at the list of Hilbert's 23 problems:

http://en.wikipedia.org/wiki/Hilbert's_problems

Many of them, like the 4th and 21st problems, are resolved and the answer is "we don't know! you have to be more precise! what exactly are you asking?"

Philosophers do not seem to have the same ability. I can't think of a single problem, involving any of philosopher's favorite fuzzy words like "God", "soul", "evil", "consciousness", "right", "wrong", "knowledge", where philosophers have said, with consensus, "actually, we figured out that the particular question doesn't have an answer, because you have to be more precise with your terms." And philosophers don't like to argue about terms that refer uncontroversially (or much less controversially) to things we can inspect in the real world, like the laptop on which I'm writing this post. They prefer to argue things that remain arguable.

(It makes me wonder whether philosophers have perverse incentives, like in the medical profession, to actually not solve problems, but keep them alive and worked on.)

  2. Personally, I lean towards no-free-will views. And, in doing that, I defend what I call a cognitive-biases+semantic-ambiguity view. The semantic ambiguity part is, as I just discussed, the idea that "free will" is too vague to work with.

[On this note, we shouldn't just stop when we come to this conclusion, and defend our pet-favorite-definition, or lack thereof, without convincing anybody else. If we say "free will" is poorly defined, and nobody believes us, because they all prefer their favorite definitions of free will, with which their positions in the debate win, we won't get anywhere. Instead, what are needed, I think, are large scale studies/surveys investigating how people use 'free will', and what they think the term means. Such studies should show, if we are right, that there is enormous variation in how people use the term, and what they think it means, and that people hardly use the term at all anyway. Then we would have knock-down evidence that should persuade many or most of the (more reasonable) philosophers working on this topic.]

The other part of my view, the cognitive biases view, is the part that pulls me to no-free-will-ism. This is what I discuss in my paper, mentioned above, about novelist control. I remember you rightly accusing me of having thrown "the kitchen sink" at the problem. While there is certainly a kernel of truth to that, and I would like to rewrite several paragraphs in the paper, I stand by most of what I wrote, and note in my defense that I only discuss about 15 of the approximately 100 biases listed on Wikipedia---I tried to leave much of the sink alone.

And while I see that you discuss a few cognitive biases / confusing sensations related to "free will", you don't mention ones I would consider important: the fundamental attribution error, the illusion of control, the just world phenomenon, and positive-outcome bias, etc.

  3. My pet definition of free will. You seem to have your own favorite definition of free will, with which compatibilism wins (and an extreme one at that, based on your comment in the other post about a person still being responsible despite just being instantiated a couple of seconds ago to commit some good/bad deed). Although I think the meaning of "free will" should be determined by how people tend to use the term, I have my own favorite definition, on which we don't have free will. I prefer my definition to yours for at least the following reasons:

A. On your definition, free will is something that people uncontroversially have. Nobody ever doubted that people have the sort of local control you discuss. Nobody ever doubted that people are more like computers than rocks. So, compatibilist definitions of free will are boring, and odd, to me for at least that reason.

In contrast, although it would be absurd for people to believe they have novelist control or something like it, it is not absurd to believe that people often believe absurdities, especially positive, anthropocentric ones about themselves, their special possessions, powers, and abilities, and their place in the universe. This is the same species that believed the sun revolved around the earth, a loving God created us and wants us to worship him, that we all possess immaterial souls, etc.

Thus, if you're willing to say that God, souls, etc., do not exist, but draw the line and say "wait a minute, I'm willing to deny the existence of all of these other absurdities, but I'm not going to give you free will. [Maybe adding: that cuts too close]. I'm even willing to redefine the term, as Dennett does, before admitting defeat", then you fit Tamler Sommers's wonderful observation that "[p]hilosophers who reject God, Cartesian dualism, souls, noumenal selves, and even objective morality, cannot bring themselves to do the same for the concepts of free will and moral responsibility." There seems to be some tension here.

B. On my pet definition of free will, the one I came into the debate with, and strongly feel pulled towards, free will is that power which solved an apparent problem: that my entire life destiny was fixed, before I was born, by circumstances outside of my control. This is what disturbed me (or alleviated me, depending on my mood, I suppose), when I first considered the problem. And, more importantly, this is what I think motivated most people, today and throughout history, when discussing free will. Going all the way back to the Greeks, then to Augustine and the Middle Ages, through the scientific revolution, when people were talking about free will, they were generally talking about this problem: that our fate is fixed before we are born (at least if the world is deterministic, as seemed plausible for so long and even today; and if it isn't deterministic, that doesn't seem to help).

In other words, when people were talking about free will, they were not considering the uncontroversial, local control and powers they have. Nobody said "hmm, even if an alien created me five seconds ago to pick up this apple, and implanted within me a desire to pick up this apple, and therefore now I have that desire, and look, lo and behold, I am picking up the apple. What should I call this amazing, beautiful, wonderful power? I know, let's call it free will!" Admitting that this is a bit of a straw man, but with a good point behind it, I submit that nobody ever talked about free will in a way even remotely close to this.

The point is this: you, Eliezer (and Dennett, McKenna etc.) might be cool customers, but the idea of an alien/God/machine creating me five seconds ago, implanting within me a desire/value to pick up an apple, and then having the local control to act on that desire/value SCARES THE LIVING FU** OUT OF PEOPLE—and not just because of the alien/God/machine.

Nobody, except for a handful of clever intellectuals like yourself, ever thought that free will was supposed to be consistent with situations like that. Rather, my strong suspicion (the reason I lean towards "free will doesn't exist" instead of "what is free will? tell me what it means and I'll tell you if it exists") is that "free will" was designed and intended to protect us from exactly and precisely that vulnerability.

Of course, nothing can protect us from that vulnerability. We can't build our own lives/characters, even with a time machine; we're denied by logic even more than physics. So free will never developed a clear definition. In accordance with the law of conjunction, the more philosophers said about free will (or God), the more details crafty philosophers were able to knock out. And so the term shed more and more of itself (like the Y chromosome) until it was little more than a LISP token: that thing that protects us from our fates being fixed before we're born. How? "Shhhhh. Silly child, we're not supposed to ask such questions."

This is at least a rough sketch of where I stand on the free will debate, one of the few intellectual topics on which I feel knowledgeable enough to really engage you. I work a lot, and don't read about free will as much as I used to, but this is my current position. I think we just need more data.

comment by Caledonian2 · 2008-06-15T15:24:59.000Z · LW(p) · GW(p)

Remarkable. Even the people who speculate that philosophers are deliberately not solving problems are refusing to carry out the necessary first step to solving them: defining their terms.

What quirk of human psychology could be responsible for this behavior?

comment by Robin_Z · 2008-06-15T16:09:52.000Z · LW(p) · GW(p)

Kip Werking, I can see where you're coming from, but "free will" isn't just some attempt to escape fatalism. Look at Eliezer's post: something we recognize as "free will" appears whenever we undergo introspection, for example. Or look at legal cases: acts are prosecuted entirely differently if they are not done of one's "free will", contracts are annulled if the signatories did not sign of their own "free will". We praise good deeds and deplore evil deeds that are done of one's own "free will". Annihilation of free will requires rebuilding all of these again from their very foundations - why do so, then, when one may be confident that a reasonable reading of the term exists?

comment by A.S. · 2008-06-15T16:35:27.000Z · LW(p) · GW(p)

Let's assume I can make a simulated world with lots of carefully scripted NPC's and with a script for the Main Character (full of interesting adventures like saving the galaxy), which somehow is forced upon a conscious being by means of some "exoself". Then I erase my memory and cease to be my old self, becoming this MC. Each of my actions is enforced by the exoself, I cannot do a single thing that isn't in the script. But of course I'm unaware of that (there are no extremely suspiciously unexplainable actions in the script) and still have all of the sensations I have right now - my consciousness explains each of my actions as having some reasons inside myself.

This seemed to me an example of Author* self-control at first (seemingly paradoxically lacking "free will"), but it's not really MC who had written the script; it is essentially another person. So I just leave it here as a slight exaggeration of our current state. Of course, we don't have such scripts (at least I hope so), but since (due to the neuroscience research Hopefully Anonymous obviously talks about) our actions are not determined by our conscious decisions, the situation is not totally different. Our unconscious mind can be viewed as a kind of exoself.

Replies from: ThisDan
comment by ThisDan · 2012-12-22T02:41:41.168Z · LW(p) · GW(p)

Yes, and the unconscious comes from where? The input from the deterministic universe. So if the unconscious is the exoself, then the exoself is just the universe - not "you" or anyone at all. It just is.

comment by poke · 2008-06-15T17:16:21.000Z · LW(p) · GW(p)

You essentially posit a "decision algorithm" to which you ascribe the sensations most people attribute to free will. I don't think this is helpful and it seems like a cop-out to me. What if the way the brain makes decisions doesn't translate well onto the philosophical apparatus of possibility and choice? You're just trading "suggestively named LISP tokens" for suggestively named algorithms. But even if the brain does do something we could gloss in technical language as "making choices among possibilities" there still aren't really possibilities and hence choices.

What it all comes down to, as you acknowledge (somewhat), is redefining terms. But if you're going to do that, why not say, "none of this really matters, use language how you will"? Actually, a lot of your essays have these little disclaimers at the end, where you essentially say "at least that's how I choose to use these words." Why not headline with that?

There are basically three issues with any of these loaded terms - free will, choice, morality, consciousness, etc - that need to be addressed: (1) the word as a token and whether we want to define it and how; (2) matters the "common folk" want reassurance on, such as whether they should assume a fatalistic outlook in the face of determinism, whether their neighbors will go on killing sprees if morality isn't made out of quarks, etc; (3) the philosophical problem of free will, problem of morality, etc.

Philosophers have made a living trying to convince us that their abstract arguments have some relevance to the concerns of the common man and that if we ignore them we're being insensitive or reductionist and are guilty of scientism and fail to appreciate the relevance of the humanities. That's egregious nonsense. Really these are three entirely separate issues. I get the impression that you actually think these problems are pseudo-problems but at the same time you tend to run issues 2 and 3 together in your discussions. Once you separate them out, though, I think the issues become trivial. It's obvious determinism shouldn't make us fatalistic because we weren't fatalistic before and nothing has changed, it's obvious we won't engage in immoral behavior if morals aren't "in the world" since we weren't immoral before and nothing has changed, etc.

comment by Patrick_(orthonormal) · 2008-06-15T17:34:59.000Z · LW(p) · GW(p)

Usually I don't talk about "free will" at all, of course! That would be asking for trouble - no, begging for trouble - since the other person doesn't know about my redefinition.

Boy, have we ever seen that illustrated in the comments on your last two posts; just replace "know" with "care". I think people have been reading their own interpretations into yours, which is a shame: your explanation as the experience of a decision algorithm is more coherent and illuminating than my previous articulation of the feeling of free will (i.e. lack of feeling of external constraint). Thanks for the new interpretation.

Hopefully Anonymous:

If I understand you correctly on calling the feeling of deliberation an epiphenomenon, do you agree that those who report deliberating on a straightforward problem (say, a chess problem) tend to make better decisions than those who report not deliberating on it? Then it seems that some actual decision algorithm is operating, analogously to the one the person claims to experience.

Do you then think that moral deliberation is characteristically different from strategic deliberation? If so, then I partially agree, and I think this might be the crux of your objection: that in moral decisions, we often hide our real objectives from our conscious selves, and look to justify those hidden motives. While in chess, there's very little sense of "looking for a reason to move the rook" as a high priority, the sort of motivated cognition this describes is pretty ubiquitous in human moral decision.

However, what I think Eliezer might reply to this is that there still is a process of deliberation going on; the ultimate decision does tend to achieve our goals far better than a random decision, and that's best explained by the running of some decision algorithm. The fact that the goals we pursue aren't always the ones we state— even to ourselves— doesn't prevent this from being a real deliberation; it just means that our experience of the deliberation is false to the reality of it.

comment by bambi · 2008-06-15T18:14:48.000Z · LW(p) · GW(p)

There are many terms and concepts that don't pay for themselves, though we might not agree on which ones. For example, I think Goedel's Theorem is one of them... its cuteness and abstract splendor doesn't offset the dumbness it invokes in people trying to apply it. "Consciousness" and "Free Will" are two more.

If the point here is to remove future objections to the idea that AI programs can make choices and still be deterministic, I guess that's fair but maybe a bit pedantic.

Personally I provisionally accept the basic deterministic reductionist view that Eliezer has been sketching out. "Provisionally" because our fundamental view of reality and our place in it has gone through many transformations throughout history and it seems unlikely that exactly today is where such revelations end. But since we don't know what might be next we work with what we have even though it is likely to look naive in retrospect from the future.

The viewpoint also serves to make me happy and relatively carefree... doing important things is fun, achieving successes is rewarding, helping people makes you feel good. Obsessive worry and having the weight of the world on one's shoulders is not fun. "Do what's fun" is probably not the intended lesson to young rationalists, but it works for me!

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-06-15T18:52:54.000Z · LW(p) · GW(p)

Kip Werking:

A. On your definition, free will is something that people uncontroversially have. Nobody ever doubted that people have the sort of local control you discuss. Nobody ever doubted that people are more like computers than rocks. So, compatibilist definitions of free will are boring, and odd, to me for at least that reason.

I say, "Hooray, I made it add up to normality!" Philosophy should be as normal as possible, but no more normal than that.

Yes, I probably was referring to your paper.

Thus, if you're willing to say that God, souls, etc., do not exist, but draw the line and say "wait a minute, I'm willing to deny the existence of all of these other absurdities, but I'm not going to give you free will. [Maybe adding: that cuts too close]. I'm even willing to redefine the term, as Dennett does, before admitting defeat", then you fit Tamler Sommers's wonderful observation that "[p]hilosophers who reject God, Cartesian dualism, souls, noumenal selves, and even objective morality, cannot bring themselves to do the same for the concepts of free will and moral responsibility." There seems to be some tension here.

The primary thing I want to save is the sensation of freedom - once you know what it does indicate as a matter of psychological causality, there's no reason to interpret it as meaning anything else. As for moral responsibility, that's a question of morality and would take us into a whole different class of arguments.

I am not attached to the phrase "free will", though I do take a certain amount of pride in knowing exactly which confusion it refers to, and even having saved the words as still meaning something. Most of the philosophical literature surrounding it - with certain exceptions such as your own work! - fails to drive at either psychology or reduction, and can be discarded without loss.

But the sensations that people feel when choosing, and the phenomenon of choice itself, are in a different class from belief in God.  Choice may not work the way people think it does, but they do, in fact, choose (in this our lawful universe).

By the way, I sometimes think of "soul" as referring to substrate-independent personal identity, though of course I don't use the word that way in my writing. You'll note that Jeffreyssai did, though.

The point is this: you, Eliezer (and Dennett, McKenna etc.) might be cool customers, but the idea of an alien/God/machine creating me five seconds ago, implanting within me a desire/value to pick up an apple, and then having the local control to act on that desire/value SCARES THE LIVING FU** OUT OF PEOPLE—and not just because of the alien/God/machine.

This might not give rise to a sensation of freedom, if the desire to pick up the apple is strong enough that there is never a moment of personal uncertainty about the choice.

Fear of being manipulated by an alien is common-sensically in a whole different class from fear of being deterministic within physics. You've got to worry about what else the alien might be planning for you; it's a new player on the board, and a player who occupies an immensely superior position.

Natural selection is sort of an intermediate case between aliens and physics. Evolution manipulates you, creates impulses within you that were actually chosen by criteria at cross-purposes to your deliberate goals, i.e., you think you're an altruist but evolution ensures that you'll want to hold on to power. But evolution is stupid and can be understood using finite human effort; it is not a smarter alien.

The alien is part of what scares people, that's why evolutionary psychology creates stronger fears than deterministic physics.

B. On my pet definition of free will, the one I came into the debate with, and strongly feel pulled towards, free will is that power which solved an apparent problem: that my entire life destiny was fixed, before I was born, by circumstances outside of my control.

"Before" = mixing timeful and timeless perspectives

Your entire life destiny is deterministic(ally branching) given the past, but it was not written before you were born.

I sometimes say, "The future is written and we are the writing."

So you can see why I might want to rescue even "free will" and not just the sensation of freedom; what people fear, when they fear they do not have free will, is not the awful truth.

Replies from: Benito
comment by Ben Pace (Benito) · 2013-07-14T09:28:03.493Z · LW(p) · GW(p)

A. On your definition, free will is something that people uncontroversially have. Nobody ever doubted that people have the sort of local control you discuss. Nobody ever doubted that people are more like computers than rocks. So, compatibilist definitions of free will are boring, and odd, to me for at least that reason.

I say, "Hooray, I made it add up to normality!" Philosophy should be as normal as possible, but no more normal than that.

This is excellent, and should be upvoted more.

comment by Hopefully_Anonymous · 2008-06-15T19:01:07.000Z · LW(p) · GW(p)

"If I understand you correctly on calling the feeling of deliberation an epiphenomenon, do you agree that those who report deliberating on a straightforward problem (say, a chess problem) tend to make better decisions than those who report not deliberating on it? Then it seems that some actual decision algorithm is operating, analogously to the one the person claims to experience."

"However, what I think Eliezer might reply to this is that there still is a process of deliberation going on; the ultimate decision does tend to achieve our goals far better than a random decision, and that's best explained by the running of some decision algorithm. The fact that the goals we pursue aren't always the ones we state— even to ourselves— doesn't prevent this from being a real deliberation;"

Patrick,

Those are interesting empirical questions. Why jump to the conclusion? Also, I think it'll be instructive to check the latest neuroscience research on them. We no longer need to go straight to our intuitions as a beginning and end point.

Secondly, an illusion/myth/hallucination may be that you have the ultimate capacity to choose between "deliberation" (running some sort of decision tree/algorithm) and a random choice process in each given life instance, and that illusion could be based on common cognitive biases regarding how our brains work, similar to the effect of various cognitive biases that skew our other intuitions about self and reality.

comment by Hopefully_Anonymous · 2008-06-15T19:14:46.000Z · LW(p) · GW(p)

"The primary thing I want to save is the sensation of freedom"

"So you can see why I might want to rescue even "free will" and not just the sensation of freedom; what people fear, when they fear they do not have free will, is not the awful truth."

Eliezer, I think your desire to preserve the concept of "freedom" is conflicting [or at the least has the potential to conflict] with your desire to provide the best models of reality.

"Fear of being manipulated by an alien is common-sensically in a whole different class from fear of being deterministic within physics. You've got to worry about what else the alien might be planning for you; it's a new player on the board, and a player who occupies an immensely superior position.

Natural selection is sort of an intermediate case between aliens and physics. Evolution manipulates you, creates impulses within you that were actually chosen by criteria at cross-purposes to your deliberate goals, i.e., you think you're an altruist but evolution ensures that you'll want to hold on to power. But evolution is stupid and can be understood using finite human effort; it is not a smarter alien.

The alien is part of what scares people, that's why evolutionary psychology creates stronger fears than deterministic physics."

How is that common sensical? It seems to me to be arbitrary to fear harm by deterministic physics less than harm by alien manipulation (not to mention that the latter would seem at worst to be a subset of the former, and at best good news that we don't live in a fatally deterministic reality). The same applies to deterministic physics vs. evo psych.

I feel like you're playing politics here, attempting to cobble together support for your narrative by being reciprocative toward elements of other commenters' narratives. I think this is a very different thing than attempting to overcome bias to provide more accurate models of reality.

comment by Patrick_(orthonormal) · 2008-06-15T21:11:48.000Z · LW(p) · GW(p)

HA:

Those are interesting empirical questions. Why jump to the conclusion?

I didn't claim it was a proof that some sort of algorithm was running; but given the overall increased effectiveness at maximizing utility that seems to come with the experience of deliberation, I'd say it's a very strongly supported hypothesis. (And to abuse a mathematical principle, the Church-Turing Thesis lends credence to the hypothesis: you can't consistently compete with a good algorithm unless you're somehow running a good algorithm.)

Do you have a specific hypothesis you think is better, or specific evidence that contradicts the hypothesis that some good decision algorithm is generally running during a deliberation?

Also, I think it'll be instructive to check the latest neuroscience research on them. We no longer need to go straight to our intuitions as a beginning and end point.

Oh, I agree, and I'm fascinated too by modern neuroscientific research into cognition. It just seems to me that what I've read supports the hypothesis above.

I wonder if you're bothered by Eliezer's frequent references to our intuitions of our cognition rather than sticking to a more outside view of it. It seems to me that his picture of "free will as experience of a decision algorithm" does find support from the more objective outside view, but that he's also trying to "dissolve the question" for those whose intuitions of introspection make an outside account "feel wrong" at first glance. It doesn't seem that's quite the problem for you, but it's enough of a problem for others that I think he's justified in spending time there.

Secondly, an illusion/myth/hallucination may be that you have the ultimate capacity to choose between "deliberation" (running some sort of decision tree/algorithm) and a random choice process in each given life instance...

Again, I don't think that anyone actually chooses randomly; even the worst decisions come out with far too much order for that to be the case. There is a major difference in how aware people are of their real deliberations (which chiefly amounts to how honest they are with themselves), and those who seem more aware tend to make better decisions and be more comfortable with them. That's a reason why I choose to try and reflect on my own deliberations and deliberate more honestly.

I don't need some "ultimate capacity" to not-X in order for X to be (or feel like, if you prefer) my choice, though; I just need to have visualized the alternatives, seen no intrinsic impediments and felt no external constraints. That's the upshot of this reinterpretation of free will, which both coincides with our feeling of freedom and doesn't require metaphysical entities.

comment by Patrick_(orthonormal) · 2008-06-15T21:12:30.000Z · LW(p) · GW(p)

Oh, dang it.

comment by Kip_Werking · 2008-06-15T21:43:15.000Z · LW(p) · GW(p)

Eliezer,

"I am not attached to the phrase "free will", though I do take a certain amount of pride in knowing exactly which confusion it refers to, and even having saved the words as still meaning something. Most of the philosophical literature surrounding it - with certain exceptions such as your own work! - fails to drive at either psychology or reduction, and can be discarded without loss."

Your modesty is breathtaking!

"Fear of being manipulated by an alien is common-sensically in a whole different class from fear of being deterministic within physics. You've got to worry about what else the alien might be planning for you; it's a new player on the board, and a player who occupies an immensely superior position."

Sure, they are not identical. But they are relevantly similar, because whether the-world-before-you-were-born or God/aliens/machine did the work, it wasn't you - and that's what people want: they want to be doing whatever it is that the-world-before-you-were-born or God/aliens/machine did. At least, they want it to be the case that they are not vulnerable to the whims of these entities. If God/alien/machine might be saintly or malicious, and design my life accordingly, the thousand monkeys of natural selection banging away on their typewriters hardly makes people feel better about free will.

"Your entire life destiny is deterministic(ally branching) given the past, but it was not written before you were born."

I didn't say "written", I said "fixed". And it clearly is fixed. Given determinism, there is only one future, and that future is fixed/settled/decided/unchangeable - however you want to say it - given the laws of nature and initial state.

"So you can see why I might want to rescue even "free will" and not just the sensation of freedom; what people fear, when they fear they do not have free will, is not the awful truth."

Well, this is an empirical claim, and data may one day decide it. It seems to me that your view is:

  1. People think of free will as that power which prevents it from being the case that state A of the universe determines state C regardless of what person B, in between does.

While I think:

  1. People think of free will as that power which prevents it from being the case that our destinies are fixed before we are born.

Regarding 1, I think non-specialists (and specialists) can make huge mistakes when thinking about free will. But I don't think this is one of them. I don't think anybody worries about it being the case that:

"If I stop typing this post right now, the post will still get typed, because, damn it, I don't have free will! No matter what, the post will get typed. I could go take a shower, wash my hair, and drive to Maryland, but those keys will still be magically clicking away. And that terrifies me! I hope I have free will, so I can prevent that from happening."

Nobody thinks that, just as nobody thinks "I have a desire to pick up an apple, I'm not sure exactly where it came from, but it sure is powerful (perhaps not so powerful as to leave no doubt in my mind about whether I will pick it up---if that detail concerns you), but powerful enough, and look, lo and behold, I am exercising my local control over the apple, to satisfy my desire, wherever it came from, and now I am picking it up! What should I call this marvelous, wonderful power? Let's call it free will." Nobody said that either.

The one thing people have said, since the Greeks, through the Middle Ages, through the Scientific Revolution, and onward is: if determinism is true, my life destiny is fixed before I am born, and if indeterminism is true, that doesn't help. I sure hope I have free will, so I can prevent this from being the case.

But I don't have any data to support these assertions about what people think when they worry about free will and use the term. I don't think anybody has that data, and the controversy may not be resolved until someone does (and perhaps not even then).

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-06-15T21:49:06.000Z · LW(p) · GW(p)

Kip, the problem word is simply before. Your destiny is fixed, but it is not fixed before you were born. If you look at it timelessly, the whole thing just exists; if you do look at it timefully, then of course the future comes after the present, not before it, and is caused by your decisions.

Replies from: TAG
comment by TAG · 2020-10-15T13:19:09.286Z · LW(p) · GW(p)

Your destiny is fixed, but it is not fixed before you were born.

if you do look at it timefully, then of course the future comes after the present, not before it, and is caused by your decisions

Determinism doesn't allow any special status for the complex physical events known as decisions. So it doesn't support folk intuitions that human choices are making a real difference.

comment by Kip_Werking · 2008-06-15T22:13:27.000Z · LW(p) · GW(p)

Eliezer,

The subtle ambiguity here is between two meanings of "is fixed":

  1. going from a state of being unfixed to a state of being fixed
  2. being in a state of being fixed

I think you are interpreting me to mean 1. I only meant 2, and that's all that I need. That the future is fixed2, before I am born, is what disturbs people, regardless of when the moment of fixing1 happens (if any).

KTW

comment by Will_Pearson · 2008-06-15T22:34:59.000Z · LW(p) · GW(p)

Yudkowsky: My problem with this view is I don't know when other people/entities are "making choices". Is my computer making a choice to follow my suggestion on which operating system to boot up each time it is switched on (sometimes it gets impatient and makes up its own mind!)? Is it a moral decision maker? If not, at what sophistication does it become so?

Lots of this has been covered by Dennett, although I am not quite sure how much of his philosophy you take on. E.g. are you a Realist (capital R) for beliefs?

comment by Caledonian2 · 2008-06-15T23:59:39.000Z · LW(p) · GW(p)
Your destiny is fixed, but it is not fixed before you were born.

No. [<-- Useless single-line flat contradiction that can be deleted without affecting any of the actual arguments. EY.] You like to claim that "determined does not mean predetermined", but of course that's precisely what it means. If the state at time 10 is determined by time 9, and time 9 by time 8, etc., then it follows that the state at time 10 is completely derived from the state at time 1. The only way events can be not predetermined is if they're not determined by the events before them.

Your destiny may be fixed, or it may not, but whether it's fixed is irrelevant - what matters is whether it can be known. Our ignorance makes determinism indistinguishable from chance.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-06-16T00:05:24.000Z · LW(p) · GW(p)

The laws of physics are symmetrical, and if the future can be known perfectly given the past, so too, the past can be perfectly known given the future.

If you're going to ignore the causal structure and step outside time and look over the Block Universe, then you might as well say that the past was already determined 50 years later.

You might as well say that you can't possibly choose to run into the burning orphanage, because your decision was fully determined by the future fact that the child was saved.

If you are going to talk about time, the future comes after the present.

If you are going to talk about causal structure, the present screens off the past.

If you are going to throw that away and talk about determinism, then you're talking about a timeless mathematical object and it makes no sense to use words like "before".

Replies from: TAG
comment by TAG · 2020-10-15T14:03:42.673Z · LW(p) · GW(p)

If you are going to throw that away and talk about determinism, then you’re talking about a timeless mathematical object and it makes no sense to use words like “before”

Block universe theory isn't implied by determinism.

But even if it is the case, it would only mean that predetermination isn't any truer than postdetermination. If technically true, that doesn't address typical concerns about free will.

comment by Caledonian2 · 2008-06-16T00:39:14.000Z · LW(p) · GW(p)

The laws of physics are symmetrical, and if the future can be known perfectly given the past, so too, the past can be perfectly known given the future.
IF.

Which is the key to the matter. Whether the future is defined by the past is irrelevant - what matters is whether the nature of that determination can be known. And it can't.

So your 'if' clause fails, because it posits an impossible event. The future cannot be known.

And that's the key to understanding why we speak of choice. We can easily comprehend how one of our machines works, and we easily see that given knowledge of its states we can be highly accurate in predicting how it will change with time. But human minds are too complex for us to do this - so we attribute their operations, which we cannot comprehend, to chance. Ignorance is the key.

comment by Kip_Werking · 2008-06-16T01:00:54.000Z · LW(p) · GW(p)

"You might as well say that you can't possibly choose to run into the burning orphanage, because your decision was fully determined by the future fact that the child was saved."

I don't see how that even begins to follow from what I've said, which is just that the future is fixed2 before I was born. The fixed2 future might be that I choose to save the child, and that I do so. That is all consistent with my claim; I'm not denying that anyone chooses anything.

"If you are going to talk about causal structure, the present screens off the past."

If only that were true! Unfortunately, even non-specialists have no difficulty tracing causal chains well into the past. The present might screen off the past if the laws of physics were asymmetrical (if multiple pasts could map onto the same future)---but this is precisely what you deny in the same comment. The present doesn't screen off the past. A casual observation of a billiards game shows this: ball A causes ball B to move, which hits ball C and causes it to move, which hits ball D, etc. (Caledonian makes the same point above).

I'm not sure how long you're willing to keep the dialogue going (as Honderich says, "this problem gets a hold on you" and doesn't let go), but I appreciate your responses. There's a link from the Garden of Forking Paths here now, too.

comment by Cyan2 · 2008-06-16T02:51:49.000Z · LW(p) · GW(p)
No. [<-- Useless single-line flat contradiction that can be deleted without affecting any of the actual arguments. EY.]

Caledonian,

"Wrong. [etc.]"

would probably have gotten the same edit, but I suspect

"On the contrary, [etc.]" "I disagree. [etc.]" "I see a flaw in your reasoning: [etc.]"

would have passed without comment. If Eliezer's edits are bothering you, you might consider trying these or other formulations for your thesis statement.

comment by Cyan2 · 2008-06-16T03:08:29.000Z · LW(p) · GW(p)
But human minds are too complex for us to do this - so we attribute their operations, which we cannot comprehend, to chance. Ignorance is the key.

I'd agree that ignorance is the key, but not that people in general attribute the operations of the human mind to chance. Rather, it seems we attribute these operations to a noumenal essence that is somehow neither deterministic nor random. We make this attribution so readily (and not just to other human minds) because that is our internal experience -- the feeling of our algorithm from the inside.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-06-16T04:24:38.000Z · LW(p) · GW(p)
Unfortunately, even non-specialists have no difficulty tracing causal chains well into the past.

"Screens off" is a term of art in causal modeling. Technically, the present D-separates the past and future. This is visible in e.g. the counterfactual statement, "If the past changed but the present was held fixed, the future would not change; if the present changed but the past was held fixed, the future would change."

the future is fixed2 before I was born

It makes exactly as much sense to say, "The past is fixed fifty years after I am born."

comment by Hopefully_Anonymous · 2008-06-16T08:21:48.000Z · LW(p) · GW(p)

"I'd agree that ignorance is the key, but not that people in general attribute the operations of the human mind to chance. Rather, it seems we attribute these operations to a noumenal essence that is somehow neither deterministic nor random. We make this attribution so readily (and not just to other human minds) because that is our internal experience -- the feeling of our algorithm from the inside."

Cyan, great comment. You should blog.

comment by Unknown · 2008-06-16T08:28:37.000Z · LW(p) · GW(p)

Cyan: it does not feel "neither deterministic nor random". It just feels random.

comment by JamesAndrix · 2008-06-16T08:47:52.000Z · LW(p) · GW(p)

When I'm particularly torn on a choice, I flip a coin. But I don't always do what the coin says.

If my initial reaction to the result of the flip is to wish that the coin had come up the other way, then I go against it. If my reaction is relief, then I follow the coin. If I still don't care, then I realize that it really is too close to call, and either go with the coin, or pick some criteria to optimize for.

I don't know if this is telling me what I really want, tapping into unconscious decision making processes, or just forcing me to solidify my views in some direction. I will say I'm generally happy with the results.
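(An illustrative sketch, not from the original comment: the coin-flip heuristic above rendered as a toy function. The `gut_reaction` argument is an invented name standing in for the felt response to the flip.)

```python
import random

def coin_assisted_choice(heads_option, tails_option, gut_reaction):
    """Flip a coin, then decide based on your reaction to the result.

    gut_reaction: "wish_other" (go against the coin), "relief" (follow it),
    or "indifferent" (too close to call; just take the coin's answer).
    """
    flip = random.choice([heads_option, tails_option])
    other = tails_option if flip == heads_option else heads_option

    if gut_reaction == "wish_other":
        return other
    return flip  # "relief" or "indifferent": follow the coin

print(coin_assisted_choice("hunt", "gather", "wish_other"))
```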

I'm pondering the origin of minds that ask 'the wrong questions'.

If you wake up thinking 'Should I hunt today, or should I gather?' it's because some part of your brain has plans on how to do those things that it is reasonably confident of. (They are reachable.) Your conscious mind makes the decision that doesn't matter much. The real survival-critical decisions are 'made for you' and are consciously experienced as overwhelming fear, or perhaps dread of boredom or frustration.

Our conscious, semi-logical minds would evolve to be good at making these marginally important decisions, maybe negotiating based on what others are doing. (and upon failure, consulting coin flips or other omens.)

In this scenario we did not evolve conscious minds that are good at generating meaningful questions. This at least fits the data point I started with, but then my brain didn't evolve to be good at considering its origins, either.

comment by michael_vassar3 · 2008-06-16T09:59:47.000Z · LW(p) · GW(p)

I sense a strong bias here towards the belief that realism = using negative affect terminology, especially by Hopefully and Kip. Hopefully also seems to keep trying to insert his very interesting point about confabulation in place of, instead of in addition to, Eliezer's points about determinism. The neuropsychology of illusory decision procedures however is disturbing to a different disposition than the existence of a future.

comment by Hopefully_Anonymous · 2008-06-16T10:44:47.000Z · LW(p) · GW(p)

Michael, I responded in my blog because I'm overrepresented in recent comments.

comment by Ben_Jones · 2008-06-16T15:43:46.000Z · LW(p) · GW(p)

Kip, I think you've misinterpreted 'the present screens off the past'. Think of it this way: if you knew everything there was to know about one instant of a closed system, you'd be able to extrapolate forwards. Knowing about the instant 'before' would afford you no more knowledge. I think that's what Eliezer's trying to convey.

the idea of an alien/God/machine creating me five seconds ago, implanting within me a desire/value to pick up an apple, and then having the local control to act on that desire/value SCARES THE LIVING FU** OUT OF PEOPLE

It really shouldn't. Yes, in the timeless block universe, there's a difference between being alive for 24 years of nature and nurture, and an alien clicking you into existence. But we don't experience the block, we experience the present. 'Experiencing the block universe' is probably oxymoronic. But in the two scenarios, all the quarks are in the same place. There couldn't even be 'two identical-looking universes'. So why would/should you act differently?

As far as I can tell, Eliezer's reworking of free will simply says 'physics just is, and you're part of it, and knowing that shouldn't change the way you act.' 'Fearing' determinism (or alien intervention) doesn't make any sense; it's like fearing causality.

comment by Caledonian2 · 2008-06-16T16:42:51.000Z · LW(p) · GW(p)

'Fearing' determinism (or alien intervention) doesn't make any sense; it's like fearing causality.
People aren't motivated by facts, but by their models of facts. If there isn't a strong desire to produce accurate models, people will accept or reject models based on whether their implications are troubling to them. This is a fallacy related to the appeal to consequences - in this error, people conflate the rejection of a model with making the implications of that model untrue.

For example, refusing to reject the idea of an immortal soul because you don't want people to cease existing, or cutting short one's losses without acquiring the objective because that means the already-committed resources would have been lost "for nothing".

People DO fear the necessary implications of models, and will accept or reject models because of that fear.

comment by Hopefully_Anonymous · 2008-06-16T18:48:35.000Z · LW(p) · GW(p)

I'm no longer overrepresented in the blog's most recent comments, so:

Michael, "the neuropsychology of illusory decision procedures" is relevant to lines from Eliezer like "There - now my sensation of freedom indicates something coherent; and most of the time, I will have no reason to doubt the sensation's veracity" and the paragraphs that followed it in this post, although I agree that it may not be particularly relevant to discussing "the existence of a future".

comment by Caledonian2 · 2008-06-16T19:17:47.000Z · LW(p) · GW(p)

You can't change what a sensation indicates - it's triggered by its sufficient conditions and that's all. There is always reason to question sensations, because they are developed responses by the organism and thus have no logical connection with the states we hope they indicate.

Eliezer is just rationalizing his desire to stop having to think. He makes some statements about a concept, declares his innate emotional response on the subject to be valid, and ceases to inquire.

comment by Q_the_Enchanter · 2008-06-16T19:57:35.000Z · LW(p) · GW(p)

Eliezer, on your construal of free will, what content is added to "I chose to phi" by the qualification "of my own free will"?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-06-16T22:07:34.000Z · LW(p) · GW(p)

Q, if you're already known to be human, the content added will usually be along the lines of "I chose to phi, and I wasn't threatened into it, and no one bribed me." :)

However, the part I'm interested in is, "I chose to phi, with an accompanying sensation of uncertainty and freedom."

This is the sensation that is brought into intuitive conflict with the notion of a lawful universe. Since part of becoming a rationalist is learning to be a native of a lawful universe, it's important to understand that the sensation of uncertainty and freedom is something that happens within a lawful universe. Since, in fact, nothing precious is lost in acknowledging this, I deem that saying "nothing precious is lost in acknowledging this" may prove helpful to people becoming rationalists.

The empirical content of this sensation is that the choice was not overdetermined so strongly as to require no deliberation, and that several options were labeled "reachable" by your search process, and had to be judged. Where, if we wanted to understand the search process, we would interpret the "reachable" label as meaning "This is what would happen if-counterfactually the judgment of this system were to take this action." Though in this case the search process is an evolved adaptation and was never designed to have an interpretation per se; the mind's code is not commented.
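(An illustrative sketch, not from the original comment: a toy deliberation loop in which several options are labeled "reachable" and then judged. The option names, reachability test, and scores are all invented; the point is only to show where the "reachable" label and the judgment step sit in such a search process.)

```python
def deliberate(options, is_reachable, judge):
    """Label options reachable, then judge among them.

    "Reachable" is read counterfactually: this is what would happen if the
    judgment of this system were to select the action.
    """
    reachable = [o for o in options if is_reachable(o)]
    # More than one reachable option is the situation accompanied by the
    # sensation of uncertainty and freedom; the options still have to be judged.
    return max(reachable, key=judge)

options = ["run into the orphanage", "call for help", "stand frozen"]
scores = {"run into the orphanage": 2, "call for help": 1, "stand frozen": 0}

choice = deliberate(
    options,
    is_reachable=lambda o: o != "stand frozen",  # invented criterion
    judge=lambda o: scores[o],                   # invented moral/emotional scores
)
print(choice)  # -> "run into the orphanage"
```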

comment by Caledonian2 · 2008-06-16T23:21:27.000Z · LW(p) · GW(p)

I see we're once again trying to delete comments. What fun!

"There - now my sensation of freedom indicates something coherent; and most of the time, I will have no reason to doubt the sensation's veracity"

You cannot change what a sensation is triggered by and thus indicates. The properties of the sensation have no necessary relationship with the stimuli that induce it.

We always have reason to doubt the sensations we experience, precisely because they are manufactured by pre-conscious systems whose operation we do not possess an awareness of. Even a little experience with optical illusions is enough to convince any reasonable person that our sensations are constructed and non-veridical.

The temptation to go along with our feelings without examining them rationally is usually a sign of the drive to minimize the effort of thought. It is an extremely dangerous sign for a rationalist, as it means you've begun to rationalize instead of attempting to be rational.

comment by Fly2 · 2008-06-17T00:20:43.000Z · LW(p) · GW(p)

Vassar: "The neuropsychology of illusory decision procedures however is disturbing to a different disposition than the existence of a future."

Yes. HA's point about neuroscience and the illusion of "I" is largely orthogonal to EY's discussion concerning choice and determinism. However, the neuroscience that HA references is common knowledge in EY's peer group and is relevant to the topic under discussion...so why doesn't EY respond to HA's point?

(Consider an experiment involving the "hollow face" illusion. The mind's eye sees an illusory face surface. However, when asked to touch the mask's nose, subjects move their finger directly to the sunken surface; they don't hesitate at the illusory surface. The brain has multiple internal visual representations. Our internal "I" has no direct access to the visual representation used to direct the finger motion. One visual pathway goes from the occipital lobes at the rear of the brain upward through the dorsal regions to the parietal lobes and guides the finger movement. Another visual pathway goes from the occipital lobes downward through the ventral regions to the temporal lobes. The "mind's eye" only has direct access to the information passing ventrally.)

Conscious awareness is only a dim reflection of the brain's computational operation. "I" is a poor model of the human mind.

comment by Hopefully_Anonymous · 2008-06-17T02:14:59.000Z · LW(p) · GW(p)

Fly, great comment. I think the most likely answer is that Eliezer isn't as literate in neuroscience/cognitive science as he is in Bayesian reasoning, physics, computer programming, and perhaps a few other fields - otherwise examples like the one in your post would find their way as naturally into his posts as examples from the fields he's proficient in. If that's true, the good news is it shouldn't take him any more effort to become literate in neuroscience than in those other fields, and when he is, we'll probably all benefit from his creative approaches to teaching key concepts and to application.

Fly, I highly recommend you start blogging (and critiquing my blog posts when you get a chance)!

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-06-17T02:27:39.000Z · LW(p) · GW(p)

Eliezer isn't as literate in neuroscience/cognitive science

I was using neuroscience as my fuel for thinking about AI before I knew what Bayes's Rule was.

The reason I'm not responding in this thread is that things like anosognosia, split-brain experiments, fMRI etc. are orthogonal issues to the classical debate on free will, and if I ever handle it, I'll handle it in a separate post.

For now I'll simply note that if an fMRI can read your decision 10 seconds before you know it, it doesn't mean that "your brain is deciding, not you!" It means that your decision has a cause and that introspection isn't instantaneous. Welcome to a lawful universe.

comment by Ben_Hyink · 2008-06-17T05:16:11.000Z · LW(p) · GW(p)

Cases of apperceptive agnosia, and to a lesser extent brains split at a mature stage of development, provide examples of how apperception, and the apperceptive "I", are in fact relevant to performing typical cognitive functions. I try to be careful not to make sweeping blanket statements about features of experience with a variety of uses or subtle aspects (e.g. "self = illusion"; "perception = illusion"; "judgment = illusion"; "thought = illusion"; "existence = illusion"; "illusions = ???" ...now let's just claim that "sense-data" grounds scientific methodology and knowledge, somehow... ).

The fact that information doesn't converge in a single location in the brain does not imply that a functionally coherent "I" is not realized with access to sensory signals (and internally produced sensory imagination) and a capacity to make judgments about such content - even if physics says any instantiation within our domain of knowledge ultimately is timeless, not located in discrete 3D space and with subtle permutations manifested throughout a many-worlds space of causal possibility.

Epistemology (grounds for our ability to know anything at all, albeit without total certainty) precedes ontology (knowledge from scientific sources like empirical observation, logical analysis, mathematical modeling, Bayesian prediction, etc.) and at a deep level epistemology still reduces to basic intuitions of time and space accessed as frames of existence in which functional apperception - however it is instantiated - must integrate components of sensory perception (uni- or multimodal) into coherent physical objects as well as into collections of physical objects in unified perceptions.

Kant's epistemology had major flaws, most centrally his weak attempt to claim his world-access idealism was just as warranted as his world-access realism, but he was right when he claimed that without functional apperception - which probably is achieved largely by temporal coordination in addition to shared access to similar information by different brain regions - we would have as many functionally discrete "I"s as we have elements of experience. On the contrary, personal experience, which we seem to be able to intersubjectively communicate and display via behavior, negates that possibility (even for people with partial disorders of apperceptive access).

Other issues relevant to the claim of a coherent, integrated "self" over longer time scales (from a subjectively "timeful" view of a given causal path) with memory, or even among similar paths in the same "time slice", seem to be less substantial though not completely insignificant. However, there seems to be no basis whatsoever for claiming relevant continuity of physical instantiation (i.e. atoms aren't localized and matter may not even "pass through" time).

I should mention I find both the "timeless/multiverse/non-experientially-determined" and "timeful/trajectory/experientially-undetermined" interpretations of physics to be helpful to consider as the "real context" to the best of our knowledge, as a global whole and an imagined/predicted local trajectory of one's experience. The first interpretation offers means of gaining some detachment from the vicissitudes of life and some tolerance for risk and loss (e.g. delivering a campaign speech to a large crowd). The second interpretation promotes the inclination to seek optimal outcomes and reasoned selectivity among a wide array of options (e.g. choosing a better platform than "Free beer and toilet paper!" - unless one is running for a student office on a campus where satire sells).

comment by Hopefully_Anonymous · 2008-06-17T08:43:01.000Z · LW(p) · GW(p)

Ben, Great comment. Requests:

  1. Start blogging!
  2. Please visit and critique my blog in the comments.
  3. Get your Ph.D. (you could do worse than Robin or Nick as thesis advisors).
comment by Ben_Jones · 2008-06-17T09:43:43.000Z · LW(p) · GW(p)

Fly/Ben/Eliezer/All,

If you were to have your brain ported to another substrate, would you demand that the neurons that recognise that 'illusory surface' be ported before you could accept that the upload is 'you'? Or would you say that's an unproductive way of looking at it?

comment by Caledonian2 · 2008-06-17T10:04:04.000Z · LW(p) · GW(p)

The reason I'm not responding in this thread is that things like anosognosia, split-brain experiments, fMRI etc. are orthogonal issues to the classical debate on free will, and if I ever handle it, I'll handle it in a separate post.
Actually, the classical debate on that topic seems to be founded on our perception of ourselves as a unified being - when confronted with actions for which we cannot provide a causal explanation, we each say "I chose to do that" - and yet we have good reason to doubt that the systems responsible for making that statement were actually involved in the decision-making process.

If our sensation of making choices comes long after the choice is actually made, if it doesn't have anything to do with the act itself and our decision comes without any corresponding sensation entering our awareness, we don't need convoluted reasoning that tries to justify accepting our feelings as valid. We can just discard them as invalid and move on.

Why bother fiddling with the Gordian knot when we can just cut through it? We don't need to make any particular assertions about the lawfulness of the universe or the predetermination of events - we don't need any further assumptions at all. Our perceptions, and most especially our mental self-perceptions, are not veridical. Once we acknowledge that, we do not need to account for whatever convictions give rise to the classical debate on free will, because the burden of demonstration then falls on those who insist that it indicates something in particular.

comment by mitchell_porter2 · 2008-06-17T11:43:10.000Z · LW(p) · GW(p)

Caledonian: "Our perceptions, and most especially our mental self-perceptions, are not veridical. Once we acknowledge that we do not need to [do stuff]"

Do you think "our perceptions, and most especially our mental self-perceptions" are completely valueless? If not, where do you draw the line between valid and invalid?

comment by Caledonian2 · 2008-06-17T12:07:01.000Z · LW(p) · GW(p)

Do you think "our perceptions, and most especially our mental self-perceptions" are completely valueless?
Perceptions in general aren't completely useless. Mental perceptions - yes, totally worthless. Introspection tells us nothing of value, because our minds never had a need to accurately represent themselves to any degree and so are not designed to be able to do so. Result: garbage data. Even sensory perception is questionable. Despite its concerning the external, objective world, which clearly produces strong selection pressure for effective interaction and therefore requires at least a limited degree of accurate representation, we know that our senses contain peculiar weaknesses and possess processing limitations that result in 'illusions'.

It may be the case that these illusions arise from shortcuts taken in the pathways that lead to memory representation, but not taken in the pathways responsible for action. See the reference to the recessed/expressed "Hollow Face" illusion above. The consciousness-preceding systems that attempt to compensate for sensory limitations - like the ones responsible for "filling-in" our retinal blind spot by trying to interpolate what's in the blank space - may also provide awareness at some level of those flaws, but not at the level of our self-identity.

Perceptions can be trusted to the degree that multiple forms/instances of perception, each with different flaws and limits, can be engaged in. Only when multiply redundant checks can be made, and the results compared, can we even provisionally trust what we sense. For all we know, there are cognitive illusions that we've never even recognized because we possess no means of explicitly comparing the perception to others with complementary flaws.

Our feelings, most especially about the content of our thoughts and the validity of arguments, can be trusted not at all. The ancient Greeks mistook the feeling of completeness or closure as demonstration that a line of argumentation was valid, and look what that got them.

comment by mitchell_porter2 · 2008-06-17T12:48:52.000Z · LW(p) · GW(p)

Caledonian, the trouble with denying any validity at all to introspective perception is that it would imply that consciousness plays no role in valid cognition. And yet consider the elaborate degree of self-consciousness implied by the construction of the epistemology you just articulated! Are you really going to say you derived all that purely from sense perception and unconscious cognition, with no input from conscious reflection?

comment by Ben_Hyink · 2008-06-17T17:06:00.000Z · LW(p) · GW(p)

Caledonian,

Philosophy has developed quite a bit since the Greeks started the Western tradition, and I wasn't invoking Greek traditions, but I don't recall the ancient skeptics getting very far.

The saying "Scientists need philosophy of science [and epistemology] like birds need ornithology" is true in a practical sense but dismissing the whole topic as irrelevant is unwarranted. Ignoring epistemological issues may be pragmatic depending on one's career but lack of attention doesn't resolve epistemological issues.

Through reason we can use our senses to discover flaws in our sensory systems and intuitions about the world (as well as empirically confirm the existence of cognitive biases). However, we could never have begun to make such discoveries in this world if our reason had no access to sensory perceptions or if our sensory perceptions were not accessible in a framework of space and time offered as "basic intuitions." Whatever may exist beyond our access, our kind of experience in which we interact with physical objects outside of the direct and complete control of our imagination implies that some kind of world external to ourselves in which spatiotemporal kinds of interactions can occur exists, regardless of whether it is a "simulation" or something unfamiliar overlying a deeper reality. AGI programmers simulate a spatial world in which a young AGI system can operate temporally in part to verify actual learning is achieved, and do so in ways we can recognize based on how we learn about our environment.

Ultimately, little or no part of our experience is beyond doubt save for immediate, transitory experience (including the experience of remembering). Everything else, including the memory of recent immediate experience used for purposes of analysis, can be doubted as a complete illusion because our minds only have direct observational access to the present (Hume). However, while "absolute" knowledge and certainty are beyond the access of minds like ours, our experiences have a sufficient amount of regularity (e.g. the unity of apperception) and predictability to allow us to reach judgments about the conditions of our day-to-day reality (e.g. locating a doorknob, expecting a sunrise) and subject questions to formal scientific methods that offer much higher degrees of warranted confidence. Whatever they believed, the only "knowledge" people have ever had applies within their domain of access as spatiotemporal beings with reason and an ability to manipulate their environment - whether or not deeper truth lies beyond it - but that scope of warrant is fully sufficient for purposes relating to their domain of experience. This view, with some other components such as arguments to cast doubt on solipsistic beliefs, is a version of "pragmatic realism."

Sorry for veering a bit off-topic but I thought epistemology was relevant to the idea of consciousness just consisting of "illusions." The prevailing cognitive science view these days seems to be that "perception = a kind of illusion." My response is, "no and yes" - sensations are vital means of accessing the reality of an external world that have interpretative biases (e.g. color vision) as well as inaccuracies and quirks (e.g. blind spots, blindsight, saccades).

Ben Jones,

I'm not sure I understand the question; I don't see personal identity v. non-identity as a binary distinction but a fuzzy one. While artifacts and characterizing information can be thought of as a form of extended identity I think sustaining relevant kinds of functional processing to produce awareness and self-awareness somewhat like what we experience would be important for creating a similar subjective experience, but over the long run the manner of information processing might become very different (hopefully enriched and more expansive) from what realizes our kind of experience. Ben Goertzel has shared some useful perspectives on the future of uploaded human minds over the long run, such as running <99% on post-human programs, swapping human life memory files (preferably from a very large and highly diverse selection), perhaps eventually finding no compelling reason not to dissolve increasingly artificial barriers between individual identities.

comment by Caledonian2 · 2008-06-17T17:46:00.000Z · LW(p) · GW(p)

Ignoring epistemological issues may be pragmatic depending on one's career but lack of attention doesn't resolve epistemological issues.
Rejecting the concept as incoherent, however, resolves those issues quite nicely. How can you generate knowledge about knowledge without having a definition for the subject matter and a presumed method of generation and evaluation already? You can't consider the questions without taking their answers for granted.

comment by Fly2 · 2008-06-17T18:18:00.000Z · LW(p) · GW(p)

HA, Ben Jones

I appreciate the compliment and your interest in my views, however, for now, I would rather read what others have to say on this topic.

comment by Ben_Hyink · 2008-06-17T21:56:00.000Z · LW(p) · GW(p)

Caledonian,

As I said, you can accomplish quite a lot without delving far into the subject, but writing it off may leave you with a less-than-optimal framing of reality - one that just might leave you vulnerable to reaching inaccurate conclusions about important topics, like whether to state "all perception is illusion" instead of qualifying the claim, before an eccentric who buys it draws conclusions from that premise which make him or her less inclined to try to model reality accurately or to act in ways that presume a lawful external world.

Of course we bring knowledge and skills to the problem that are obtained in part through the senses and stored in memory (explicit and implicit). Zen-like meditation would not allow you to analyze anything while you were doing it. Fortunately, a number of great historical thinkers have painstakingly analyzed what immediate subjective perception might tell us about the nature of our kind of reality (presuming we share the same relevant aspects of experience, which virtually every sane non-"sensible knave" claims they do), carefully developed theories about implications, ruthlessly critiqued and revised theories of predecessors, and eventually some of the forgotten or poorly interpreted work was dusted off and subjected to the tests available in a more contemporary time that coexisted with cognitive neuroscience (many implications of Kant's functionalist theories about basic aspects of the mind and world-access were exhumed and reinterpreted by non-antiquarian philosophers over the last few decades). Between the 1880s and 1980s some fatally flawed theories and theoretical frameworks (e.g. logical positivism) were developed by people whose only exposure to Kant may have been from antiquarians pushing his "transcendental idealism."


"How can you generate knowledge about knowledge without having a definition for the subject matter and a presumed method of generation and evaluation already? You can't consider the questions without taking their answers for granted."

Kant's basic epistemological question was "What can I know?" - that is, how can any judgment I make, including empirical claims, be warranted in the face of the deep skepticism of Hume, who had undermined the basis of Cartesian Rationalism and Leibniz's elaboration of it by offering a compelling argument not only that reason could not extend to metaphysical entities (which Kant later acknowledged, more or less) but that there was no empirical basis for knowledge, because we can only directly access the present rather than the past or future, and moreover the only things in immediate experience are the individual, transient sensory impressions of any given moment.

What was taken for granted by both Hume and Kant was that we seem to have warranted access to the world but Hume claimed we have absolutely no basis for any claims we make about the world and we just act out of custom and habit, and unsubstantiated beliefs - including beliefs that merely seem to have empirical support. Kant couldn't take for granted that he could offer a better justification for such warrant than Hume - in fact he said Hume's ideas awoke him from his dogmatic slumber.

Hume's non-physiological account for how we could gain immediate experiences of number did not acknowledge (1) that the sensations had to be encountered in a spatiotemporal way by us regardless of the actual physics of the matter, an aspect that is just an irreducible "given" or basic intuition or (2) that acts of judgment - however implemented - need to be performed on any transient sensory impressions to perceive them as we do in our sensory perceptions let alone attribute any meaning or temporal context (e.g. in music) to them because no such interpretations are inherent in transient sensation signals. Kant employed religious language in his book for ideas that can be accepted from a secular perspective, such as the world-access expression "transcendental synthesis" to describe the necessity of intellectual acts of judgment on sensory appearances to achieve perception.

I would need to write a great deal more to provide a clear and compelling case for the claims that follow, but for a single paragraph synopsis, here goes...

The crucial importance of some form of judgment in the conversion of raw sensation into the kinds of perceptions we continually seem to have, offered Kant leverage in his effort, because judgment is an intrinsic part of even transient perceptual experience rather than being something detached from it; therefore, the use of judgment was no less warranted than the use of the sensory appearances (including imagination), and they only functioned well in combination. He offered an argument that the nature of our perceptions regarding things we interact with in our environment, external to what is under the complete and direct control of our minds (e.g. imagination), supports the existence of "physical objects" meeting desired criteria as well as the larger context of a lawful physical universe. Moreover, to function effectively and achieve goals, our kinds of minds require interaction with an external physical world (taking this as a premise, Hegel offered additional arguments against solipsism based on the means by which humans learn from one another). While the types of abstract judgment he cites as being possible, based on the demonstrated ability of people to do them, may not be exhaustive or universal, they provided warrant for the kinds of theoretical work Kant and other intellectuals had done, as well as the practice of scientific inquiry.

- Are all the arguments in each case water-tight?
That's doubtful. Kant's idealism certainly was flawed.

- Are there fatal flaws?
I haven't noticed them in the arguments on which I focused, and the cases can be considered separately rather than as a completely interdependent system.

- Why bother with all this?
Aside from the modest utility of the functionalist phenomenological insights (and avoiding flawed models of our own minds), any epistemology that starts at a shallow level with "sense data" (sensory perception, w/o considering the functional judgment involved in transforming sensations into perceptions) remains open to the same lines of attack as those used by Hume and other skeptics. Kant's world-access realist work not only outlined the limits of human reason (the physical universe) but defended against skeptical attacks our warrant in claiming to be capable of gaining and possessing knowledge.

comment by Hopefully_Anonymous3 · 2008-06-17T22:47:00.000Z · LW(p) · GW(p)

"perhaps eventually finding no compelling reason not to dissolve increasingly artificial barriers between individual identities."

No thanks, Ben. I've got to wonder, why isn't it enough just to solve aging and minimize existential risk? If I were the administrator of a Turing test to see if you were a subjective conscious entity like me, this is the point where you'd fail.

comment by mitchell_porter2 · 2008-06-18T01:47:00.000Z · LW(p) · GW(p)

Caledonian, the science of physiology and evolution may have played a large role in the creation of your epistemology, but I don't doubt that you also personally thought about the issues, paid attention to your own thinking to see if you were making mistakes, and so forth. Anyway, there's no need to play the reflexive game of "you would have used introspection on your way to the conclusion that introspection can't be used", in order to combat the notion that introspection is completely unreliable. If it were completely unreliable you would never be accurate even when reporting your own opinions, except perhaps by chance.

You might be able to defend your position by saying that all partly reliable self-knowledge comes through sensory, quasi-sensory and proto-sensory modalities, and that it's only a specific sort of self-"perception" that is 100% unreliable.

comment by Caledonian2 · 2008-06-18T03:19:00.000Z · LW(p) · GW(p)

If it were completely unreliable you would never be accurate even when reporting your own opinions, except perhaps by chance.
I didn't say it was completely unreliable. I said it was completely useless.

As the saying goes, a stopped clock is right twice a day, while a working clock is unlikely to be accurate within the limits of measurement. However, the stopped clock is extraordinarily reliable - reliably useless, because it really tells us nothing at all about what time it is. The working clock may not either, but it could at least potentially be a useful guide.

comment by Ben_Jones · 2008-06-18T11:04:00.000Z · LW(p) · GW(p)

I don't see personal identity v. non-identity as a binary distinction but a fuzzy one.

Agreed, and a view I've espoused here in the past. My question was actually intended to demonstrate this.

We have to ask ourselves what is our strategy for getting around our horribly skewed lenses onto the world, and onto the mind. I just think that saying 'everything we can think is almost certainly wrong' is a bad start. Where do you go from there? What do you compare your pre-conscious sensory perceptual data to to make sure it's correct?

I don't want to have to train myself not to think, and only to measure. That would take all the fun out of it.

comment by Phillip_Huggan · 2008-06-18T20:39:00.000Z · LW(p) · GW(p)

"No, you have to be the ultimate source of your decisions. If anything else in your past, such as the initial condition of your brain, fully determined your decision, then clearly you did not."

Once again, a straw man. Free will might not exist but it won't be disproved by this reasoning. People that claim free will don't claim 100% free will; actions like willing your own birth. Free will proponents generally believe the basis for free will is choosing from among two or more symbolic brain representations. If the person read a book about the pain of being burned to death, in the few seconds between past contemplating self and present decisive self, then the straw man holds.
In the above example, if the fear of fire is instinctive, no free will. If it is attained through symbolic contemplation in the past of what one would do in such a circumstance or how one values neighbourhood civilian lives, or one's desire to be a hero or celebrity, then at least the potential for free will exists.
Once again, free will does not mean willing your own existence, it means choosing from brain symbols in a way that affects your future (if free will exists). I expect to post the exact same argument here on different threads repeatedly ad nauseam, that free will does not mean willing your own birth (or willing your own present or future, or willing the universe).
I'll ask again, don't tachyons induce feedbacks that destroy the EY concept of a "block MWI universe"?

comment by Cyan2 · 2008-06-18T21:10:00.000Z · LW(p) · GW(p)

Phillip Huggan, the Wikipedia article on tachyons answers your question. Extremely short version (granting tachyons exist): for a tachyon, there is no distinction between the processes of emission and absorption. The attempt to detect a tachyon from the future (and violate causality) would actually create the same tachyon and send it forward in time.

comment by mitchell_porter2 · 2008-06-19T01:58:00.000Z · LW(p) · GW(p)

Caledonian: "I didn't say it was completely unreliable. I said it was completely useless."

I'm surprised you didn't take my second option and moderate your position. Whether you are insisting that introspection is only ever accurate by coincidence, or just that whatever accuracy it possesses is of no practical utility, neither position bears much relationship to reality. The introspective modalities, however it is that they are best characterized, have the same quality - partial reliability - that you attributed to the external senses, and everyone uses them every day to get things done, and in doing so they are not just rolling dice.

Even the argument that a capacity for accurate self-representation has never been selected for is questionable, in a social primate which uses language to communicate and coordinate.

comment by Ben_Hyink · 2008-06-19T06:08:00.000Z · LW(p) · GW(p)

Fly,

You're right that if a portion of the brain or CNS had "awareness" or even reflective "consciousness" then the united apperceptive "subject of experience(/thought/action)" might be completely unaware of it. I think the connectionist philosophers Gerard O'Brien and Jon Opie have mentioned that possibility, though I don't think they suggested there was any reason to believe that to be the case. They have written some interesting papers speculating on the evolutionary development of awareness and consciousness. (Btw, Kant acknowledged that animals without self-awareness might still be expected to have similar apperceptive unity, they just lacked the ability to abstractly reflect on their experience or develop logical proofs to show their access to external reality could not be a /complete/ illusion, as traditional idealism claims to be possible and empirical science alone would not be able to refute).

Caledonian, others,

There seems to be a misunderstanding of the purpose (and utility) of reflecting on the awareness as consciously perceived. The purpose was to characterize subjective perception and develop /abstract requirements/ completely independent of implementation (e.g. the eye, visual processing pathways) in order:

(1) to provide a compelling argument that our perceptions must come from an external source
and
(2) to establish rational warrant in making any claims about the reality of our interaction with the world external to our minds (including one's own body) - though science can help us eliminate false interpretations and illusory aspects of our perceptual systems, as well as the brain areas and activities correlated with perception

Subjective self-report is commonly incorporated into scientific research because modern science recognizes it as an aspect of reality. Some basic elements of subjective experience are described identically by all people without brain damage causing apperceptive agnosia. Moreover, apperception and the unity of apperception can be falsified behaviorally by demonstrating capacities or the lack thereof.

People blind since birth who have regained sight through medical advances later in life have had difficulty making any sense of visual information, including shapes and objects. People with apperceptive agnosia cannot see more than one object at a time and function virtually as though they are blind.

Again, I didn't claim world-access realist arguments prove we aren't living in a simulation, or even that conditions might not radically change in the future (yes, the "eye does not see the eye" in the sense that we don't directly access the future or past as well as in the physiological sense I suspect was intended), but presuming the perceptual and cognitive functions we seem to experience performing routinely do exist - and we don't have compelling reasons to doubt that - then we do live in a universe with a reality external to our own minds (the effort to reach some absolute grounding for epistemological realism was a pipe dream). It is a pragmatic realist perspective from a world-access perspective and I think it is the deepest, most robust proof of realism we can hope for because it focuses on what conclusions we can draw merely from analyzing our form of conscious perceptual access to the world, prior to reliance on empirical tools accessed /through/ basic conscious functions.

I don't expect most scientists or engineers who take epistemological realism as a premise to find the arguments interesting or relevant to their needs, which they typically are not. However, their epistemological models of reality technically remain vulnerable to Hume's skeptical critiques, and some popular broad-brush claims like "perceptions are illusions" carry epistemological baggage that most scientists wouldn't accept if confronted by it in detail with fleshed-out arguments.

I won't bring up this topic again here. : )

comment by Caledonian2 · 2008-06-19T13:21:00.000Z · LW(p) · GW(p)
Whether you are insisting that introspection is only ever accurate by coincidence, or just that whatever accuracy it possesses is of no practical utility, neither position bears much relationship to reality.

[You are factually incorrect.] We've tried developing models of human psychology by relying on introspection - it was in fact the first approach taken in the modern age. Most researchers abandoned it soon after it became clear that our experience of our cognition did not in fact permit useful models to be generated. We've had more than a hundred and fifty years, and that approach has not been the least bit fruitful.

The introspective modalities, however it is that they are best characterized, have the same quality - partial reliability - that you attributed to the external senses

No, our senses are much more consistent, both in themselves and with the properties of the world that we interact with regularly, than our self-awareness.

and everyone uses them every day to get things done

[No.] I have no idea where you picked up the belief that our cognitive self-reflection is an important part of our decision-making, but it has very little to do with reality.

[Last warning before I start deleting posts again, Caledonian. Kick the habit. -- ESY.]

comment by Ben_Hyink · 2008-06-19T15:31:00.000Z · LW(p) · GW(p)

Clearly it's a waste of time to try to have a reasoned debate with someone not even willing to consider one's arguments, but rather intent on misrepresenting them as directed toward purposes they were never intended to serve (e.g. a fleshed-out psychology or comprehensive analysis of the perceptual system).

It's a shame you haven't read Hume's skeptical critiques of empirical claims of "fact," but as I said before, deep epistemology isn't of interest to everyone and isn't relevant to the vast majority of scientific claims that can be made.

Peace.

comment by Hopefully_Anonymous3 · 2008-06-22T10:39:00.000Z · LW(p) · GW(p)

Interesting old mindhacks article touches on some of these themes (how we arrive at certainties/decisions):

http://www.mindhacks.com/blog/2008/05/five_minutes_with_ro.html

comment by Mass_Driver · 2010-11-21T06:17:39.068Z · LW(p) · GW(p)

This sensation of freedom occurs when I believe that I can carry out, without interference, each of multiple actions, such that I do not yet know which of them I will take, but I am in the process of judging their consequences according to my emotions and morals.

This is a very good definition of free will, and it is way more sensible than claiming to be the "only and ultimate source" of one's own actions, but there is a notion in the Greek and Judaic traditions of being able to rise above one's fate that isn't quite captured by it.

To put this kind of 'transcendence' into your terminology, we might know all of the macro-level influences on a person -- parents, friends, media, genes, nutrition, etc. -- and still be surprised at what sort of person they turn out to be. Sometimes people appear to rise above (or sink below, or just act in strange ways that seem unrelated to) their circumstances.

If you (a) buy reductionism, and (b) have a complete model of the local universe with subatomic resolution, then we shouldn't be surprised to observe anything about human behavior, because the model would fully predict and account for the behavior.

Likewise, one might just think that, in practice, we never do know all of the influences on a person and then assemble them according to anything like a scientific model of the human personality, so that while we often think we are in a position to make a confident prediction about what person X will do next, we are simply mis-calibrated and overconfident.

I think it is an interesting hypothesis, though, to propose that even if we did have all of the macroscopic information about a human personality and a good set of psychological rules for assembling that data into coherent predictions, we would still be surprised sometimes by the behaviors we actually observed.

This hypothesis is interesting if and only if we can get such a dataset and such an algorithm well before we are capable of getting a complete model of the local universe with subatomic resolution. I think we probably can.

comment by lukeprog · 2011-07-18T05:13:07.289Z · LW(p) · GW(p)

Study from computational neuroscience on how the brain might use Bayes to think about possibility and couldness, here.

comment by Dojan · 2011-12-25T16:34:53.488Z · LW(p) · GW(p)

Imagine a ball rolling down a pipe. At one point the pipe forks, and at that point there is a simple mechanical device that sorts the balls according to size: all balls larger than 4 cm in diameter go left, all smaller ones go right. Let this be the definition of a "choice" (with the device as the agent) for the following argument, and let "you" define a certain arrangement of atoms in Eliezer's block-universe-with-glue. Then "you" will be what "decides" every time you make a choice; trivially so, given those definitions.

My question is, what other definitions could we use to reach a different conclusion?

I mean, if you define "free" in "free will" as "not governed by physics", or "you" as different from the (or some part of the) structure of your atoms, we are having a different debate here.
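(An illustrative sketch, not from the original comment: the fork-in-the-pipe sorter rendered as code. The 4 cm threshold comes from the comment above; the function name is invented.)

```python
def sort_ball(diameter_cm):
    """The 'agent' from the comment: balls larger than 4 cm go left, smaller go right."""
    return "left" if diameter_cm > 4 else "right"

# Under the comment's definitions, each call is the device making a "choice".
print(sort_ball(5.0))  # -> "left"
print(sort_ball(2.5))  # -> "right"
```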

comment by Dojan · 2011-12-25T16:41:13.615Z · LW(p) · GW(p)

Also, knowing that the book I'm reading is of a deterministic nature doesn't make me any less interested in knowing how it ends.

comment by ialdabaoth · 2014-07-05T07:43:01.521Z · LW(p) · GW(p)

Certainly I do not "lack free will" if that means I am in jail, or never uncertain of my future decisions, or in a brain-state where my emotions and morals fail to determine my actions in the usual way.

A question I would like advice from others on:

I frequently find myself in a brain-state where my emotions and morals fail to determine my actions in what most people call the 'usual way'. Essentially, at certain times I am "along for the ride", and have no capacity to influence my behavior until the ride has come to a full and complete stop. I assert that this is usually triggered by external stimulus, but I acknowledge, dear reader, that you have no reason to accept that excuse.

What is the correct thing to do in these situations, given that there is no possibility to choose what to do in these situations - and given that others have no reason to accept "sorry about my freak-out, I have PTSD" as anything but a craven attempt to turn bad behavior into a ploy for sympathy?

Replies from: Sophronius
comment by Sophronius · 2014-07-05T13:39:18.479Z · LW(p) · GW(p)

Yes, being in a brain-state like that can be really surreal, but it does happen. Heck, I've been in brain-states where I literally couldn't parse sentences and still have people be offended because I'm 'not listening'. Saying "look I know I seem conscious but my brain literally does not work right now, please come back later" never helps.

The correct solution, of course, is to anticipate this happening in advance and choose your actions so as to get the desired outcome while taking into account that you may lose control. People are also MUCH more likely to listen if you warn them in advance, because then it doesn't seem so much like an excuse. People are also much more likely to be reasonable when they're not in a group.

I will, however, issue a very important warning: The stance you take on this issue may strongly influence how often it happens. If you believe "I just don't have the willpower to do X", you will have less willpower available and it becomes a self-fulfilling prophecy. I think this goes a long way to explain WHY "I couldn't control myself!" isn't accepted as an excuse, generally speaking. (the rest is lack of imagination/refusal to acknowledge that people can be different). I think one of the greatest flaws on Less Wrong currently is that people do not sufficiently acknowledge that your beliefs directly influence how well your brain performs.

comment by [deleted] · 2015-03-29T17:21:52.004Z · LW(p) · GW(p)

So I was kind of disappointed when I read about "the solution to free will" in this article, since I already seemed to have figured out the answer! This I did during the first minute I tried to come up with something, as urged to in dissolving the question.

What I came up with was this: What I perceive as "me" is every component that is cooperating inside me to create this thought process. My choices are the products of everything that has ever happened to me. I am the result of what has happened before these self-reflections of mine took place.

After reading in this sequence, it really seems to me as if the answer to the questions "what is choice" and "what is will" is determinism. Or causality + randomness, whatever randomness is. When I read dissolving the question I noted this sentence: Your assignment is not to argue that free will is compatible with determinism, or not. This gave me the impression that the solution didn't care about whether the universe is deterministic or not. And now it seems to me that a deterministic universe was the answer all along. Have I missed something essential?

This argument will illustrate why I feel like not having resolved the question...

If someone blames me for killing someone I can say: Hey, I can't help what happened before I was born. This was bound to happen, and even if it wasn't, there was nothing anyone could do about it anyway, since the universe made this situation up. I can't go back in time and alter every factor that led up to this. If the parts that are "me" could and should have stopped this from happening, I would not have killed him.

Now this is an old argument for why I have a destiny and am free from responsibility, if my will is determined by factors that I can't override.

Replies from: None
comment by [deleted] · 2015-03-29T23:00:04.627Z · LW(p) · GW(p)

Your assignment is not to argue that free will is compatible with determinism, or not. This gave me the impression that the solution didn't care about whether the universe is deterministic or not.

I believe the point of this meditation was more of getting to "I think I have free will because I have this planning algorithm running inside me and it feels like there are multiple choices which are reachable from where I currently am".

The determinism of the universe is necessary for the whole explanation, but it doesn't particularly pertain to why you feel you have free will in the first place.

This argument will illustrate why I feel like not having resolved the question...

It seems like you still have the idea that you should be able to influence the universe from outside the laws of physics, if you had free will. Like there being two distinct causes for the outcomes: You and The Rest of The Universe. Working together to bring about the future.

But since you are within physics, the fact that physics uniquely determined the outcome does not mean that you yourself had no influence on the outcome.

Or maybe this link on compatibilism offers you a better explanation.

Replies from: None
comment by [deleted] · 2015-03-29T23:16:18.796Z · LW(p) · GW(p)

This didn't help me though, but thanks for trying. I understand that I myself am a cause and that my choices are the effects of that cause. And I also understand the argument that "myself" is something created of many things outside of myself, in harmony with the laws of physics. My argument for destiny still stands though. If determinism is how you explain choice and will (free will is just a pointless word, I think), I understand that. But then you have to agree that my argument is valid, right?

This was bound to happen, and even if it wasn't, there was nothing anyone could do about it anyway, since the universe made this situation up.

Meaning that what anyone actually did about something was predetermined by many factors that we don't know all of.

comment by Aditya (aditya-prasad) · 2020-01-22T21:03:22.663Z · LW(p) · GW(p)

So in this interpretation of the word "free will", even AI would have the same free will humans have?

Am I correct in thinking that I am not the computing machine but the computation itself? If it was possible to predict my behaviour, they would have to simulate an approximation of me within themselves or within the computer?

I am interested in what implications this has for how hard or easy it is to manipulate other humans. Increasingly, with companies gaining access to a lot of data and computing power, can they start to manipulate people at very fine levels?

Until today we had mass media marketing, where they sought to influence a wide demographic, but now it is possible to do the same on an individual level. So it's scary to think that my desires for items are being implanted in me by commercials, that my vote is a result of them figuring out what scares me, etc.