Would Your Real Preferences Please Stand Up?

post by Scott Alexander (Yvain) · 2009-08-08T22:57:09.266Z · LW · GW · Legacy · 132 comments


Related to: Cynicism in Ev Psych and Econ

In Finding the Source, a commenter says:

I have begun wondering whether claiming to be victim of 'akrasia' might just be a way of admitting that your real preferences, as revealed in your actions, don't match the preferences you want to signal (believing what you want to signal, even if untrue, makes the signals more effective).

I think I've seen Robin put forth something like this argument [EDIT: Something related, but very different], and TGGP points out that Bryan Caplan explicitly believes pretty much the same thing [1]:

I've previously argued that much - perhaps most - talk about "self-control" problems reflects social desirability bias rather than genuine inner conflict.

Part of the reason why people who spend a lot of time and money on socially disapproved behaviors say they "want to change" is that that's what they're supposed to say.

Think of it this way: A guy loses his wife and kids because he's a drunk. Suppose he sincerely prefers alcohol to his wife and kids. He still probably won't admit it, because people judge a sinner even more harshly if he is unrepentant. The drunk who says "I was such a fool!" gets some pity; the drunk who says "I like Jack Daniels better than my wife and kids" gets horrified looks. And either way, he can keep drinking.

I'll call this the Cynic's Theory of Akrasia, as opposed to the Naive Theory. I used to think it was plausible. Now that I think about it a little more, I find it meaningless. Here's what changed my mind.

What part of the mind, exactly, prefers a socially unacceptable activity (like drinking whiskey or browsing Reddit) to an acceptable activity (like having a wife and kids, or studying)? The conscious mind? As Bill said in his comment, it doesn't seem like it works this way. I've had akrasia myself, and I never consciously think "Wow, I really like browsing Reddit...but I'll trick everyone else into thinking I'd rather be studying so I get more respect. Ha ha! The fools will never see it coming!"

No, my conscious mind fully believes that I would rather be studying [2]. And this even gets reflected in my actions. I've tried anti-procrastination techniques, both successfully and unsuccessfully, without ever telling them to another living soul. People trying to diet don't take out the cupcakes as soon as no one else is looking (or, if they do, they feel guilty about it).

This is as it should be. It is a classic finding in evolutionary psychology: the person who wants to fool others begins by fooling themselves. Some people even call the conscious mind the "public relations officer" of the brain, and argue that its entire point is to sit around and get fooled by everything we want to signal. As Bill said, "believing the signals, even if untrue, makes the signals more effective."

Now we have enough information to see why the Cynic's Theory is equivalent to the Naive Theory.

The Naive Theory says that you really want to stop drinking, but some force from your unconscious mind is hijacking your actions. The Cynic's Theory says that you really want to keep drinking, but your conscious mind is hijacking your thoughts and making you think otherwise.

In both cases, the conscious mind determines the signal and the unconscious mind determines the action. The only difference is which preference we define as "real" and worthy of sympathy. In the Naive Theory, we sympathize with the conscious mind, and the problem is the unconscious mind keeps committing contradictory actions. In the Cynic's Theory, we sympathize with the unconscious mind, and the problem is the conscious mind keeps sending out contradictory signals. The Naive theorist says: find some way to make the unconscious mind stop hijacking actions! The Cynic says: find some way to make the conscious mind stop sending false signals!

So why prefer one theory over the other? Well, I'm not surprised that it's mostly economists who support the Cynic's Theory. Economists are understandably interested in revealed preferences [3], because revealed preferences are revealed by economic transactions and are the ones that determine the economy. It's perfectly reasonable for an economist to care only about those and dismiss any other kind of preference as a red herring that has to be removed before economic calculations can be done. Someone like a philosopher, who is more interested in thought and the mind, might be more susceptible to the identify-with-conscious-thought Naive Theory.

But notice how the theory you choose also has serious political implications [4]. Consider how each of the two ways of looking at the problem would treat this example:

A wealthy liberal is a member of many environmental organizations, and wants taxes to go up to pay for better conservation programs. However, she can't bring herself to give up her gas-guzzling SUV, and is usually too lazy to sort all her trash for recycling.

I myself throw my support squarely behind the Naive Theory. Conscious minds are potentially rational [5], informed by morality, and qualia-laden. Unconscious minds aren't, so who cares what they think?

 

Footnotes:

1: Caplan says that the lack of interest in Stickk offers support for the Cynic's Theory, but I don't see why it should, unless we believe the mental balance of power should be different when deciding whether to use Stickk than when deciding whether to do anything else.

Caplan also suggests in another article that he has never experienced procrastination as akrasia. Although I find this surprising, I don't find it absolutely impossible to believe. His mind may either be exceptionally well-integrated, or it may send signals differently. It seems within the range of normal human mental variation.

2: Of course, I could be lying here, to signal to you that I have socially acceptable beliefs. I suppose I can only make my point if you often have the same experience, or if you've caught someone else fighting akrasia when they didn't know you were there.

3: Even the term "revealed preferences" imports this value system, as if the act of buying something is a revelation that drives away the mist of the false consciously believed preferences.

4: For a real-world example of a politically-charged conflict surrounding the question of whether we should judge on conscious or unconscious beliefs, see Robin's post Redistribution Isn't About Sympathy and my reply.

5: Differences between the conscious and unconscious mind should usually correspond to differences between the goals of a person and the "goals" of the genome, or else between subgoals important today and subgoals important in the EEA.

132 comments

Comments sorted by top scores.

comment by CronoDAS · 2009-08-09T19:58:30.319Z · LW(p) · GW(p)

When I procrastinate over a task, it's usually because I'm in a situation like this:

1) I want something to have been done and 2) I don't want to experience doing it.

To use the classic example, I want to have done my homework but I don't want to be doing my homework.

Replies from: SilasBarta
comment by SilasBarta · 2009-08-09T20:59:09.998Z · LW(p) · GW(p)

Well, it's not so mysterious when you put it that way :-(

Replies from: CronoDAS
comment by CronoDAS · 2009-08-09T21:24:09.348Z · LW(p) · GW(p)

Consider the case of a hungry rat that sees food on the other side of an electrified floor. The rat wants to minimize its discomfort. It wants to not get shocked, and also wants not to be hungry.

A moderately stupid rat will compare the pain of its current hunger to the pain of crossing the floor. When its pain from hunger becomes as strong as the pain of crossing the floor, it'll decide to cross the floor.

A smarter rat will realize that it'll have to cross the floor eventually, and so will minimize its total pain by crossing immediately, so its hunger doesn't have a chance to build to a painful level.

A really stupid rat will notice that, when it steps onto the electrified floor, its current pain equals the sum of its pain from hunger and the pain from the shock. As this total is always greater than the pain from hunger alone, it'll never step on the electrified floor and it will starve to death.

When it comes to homework, my decision-making algorithm seems to act like the first rat...
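
A minimal Python sketch of the three strategies (the linear hunger curve and all pain numbers are made-up assumptions, just to make the comparison concrete):

```python
# Toy model of the three rats' decision rules. The pain units and the
# linear hunger curve are illustrative assumptions, not from the comment.

SHOCK_PAIN = 10  # one-time pain of crossing the electrified floor

def hunger_pain(t):
    """Hunger pain after waiting t time steps (grows linearly)."""
    return t

def moderately_stupid_rat():
    """Waits until hunger hurts as much as the shock, then crosses."""
    t = 0
    while hunger_pain(t) < SHOCK_PAIN:
        t += 1
    # Total pain: all the hunger endured while stalling, plus the shock.
    return sum(hunger_pain(i) for i in range(t)) + SHOCK_PAIN

def smart_rat():
    """Crosses immediately, so hunger never builds; only the shock counts."""
    return SHOCK_PAIN

def really_stupid_rat():
    """On the floor, pain = hunger + shock, which always exceeds hunger
    alone, so it never crosses -- and starves (unbounded pain)."""
    return float("inf")

print(moderately_stupid_rat())  # 55: hunger endured while stalling + shock
print(smart_rat())              # 10: just the unavoidable shock
print(really_stupid_rat())      # inf
```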

Replies from: nazgulnarsil, taryneast
comment by nazgulnarsil · 2009-08-10T07:05:08.355Z · LW(p) · GW(p)

The first example reminds me of Caplan and Kyklos' explanation of poor people and crime.

http://econlog.econlib.org/archives/2007/06/why_do_the_poor.html

comment by taryneast · 2011-07-26T16:17:16.805Z · LW(p) · GW(p)

Hmmm good point. When I first began at university, my homework-reflex acted like rat#1. Eventually I trained myself to think like rat#2 in many cases - that improved things a lot.

I still battle the rat#1 tendencies and try to force myself to realise that the rat#2 strategy is actually optimal.

It does seem like my natural instinct is to feel like rat#1, and I have to consciously override to act like rat#2 - usually by forward-planning when I'm feeling stronger (well rested, well fed, etc.).

comment by Jess_Riedel · 2009-08-10T16:03:14.068Z · LW(p) · GW(p)

This is one place where Caplan seems to go off the deep end. I think it illustrates what happens if you take the Cynic's view to the logical conclusion. In his "gun to the head" analogy, Caplan suggests that OCD isn't really a disease! After all, if we put a gun to the head of someone doing (say) repetitive hand washing, we could convince them to stop. Instead, Caplan thinks it's better to just say that the person just really likes doing those repetitive behaviors.

As one commenter points out, this is equivalent to saying a person with a broken foot isn't really injured because they could walk up a flight of stairs if we put a gun to their head. They just prefer to not walk up the stairs.

It is an incredibly simplistic technique to reduce the brain to a single, unified organ, and determine the "true" desires by revealed preferences. Minds are much more complex and conflicted than that. Whatever people mean by "myself", it is surely not just the combined output of their brain.

Replies from: SilasBarta
comment by SilasBarta · 2009-08-10T22:05:35.935Z · LW(p) · GW(p)

I agree with your point here -- strongly. But I also think you're being unfair to Caplan. While his position is (I now realize) ridiculous, the example you gave is not.

In his "gun to the head" analogy, Caplan suggests that OCD isn't really a disease! After all, if we put a gun to the head of someone doing (say) repetitive hand washing, we could convince them to stop. Instead, Caplan thinks it's better to just say that the person just really likes doing those repetitive behaviors.

His position would not be that they like doing those behaviors per se, but rather, they have a very strange preference that makes those behaviors seem optimal. Caplan would probably call it "a preference for an unusually high level of certainty about something". For example, someone with OCD needs to perceive 1 million:1 odds that their hands are now clean, while normal people need only 100:1 odds.

So the preference is for cleanliness-certainty, not the act of hand-washing. To get that higher level of certainty requires that they wash their hands much more often.

Likewise, an OCD victim who has to lock their door 10 times before leaving has an unusually high preference for "certainty that the door is locked", not for locking doors.

Again, I don't agree with this position, but its handling of OCD isn't that stupid.

Replies from: Aurini, Psychohistorian, Jess_Riedel, pjeby
comment by Aurini · 2009-08-13T21:10:57.561Z · LW(p) · GW(p)

I used to have a mild case of OCD.

Let's say I cracked my first knuckle. Well, of course I'm going to crack the other three to balance things out. But then I mess up - my ring finger is only 70% cracked. I can feel it in the joint, a sort of localized anxiety (sort of like an itch, or a joint that needs stretching, but it's a purely psychological irritation). Obviously I can't 30% crack my knuckle - that's no different than moving the finger. So I have to over crack it, up to 130%, and then follow up with the other three fingers.

But now I've hit a problem - I've cracked each finger twice, that's not a good number. Things feel worse than they did before the crack. I'd better square things out, so that each finger has been cracked four times - that's a good number. But now my right hand is bothering me, so to even it out I crack each finger there four times. And... oh, what the hell. We'll crack each finger sixteen times. That's 2^8 * 2^4 - gorgeous. I mean, just look at that notation! How much prettier could you want it to be?

Everything's fine until next time I need to crack something... shudder

I eventually forced myself to stop doing this during my last years of High School. It was ridiculous, and I knew it, but each time I encountered it I'd have that itch. But I forced myself to ignore it, the same way you can't scratch your nose while you're on stage, and eventually I broke the habit. But even thinking about it now makes me want to do something that's exponentially symmetrical...

NO! NO, I WON'T GIVE IN TO YOU, OCD!

;)

comment by Psychohistorian · 2009-08-10T22:13:05.249Z · LW(p) · GW(p)

Why is it specifically locking up ten times (it would actually need to be an odd number)? Someone with OCD will do things in, say, multiples of three. Under the certainty view, 14 times would be better than 12, but someone who needs to work in multiples of three would not find it preferable.

Replies from: SilasBarta
comment by SilasBarta · 2009-08-10T22:55:54.767Z · LW(p) · GW(p)

Because their utility function (as believed by Caplan) is not strictly increasing in the number of times you do it. Utility goes up as you approach the desired level of certainty (such as 1,000,000:1 odds), then you hit diminishing returns.

Just like how you like clean hands, but will be okay with only being 99% sure they're clean enough.
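
A minimal sketch of the kind of utility function being described, under assumed numbers (the halving-of-doubt model and the thresholds are illustrative, not Caplan's own math):

```python
# Toy certainty-based utility: rises steeply until a personal certainty
# threshold is reached, then hits diminishing returns. All numbers here
# are illustrative assumptions.

def certainty(n_washes, prior_doubt=0.5):
    """Subjective probability the hands are clean after n washes,
    assuming each wash halves the remaining doubt."""
    return 1.0 - prior_doubt * 0.5 ** n_washes

def utility(n_washes, desired_certainty):
    """Tracks certainty up to the desired level, then nearly flattens."""
    c = certainty(n_washes)
    return min(c, desired_certainty) + 0.001 * max(0.0, c - desired_certainty)

# A typical person (99% is enough) plateaus after a few washes; someone
# demanding 99.99999% keeps gaining utility from many more repetitions.
for n in (1, 5, 10, 25):
    print(n, utility(n, 0.99), utility(n, 0.9999999))
```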

Replies from: Psychohistorian
comment by Psychohistorian · 2009-08-11T05:35:59.811Z · LW(p) · GW(p)

As I understand it, a person (or at least some people) with OCD will need to, say, lock a door precisely nine times. If he somehow locked it 11 times, he'd be very distressed. It's like I'm happy when I'm 99.9% sure my hands are clean, but miserable when I'm 99.99% or 99% sure they're clean. It doesn't make sense. That's not a preference for cleaner hands, or more locked doors. It's, gasp, crazy.

Not to mention some of this behaviour is binary, like locking doors or turning on lights. No matter how many times I flip a switch or turn a lock, if it's odd, it'll be locked/on, and if it's even, it'll be unlocked/off. I just don't think most OCD behaviour actually follows patterns that "additional certainty" would predict.

Replies from: tim, SilasBarta
comment by tim · 2009-08-12T13:11:19.006Z · LW(p) · GW(p)

I have some experience with OCD, and I think a good way of defining it would be: people with OCD repeat their compulsive rituals as a form of negative reinforcement. When a ritual is interrupted or unable to be completed in some way, the person will usually suffer a tremendous amount of anxiety. This anxiety is relieved upon completion of the ritual, making it a strong source of negative reinforcement and causing that person to repeat it in the future. While the initial *basis* of the ritual is "crazy" or irrational (obviously locking a door nine times serves no practical purpose in and of itself), the *use* of the ritual is not - it removes or prevents anxiety.

comment by SilasBarta · 2009-08-11T06:00:57.482Z · LW(p) · GW(p)

Hey, take it up with pjeby if you think you understand the issue better, it's well beyond my pay grade at this point. He linked to a peer-reviewed paper substantiating the certainty thesis.

You should at least consider the possibility that OCDers really do just need to be more sure, and the number-based rituals are simply the result of them having noticed that that number comforted them in the past, and then cargo-cultishly inferring that the number is somehow special.

comment by Jess_Riedel · 2009-08-11T06:03:37.199Z · LW(p) · GW(p)

I still think that Caplan's position is dumb. It's not so much a question of whether his explanation fits the data (although I think Psychohistorian has shown that in this case it does not), it's that it's just plain weird to characterize the obsessive behavior done by people with OCD as a "preference". I mean, suppose that you were able to modify the explanation you've offered (that OCD people just have high preferences for certainty) in a way that escapes Psychohistorian's criticism. Suppose, for instance, you simply say "OCD people just have a strong desire for things happening a prime number of times". This would still be silly! OCD people clearly have a minor defect in their brains, and redefining "preference" won't change this.

Ultimately, this might just be a matter of semantics. Caplan may be using "preference" to mean "a contrived utility function which happens to fit the data", which can always be done so long as the behavior isn't contradictory. But this really isn't helpful. After all, I can say that the willow's "preference" is to lean in the direction of the wind and this will correctly describe the willow's behavior. But calling it a preference is silly.

Thanks for the comment. This discussion has helped to clarify my thinking.

comment by pjeby · 2009-08-10T22:08:33.462Z · LW(p) · GW(p)

I don't agree with this position

Why not?

Replies from: SilasBarta
comment by SilasBarta · 2009-08-10T22:58:44.885Z · LW(p) · GW(p)

Because I think people with OCD do have, contra Caplan, a compulsion to do those specific acts, not a compulsion to be 99.99999% sure of certain things. Wanting that much certainty in such a narrow area is a very unlikely state, and if it were just about certainty, they would come up with different ways to achieve that certainty, not just do the same thing over and over.

Replies from: Shae, Alicorn, pjeby
comment by Shae · 2009-08-11T12:35:19.346Z · LW(p) · GW(p)

"Because I think people with OCD do have, contra Caplan, a compulsion to do those specific acts, not a compulsion to be 99.99999% sure of certain things. "

Person with OCD here, reporting late to the party (I'm always behind in my reading).

SilasBarta, you are correct.

It must be remembered that sometimes what OCD people do is not check the lock nine times, but touch the red dish every time we go out the back door. Sometimes we have a nagging doubt that our mom will die if we don't (magical thinking). This isn't to be read as a preference for being more sure that mom won't die, since we know damn well that if she does it won't be because we didn't touch the dish. It's, as someone said, crazy.

The compulsion and the attempt to satisfy it are uncomfortable.

Replies from: pjeby
comment by pjeby · 2009-08-12T17:50:11.614Z · LW(p) · GW(p)

Sometimes we have a nagging doubt that our mom will die if we don't (magical thinking). This isn't to be read as a preference for being more sure that mom won't die, since we know damn well that if she does it won't be because we didn't touch the dish.

What you're describing isn't an OCD symptom; it's just a garden-variety irrational belief. The fact that you "know" something is false doesn't stop you from behaving as if it's true -- see the previous examples here about haunted houses and serial killers.

(To be clear: I don't mean the entire combination of behaviors isn't OCD; I just mean the part where you act on a belief you "know" to be untrue. That part, everybody has.)

comment by Alicorn · 2009-08-10T23:10:40.721Z · LW(p) · GW(p)

Has anybody ever tried installing a little camera above a stove with a live feed (checkable by cell phone) to see if that helped people panicking about having left it on?

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2009-08-11T00:30:36.841Z · LW(p) · GW(p)

I sometimes control my OCD by crossing my fingers in a certain very odd pattern whenever I do something (like locking a door). Then, as soon as I come to a notebook or a computer with my fingers still crossed, I write down "I locked the door at 5:27 PM, August 10, 2009". When I've done this, I can just look at the paper and my compulsion to check whether or not the door is really locked mostly goes away.

I used to try the same thing without the finger crossing, and I found that I was always able to believe there was just the tiniest chance that I might have formed the false memory of having locked the door between locking it and reaching the notepad. Because I don't cross my fingers except while in the act of locking the door and I don't write a note unless my fingers are crossed, I can dispel that last nagging doubt. From a rational point of view it's not very sensible, but it seems to work okay.

Replies from: taryneast, christopherj
comment by taryneast · 2011-07-26T16:08:24.883Z · LW(p) · GW(p)

Yes - it's definitely good to have a reliable source of information about these things. Like having the days of the week written on certain everyday medications. You can just go look and know whether or not you've taken today's dose. On the occasions where I've had to take medicine that doesn't have these (e.g. a course of antibiotics) I've had that "nagging sense" too: not knowing means you could either skip a dose or double-dose. So writing the days of the week on the plastic next to each pill helps with that. Having something that you can check easily and that you trust helps reduce the anxiety a lot.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-07-26T19:12:59.381Z · LW(p) · GW(p)

(nods) Absolutely.

This was particularly a problem for me after my stroke, because the brain damage made my memory unusually unreliable. Eventually I put a sheet of paper up by my pills and checked off each day after I took them. (Actually, on bad days, I would sometimes lose track between the first bottle and the third of which pills I'd already taken, so I established the habit of moving each one from left to right after I took it.)

Replies from: None
comment by [deleted] · 2014-06-13T06:11:33.994Z · LW(p) · GW(p)

Repeated checking CAUSES memory distrust.

In obsessive-compulsive disorder (OCD) checkers distrust in memory persists despite extensive checking.

Source: http://www.ncbi.nlm.nih.gov/pubmed/12600401

comment by christopherj · 2014-01-25T06:41:13.925Z · LW(p) · GW(p)

...I was always able to believe there was just the tiniest chance that I might have formed the false memory... ... From a rational point of view it's not very sensible, but it seems to work okay.

A healthy skepticism about the reliability of memory is actually very rational (the subject of the worry is a different story). I'm rather good at remembering facts, but I'm positively terrible at remembering things like "when I leave I must take this item that I usually don't". I've tried constantly reminding myself, worrying that I'll forget, and then I occasionally forget anyways. But this sort of memory is very easy to outsource, e.g. I can place my keys, shoes, or glasses on the item, and then there's no way I'm forgetting it when I leave, nor do I have to worry. I've occasionally left the basement lights on (kick the door closed while carrying a basket of laundry, and it wouldn't be noticed until the next visit to the basement or from outside at night); now I never close the basement door until I turn off the lights. My street has alternate-side parking, and I set an alarm so I won't forget.

If I worried excessively about the state of my door, I'd keep a binary toggle on my keychain, and toggle it when I locked/unlocked the door. Dunno if that would work for OCD, I'm only absent-minded. But locking the door would be a habit with a reliable trigger. I can fight any specific case of absentmindedness easily with a habit and reliable trigger.

comment by pjeby · 2009-08-11T02:09:19.464Z · LW(p) · GW(p)

Unless you have OCD yourself, I'm not sure your opinion on that counts. You should probably ask Yvain... who appears to indeed be controlling certainty, not being compelled to engage in specific behaviors.

Or of course, you could always consult some of the research...

Replies from: SilasBarta
comment by SilasBarta · 2009-08-11T03:38:08.402Z · LW(p) · GW(p)

Okay, fair point. Still, you gotta give me props for noticing the right hypothesis [1] based on so little data -- that's half the battle, remember! -- even if I did subsequently reject it because OCDers are so predictably bad at identifying the regularity behind their compulsions.

Arguably, I have a low-grade form of OCD myself. I always have to check the back end of my car when I park it in my garage to make sure the garage door won't close on it, even though I've used the same garage and roughly similar cars for the last four years, and have a wooden block to mark when it's in far enough.

But unlike cargo cult OCDers, I don't find some kind of magic number that satisfies me. Sometimes I just say "to hell with it" and don't check. Sometimes I go back twice to check. Usually, just once. But I always recognize that I'm doing it to make sure my car is in far enough, and so I can identify ways of making myself not "have to" check, if I ever thought it was worth the effort, or found my ritual too bizarre. I can put a mirror or camera in, for example.

In fact, the reason I never considered my habit OCD until now was because it isn't accompanied by a hard-headed focus on a specific act, as opposed to a specific level of certainty.

[1] in a devil's advocate defense of Caplan

comment by pjeby · 2009-08-09T02:36:22.791Z · LW(p) · GW(p)

In both cases, the conscious mind determines the signal and the unconscious mind determines the action.

Look more closely. All preferences are equal, in the sense of being within the same system -- and this includes signaling preferences. The drunk prefers to drink and prefers to not be thought of as preferring that. But these are not concepts of a different nature; they can both be expressed within the same behavioral preference system.

IOW, both the Cynical and Naive theories are wrong; we only have one set of preferences, it just sometimes works out that the "best" compromise (in the sense of being an approach that your brain can discover through trial and error) is to say one thing and do another. But both the saying and doing are behaviors of the same type; "conscious" vs. "unconscious" is a red herring here.

Now, if you want to say that you don't consciously identify with some subset of your choices or preferences, that's fine, but it's not useful to claim that this is the result of some schism in your being. It's all you, you just aren't being conscious of that part of "you" at the moment.

The "unconscious mind" isn't a real entity; it's not a "mind", in the anthropomorphic sense. It's just whatever you're not paying attention to right now, that keeps on going. If you pay attention to your breathing or your heart rate you can learn to control them. If you pay attention to your feet you'll know where they are right now. And if you pay attention to what you actually get from your "akrasic" behavior, you'll realize it's something you genuinely want.

What you haven't been doing, is negotiating fairly among all your "interests" (to use Ainslie's jargon), or cleanly prioritizing your "controlled variables" (to use Powers's).

Replies from: MichaelVassar, Yvain, PhilGoetz
comment by MichaelVassar · 2009-08-09T04:35:42.867Z · LW(p) · GW(p)

This is the clearest statement of your philosophy that I have seen yet, PJ, and I HEARTILY agree with what I see here.

comment by Scott Alexander (Yvain) · 2009-08-10T03:51:05.044Z · LW(p) · GW(p)

The "unconscious mind" isn't a real entity; it's not a "mind", in the anthropomorphic sense. It's just whatever you're not paying attention to right now, that keeps on going. If you pay attention to your breathing or your heart rate you can learn to control them. If you pay attention to your feet you'll know where they are right now. And if you pay attention to what you actually get from your "akrasic" behavior, you'll realize it's something you genuinely want.

Patri Friedman once wrote something about "wanting" versus "wanting to want". I agree that everything you do, you genuinely want to do, in the sense that you're not doing it under duress. But not everything you do is something you want to want to do.

Likewise, if I imagine myself suddenly getting infinite willpower, there are certain things I would do more and other things I would do less.

I'm using the word "conscious" to refer to things I want to want and things I would do more with infinite willpower. I'm using the word "unconscious" to refer to things I don't want to want and things I would do less with infinite willpower. I don't think it's too controversial that those are two different categories.

Replies from: pjeby, Jess_Riedel
comment by pjeby · 2009-08-10T11:39:30.426Z · LW(p) · GW(p)

I'm using the word "conscious" to refer to things I want to want and things I would do more with infinite willpower. I'm using the word "unconscious" to refer to things I don't want to want and things I would do less with infinite willpower. I don't think it's too controversial that those are two different categories.

But they're not natural categories. The problem is that "consciousness" tends to focus on behaviors rather than the goals of those behaviors... as will be obvious to you if you've ever been a programmer trying to get people to give you actual requirements instead of just feature specifications. ;-)

So, it can be quite factually the case that you want not to do certain things, while also wanting (implicitly) some part of the result of those actions.

The problem is that protesting you don't want the action is not helpful. Our preferences are most visible in the breach, because consciousness is effectively an error handler. So your attention is drawn to the errors caused by the behavior, rather than to the goal of the behavior. Your brain wants you to just fix the error, and leave the working part of the system (from its point of view) alone.

But in order to fix the errors intelligently, you need to understand a bigger part of the system than just the location where the error is occurring. Specifically, you need to understand the requirements that are actually being met by the behavior, so that you can find other ways to implement those requirements.

What's more, I can guarantee you that when you find out those requirements, they will ultimately be something that you either do want, or did want at some time in the past, even if on reflection they are no longer relevant. Calling them a product of the unconscious mind is a factual error, as well as misleading: it implies they came out of nowhere and there's nothing you can do about them, when in actual fact they are (part of) your true preferences, and you can choose to pay attention and find out what they are, as well as searching for better ways to get them met.

comment by Jess_Riedel · 2009-08-10T16:06:22.187Z · LW(p) · GW(p)

I agree that everything you do, you genuinely want to do, in the sense that you're not doing it under duress.

I really think this is a bad way to think about it. Please see my comment elsewhere on this page.

EDIT: Unless of course you just define "genuinely wanting to do something" as anything one does while not under duress. But in that case, what counts as duress?

comment by PhilGoetz · 2009-08-09T15:04:10.677Z · LW(p) · GW(p)

IOW, both the Cynical and Naive theories are wrong; we only have one set of preferences, it just sometimes works out that the "best" compromise (in the sense of being an approach that your brain can discover through trial and error) is to say one thing and do another. But both the saying and doing are behaviors of the same type; "conscious" vs. "unconscious" is a red herring here.

You're agreeing with Yvain, not disagreeing with him. He is saying that the cynical and naive theories are different ways of looking at the same thing; and so are you. The only difference is that he isn't suggesting we jettison the ideas of "conscious" and "unconscious", and you are.

Generally, if you decide to declare that a principle embraced by several generations of scientists is rot, you should provide some evidence.

Replies from: pjeby
comment by pjeby · 2009-08-10T03:16:42.710Z · LW(p) · GW(p)

The only difference is that he isn't suggesting we jettison the ideas of "conscious" and "unconscious", and you are. Generally, if you decide to declare that a principle embraced by several generations of scientists is rot, you should provide some evidence.

Generally, if you decide to pick a point of debate like that, you should try doing some reading first. May I suggest http://en.wikipedia.org/wiki/Unconscious_mind for starters? In particular, see the sections on "Controversy" and the notes that mention that the current parlance for other-than-conscious mental processes is usually "non-conscious" rather than "unconscious"... precisely because cognitive scientists have found the now-cultural notion of an "unconscious mind" to be misleading about the nature of our non-conscious processes.

And rationalists in particular should be wary of the phrase, because it basically amounts to a stop-sign for actually thinking or investigating anything. I'm reminded of an exchange between a student and Richard Bandler, where the student (a psychiatrist, I believe) asked if something that the demonstration subject was doing was "turning it over to an unconscious process", and Bandler replied with something to the effect of, "That's not actually an explanation, you know. Everything is unconscious until you pay attention to it."

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2009-08-10T04:43:22.297Z · LW(p) · GW(p)

But there are some things on which the attention can easily be focused at will (like the name of your second-grade teacher, when you're not thinking about it) and other things upon which the attention can never be focused, or only with great training (like the regulation of body temperature).

And there are some things which it seems like you can change at will (like whether or not you go out to dinner tonight) and other things which it seems you cannot change without great difficulty (like whether you freeze up and "choke" when speaking to large groups of people).

Aren't priming, response to the IAT, self-handicapping and a slew of other mental phenomena done on a level that cannot be accessed no matter how hard you try to access it?

So what's wrong with going ahead and calling all these things you're not conscious of and cannot choose to focus attention on "unconscious"?

Replies from: pjeby
comment by pjeby · 2009-08-10T11:49:11.728Z · LW(p) · GW(p)

So what's wrong with going ahead and calling all these things you're not conscious of and cannot choose to focus attention on "unconscious"?

Why don't you ask the scientists who've chosen to start using "other than conscious" and "non-conscious"? I imagine their insights would be useful. ;-)

My personal reason, though, is that the term "unconscious" implies a unity and coherence to these phenomena that does not exist, and is easily over-extended to a fallacy of grey -- an excuse not to dig, a "stop sign" for thinking about your preferences and paying attention to your mental processes.

And I particularly dislike the notion of an unconscious "mind" because it primes all sorts of misleading anthropomorphic projections of intention, purpose, and independent behavior, as well as unknowableness (after all, how can you ever really know what's in a "mind" other than your own?).

Replies from: gworley, taryneast
comment by Gordon Seidoh Worley (gworley) · 2009-08-10T18:30:27.502Z · LW(p) · GW(p)

So, if I understand this part of the thread correctly, pjeby is arguing that Yvain made a poor word choice that confused a straw man.

comment by taryneast · 2011-07-26T15:55:43.680Z · LW(p) · GW(p)

I personally prefer the term "subconscious" for these situations. It gives the impression that a subconscious process is one that is right there, swimming beneath the surface - leaving it able to be accessed by the conscious mind with a greater or lesser degree of ease... while still being a word that people recognise and perhaps don't have as many incorrect cached thoughts for.

non-conscious sounds like something you are when you've been knocked unconscious... :)

comment by RobinHanson · 2009-08-09T02:30:54.349Z · LW(p) · GW(p)

You misremember my position. I have not argued that the unconscious is right and the conscious wrong. I have argued (e.g., here) for trying to find a compromise to make peace between the conflicting parts of yourself. At the last OB/LW meetup I argued this point in person to several people, all of whom instead favored vigilant internal war.

Replies from: MichaelVassar, Yvain
comment by MichaelVassar · 2009-08-09T04:19:10.902Z · LW(p) · GW(p)

I generally advocate internal peace Robin, but oddly we haven't discussed this. Maybe a phone call some time when we both have a minute? You should probably text first ideally.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2009-08-10T18:59:17.565Z · LW(p) · GW(p)

This comment reads like an allegorical prescription for how one ought to resolve conflicts within oneself ;).

comment by Scott Alexander (Yvain) · 2009-08-10T04:32:26.260Z · LW(p) · GW(p)

Sorry. Corrected.

comment by Douglas_Knight · 2009-08-09T04:34:50.904Z · LW(p) · GW(p)

It is a classic finding in evolutionary psychology: the person who wants to fool others begins by fooling themselves.

I object to two points here: (1) calling it a finding and (2) calling it evolutionary psychology. It certainly is popular in evolutionary psychology, but I don't see any argument (certainly not in that link) that it is selected over generations rather than learned over a lifetime. More importantly, it's a hypothesis, not a "finding." There's very little evidence, largely because it's difficult to test. I also doubt it's specific enough to test.

Differences between the conscious and unconscious mind should usually correspond to differences between the goals of a person and the "goals" of the genome, or else between subgoals important today and subgoals important in the EEA.

That's evolutionary psychology, and it's rather at odds with the previous claim! (The first part might be a way of making the original claim more specific, but it's rather different from what people usually say.)

Replies from: Psychohistorian
comment by Psychohistorian · 2009-08-10T17:06:20.572Z · LW(p) · GW(p)

the person who wants to fool others begins by fooling themselves.

Whatever it may be a finding of, it appears true and it makes great sense. We (or most of us) experience some discomfort in lying or misleading that feels completely different from telling the truth. We also notice such discomfort in people, and when we do, we tend to suspect them of lying. We aren't always right, but it still draws our suspicion.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2009-08-11T06:35:33.272Z · LW(p) · GW(p)

the person who wants to fool others begins by fooling themselves

Maybe it's true, but I'd like to see evidence. Introspection is evidence. But you only claim to conclude from introspection that lying has costs, both internal psychic costs and leakage. That's very far from claiming that people avoid those costs by fooling themselves. Which is independent of the claim that it's the best way to do it, which is an evolution-ish claim from the link.

Also, I have no idea what "lie" means in this context. I'm pretty sure from other exchanges on this site that it does not lead to much successful communication. Maybe the general population, or even psychologists, can use the word uniformly clearly. But do they? I did once read a fairly specific EEA just-so story about this where it seemed to be purely about projecting confidence / arete and pretty far from the kind of declarative statement that I associate with lying. "I am the greatest" seems precise enough to be false, but I suspect that appearance is due to my speaking "nerd" rather than English. I don't think most politicians have specific enough beliefs that they have to change them to say that. (But boxers, yes.)

comment by RobinHanson · 2009-08-09T02:26:46.046Z · LW(p) · GW(p)

Conscious minds are potentially rational, informed by morality, and qualia-laden. Unconscious minds aren't.

Your entire argument for preferring conscious over unconscious minds is this last quick throwaway sentence? That's it? Come on, why can't unconscious minds be rational, informed by morality, or qualia-laden? And why are those the features that matter? Are you really implying this is so completely obvious that this one quick sentence is all that needs to be said? Declaring conscious goals to be the goals of the "person", versus unconscious goals as goals of the genome, just presupposes your answer.

Replies from: Yvain, Vladimir_Nesov, JulianMorrison, teageegeepea
comment by Scott Alexander (Yvain) · 2009-08-10T03:29:09.453Z · LW(p) · GW(p)

I guess I did consider it that completely obvious. If it's causing so much controversy, maybe I need to think about it more.

I'm defining my "conscious self" as the part of my mind that creates my verbal stream of thought and which controls what I believe I would do if I had infinite willpower. I'm defining "unconscious self" as the source of my inability to always go through with my conscious mind's desires.

By definition, my unconscious mind has no qualia / experiences / awareness, because if it did it would be part of my conscious mind (I suppose it's possible that it is a "different person" who has experiences that are not my experiences, but I have never heard anyone propose this before and don't know of any evidence for it.)

When I use the word "I", I refer to the locus of my qualia and experiences, and thus to my conscious mind. I have no selfish reason to care about my unconscious mind, because its state as happy or unhappy has no relationship to my state as happy or unhappy except insofar as the unconscious mind can influence the conscious mind. And I have no moral reason to care about my unconscious mind, because in my moral system only aware beings deserve moral consideration; the unconscious mind has no more awareness than a rock and deserves no more moral consideration than a rock does.

Along with my qualia, I identify with my rationality. My rationality is what tells me that there's very probably no such thing as ghosts. This satisfies my conscious mind, which then accepts that there's no such thing as ghosts. It does not satisfy my unconscious mind, which continues to make me flee haunted mansions or sleep with the lights on or something. My rationality is what tells me that I should ask that girl out because the worst she could do is say no. My conscious mind accepts that. My unconscious mind continues to use all of its resources to hold me back from asking.

It seems vanishingly unlikely that my unconscious actually has as supergoals "Flee haunted mansions" and "Never ask girls out" and is rationally achieving them. It seems much more likely that the unconscious is enacting genetic directives like "Avoid danger" and "Avoid taking risks that could lower your social status", but is too irrational to realize that although equivalents of these situations might have been problems in the EEA, they are no longer problems today. It thinks that "Flee haunted mansions" and "Never ask girls out" are appropriate subgoals of the supergoals "Avoid danger" and "Avoid taking risks that could lower your social status", but in fact they aren't. Since it's too dumb to realize this, I feel suitably superior to it to ignore its opinions.

The same is true of morality. My unconscious is what tells me to value the life of a photogenic American more than the life of a starving Ethiopian, to value the life of one specific person more than the life of fifty statistical people, to refuse to push the fat man onto the tracks in the trolley problem no matter how many lives it would save, et cetera. If another person had this morality, I wouldn't respect it in them, and if my own unconscious has this morality, I don't respect it in it either.

Let me also admit that I have a bias here. I've got obsessive-compulsive disorder. It means that my unconscious mind frequently tells me things like "Close that door there eighty-two times, or I will throw a fit and not let you feel comfortable for the rest of the day." I know that feeling is caused by miswired circuits in the basal ganglia. Why should I give miswired circuits in the basal ganglia the same respect as I give myself, a full intelligent human being?

All of my other unconscious urges seem closer to that urge to close the door eighty-two times than they do to anything rational or worth respecting.

Replies from: pjeby, RobinHanson
comment by pjeby · 2009-08-10T15:56:39.546Z · LW(p) · GW(p)

Since it's too dumb to realize this, I feel suitably superior to it to ignore its opinions.

Which is why you then experience akrasia. Or, if I was going to anthropomorphize(?), I'd say, "which is why it feels entitled to ignore your opinions right back". ;-)

See, "your" opinions don't count for all that much in what you actually do. If you want to change your behavior, it's your "unconscious" opinions that you need to change. But you won't change them without first being aware of them, and if you keep the attitude you have, you'll have no real inclination to pay attention to them or seriously consider them when designing for your requirements... thereby ensuring that your unconscious mind will be stuck with low-quality ways of getting those requirements met!

In other words, the reason your unconscious desires have such poor quality of thought-throughness and execution is precisely because you refuse to consciously participate in the process.

comment by RobinHanson · 2009-08-10T15:22:19.037Z · LW(p) · GW(p)

The words "conscious" and "unconscious" are widely used; I don't think it helps for you to make up your own definitions. Your evidence about the rationality and morality of your conscious mind come many from your personal conscious beliefs about those features within yourself; these could easily be biased and self-serving. I'm still not entirely clear on what qualia mean, but from what I do understand about them I don't see why the parts of your mind other than the part I'm talking to couldn't have them.

Replies from: Yvain, UnholySmoke
comment by Scott Alexander (Yvain) · 2009-08-10T19:14:47.890Z · LW(p) · GW(p)

I thought I was listing the standard definition of "conscious" and "unconscious" in a way that made it clearer why they led to my conclusion. If you have a different definition, what is it?

My beliefs about the rationality and morality of my unconscious mind do come from my conscious mind; this is where all my beliefs come from. When I say that a creationist is less rational than I am, or a Nazi is less moral than I am, I'm using those same beliefs from my conscious mind, and they are subject to the same biases. I have to either forfeit my right to judge entirely, or use those same judgments to judge my unconscious. I've learned (partly from you) ways to try to be less biased, and I try to use them here, but in the end all I have is reflective coherence.

Just to clarify your position, are you suggesting I have a moral duty to respect my unconscious mind's preferences, in the same way I would have to respect the preferences of another person? Or are you suggesting it would benefit my conscious mind to have inner peace?

Replies from: RobinHanson, christopherj, timtyler
comment by RobinHanson · 2009-08-10T19:52:35.883Z · LW(p) · GW(p)

So to summarize, you think your conscious mind is more rational, moral, and qualia-full than your unconscious mind, and the evidence you cite for these conclusions is: your conscious mind has these opinions. Have I got this right? Any idea what opinions your unconscious mind has on these matters?

Replies from: Yvain, arundelo
comment by Scott Alexander (Yvain) · 2009-08-10T20:17:59.041Z · LW(p) · GW(p)

You can phrase any argument that way:

"You think you're more likely to be correct about evolution than a creationist, and the evidence you cite for this conclusion is that this is your opinion."

Yes, it's my opinion. But it's my opinion for reasons. I thought I gave some good reasons why I thought my conscious mind makes better decisions than my unconscious. You rejected those because it was my conscious mind giving them. But if that were a sufficient criterion for rejecting reasons, you would have to reject everyone's reasons on any subject, from evolution to the roundness of the Earth.

Even aside from all those reasons, I haven't heard any reasons to think the unconscious is aware, rational, or morally reflective, and I think the burden of proof is on that position, since it would basically be saying there's a second person inside my head.

As for what opinion my unconscious mind has, yes, I have some idea. I predict it has no opinions at all, and is more of a collection of drives, instincts, and processes than the sort of entity that forms rational opinions on complicated issues. I doubt my unconscious "disagrees" with me about this any more than my kidneys disagree with my position on tax reform, or my tibia disagrees with my interpretation of quantum mechanics.

If I had multiple personality/dissociative identity disorder, I would be prepared to treat my alternate personalities as worthy of respect and cooperation. But I think my unconscious is probably more like a kidney or a tibia than like a whole other personality. I realize this is a factual claim, and am willing to change my mind if I hear evidence that suggests otherwise.

Replies from: RobinHanson
comment by RobinHanson · 2009-08-12T02:35:55.996Z · LW(p) · GW(p)

I didn't see you offering reasons - I just saw you declaring that in general the conscious is more rational and moral, this conclusion being so obvious it didn't need reasons. You later gave specific examples of beliefs in yourself where your conscious part thinks your conscious beliefs are more correct and more moral than your unconscious beliefs, but surely you can't expect that to be considered a sufficient argument about the general trend in all people on all topics.

I do think you could stand to read a bit more about the unconscious; I think you will find it far more complex and capable than you realize.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2009-08-12T04:23:30.308Z · LW(p) · GW(p)

Things I gave as evidence: the logical inconsistency of the unconscious mind having conscious experience, the irrationality of the unconscious mind continuing to pursue subgoals when they are clearly no longer connected to supergoals, the unconscious mind's vulnerability to proximity/scale biases when dealing with morality, and several others. I don't see how any of these can be dismissed as just "my conscious part thinks my conscious beliefs are more correct" with anything other than a Fully General Counterargument.

I've read plenty about the unconscious, and I admit it's astonishingly complex and capable. So are honeybees. But when bees perform unbelievably complicated tasks, I don't assume they therefore have human-level intelligence, and I think the unconscious' actions are more like the honeybees' than people's.

However, if there's something you think I should know more about, why not recommend me specific articles, authors, or books?

Replies from: SilasBarta, RobinHanson, teageegeepea, timtyler
comment by SilasBarta · 2009-08-12T04:30:06.739Z · LW(p) · GW(p)

the irrationality of the unconscious mind continuing to pursue subgoals when they are clearly no longer connected to supergoals, the unconscious mind's vulnerability to proximity/scale biases when dealing with morality, and several others.

The conscious is guilty of these too.

I've read plenty about the unconscious, and I admit it's astonishingly complex and capable. So are honeybees. But when bees perform unbelievably complicated tasks, I don't assume they therefore have human-level intelligence, and I think the unconscious' actions are more like the honeybees' than people's.

Okay, but carrying the analogy over, I'm sure you also don't trivialize the value of honey!

However, if there's something you think I should know more about, why not recommend me specific articles, authors, or books?

You could start with making yourself aware of the non-conscious mind's ability to solve CAPTCHAs, an AI-complete problem, and current conscious minds' inability to figure out how they do it with enough clarity to re-implement it in software.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2009-08-17T16:57:59.426Z · LW(p) · GW(p)

Actually, it's funny you mention CAPTCHAs as your example. If you're going to go that far, why not also attribute skill at chess to the unconscious? After all, it's got to be the unconscious that screens out most of the several dozen possible chess moves each turn and lets your conscious concentrate on the few best, and you can generalize from chess to practically all matters of strategy. Or for that matter, how about language? All my knowledge of English grammar was purely unconscious until I started studying the subject in high school, and 99% of my grammar use still comes from there.

So the issue's not whether it can perform complex tasks. I don't know exactly what the issue is, but I think it connects to the concept of "personhood" somehow. I question whether the unconscious is more than a collection of very sophisticated mental modules, in the same way that a bird's brain may have a flight dynamics module, an astronomical navigation module, a mate-preference-analysis module, and so on.

The computing hardware of my brain contains a program for recognizing letters, a program that detects potential mates and responds with feelings of lust, a program that interacts with my reward system in such a way as to potentially create alcoholism, and so on. They're all computationally very impressive. But I don't see why I should assign them moral status any more than I would feel morally obligated to listen to a laptop on which I had installed a program that detected the presence of beautiful women nearby and then displayed the words "mate with this woman". I don't want to privilege these programs just because they happen to be located inside a human brain and they get reflected glory from some of the other things human brains can do.

To make me want to assign them moral status, you'd have to give me evidence that there was something that it felt like to be my lust. This seems kind of category-error-ish to me. I feel my lust, but my lust itself doesn't feel anything. You may feel sorry for me for having to deal with my lust, but feeling sorry for my lust because I don't choose to satisfy it is in my opinion a waste of sorrow. It's also an infinite regress. If I feel unhappy because I have unfulfilled desire, and my desire feels unhappy because it's unfulfilled, does my desire's unhappiness feel something? Why stop there?

I have a feeling this problem requires more rigor than I can throw at it right now. I've been trying to think about it more clearly so as to hopefully eventually get some top-level posts out of it, but this is the best I can do at the moment.

Replies from: Eliezer_Yudkowsky, SilasBarta, timtyler
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-08-17T18:11:25.298Z · LW(p) · GW(p)

I question whether the unconscious is more than a collection of very sophisticated mental modules

So's your conscious. The unconscious just isn't connected up the right way for deliberation and reflectivity.

(IAWYC)

comment by SilasBarta · 2009-08-19T15:34:28.987Z · LW(p) · GW(p)

I'll bite the bullets in your first paragraph. So chess also relies on non-conscious skills. What trap did I just fall into?

I don't see why I should assign them moral status any more than I would feel morally obligated to listen to a laptop ...

There is a major difference between your unconscious mind and a laptop with the same output: specifically, the unconscious mind has a direct, seamless, high-bandwidth connection to your mind. When you recognize a face or a letter, you don't have to pass it to a laptop, look at the output, and read the output. From your conscious mind's perspective, you just get insta-recognition. This makes it more valuable than a laptop -- in all senses -- just as faster mental addition is better than a hand calculator that computes with the same speed.

If and when someone makes a machine that can do these tasks faster, and still interface seamlessly, in the unconscious's stead, then you will be justified in trivializing the latter's value. Just as you would feel less bad (though not completely indifferent) about the extinction of honeybees if honey could be more efficiently synthesized.

The only case where the above reasoning doesn't apply is, as you point out, in values. Why is the unconscious mind's decision of values, er, valuable? Why are you morally bound to its decrees of lust? The answer is, I don't know. But at the same time, I don't know how you can clip out the lust while retaining "you" -- not given your existing brain's architecture. That is, I disagree that the brain is as modular as you seem to think, at least if that's what you meant by the use of "modules".

And remember, pure value judgments are only a small fraction of its outputs.

comment by timtyler · 2009-08-18T19:30:06.468Z · LW(p) · GW(p)

Re: I question whether the unconscious is more than a collection of very sophisticated mental modules, in the same way that a bird's brain may have a flight dynamics module, an astronomical navigation module, a mate-preference-analysis module, and so on.

...and what do you think your conscious mind is, then - if not a collection of sophisticated mental modules?

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2009-08-18T21:51:58.118Z · LW(p) · GW(p)

Wikipedia gives Fodor's list of eight characteristics of "mental modules", which include "domain specificity", "fast speed", "shallow output", "limited accessibility", "encapsulation", et cetera, and quotes someone else as saying the most important distinguishing feature is "cognitive impenetrability".

In other words, "module" has a special definition that doesn't mean exactly the same as "something in the mind". So when I "accuse" the unconscious of being "modules", all I'm saying is that it's a bunch of single-purpose unlinked programs, as opposed to the generic and unified programs that make up the conscious mind. This seems relevant since it makes it harder to accept the idea of the unconscious as a separate but equal person living inside your brain.

If there are other definitions of "module" that include anything in the mind, and you're using one of those, then yes, the conscious mind is a module or collection of modules as well.

Replies from: timtyler
comment by timtyler · 2009-08-19T08:07:36.839Z · LW(p) · GW(p)

The conscious mind is probably pretty modular too.

http://en.wikipedia.org/wiki/Society_of_Mind

In some respects, consciousness is largely a perceptual filter - the attention filter - whose role it is to block out most sensory inputs from most systems most of the time. From that perspective, the contents of consciousness primarily consist of the outputs of normally-unconscious modules. The bit of the mind that switches attention around might itself be relatively small - and gains the illusion of size by being able to illuminate many areas of the mind - by damping down perceptions from everywhere else.

Anyway, you might have a case that consciousness is somehow "less modular" than all the other parts of the mind.

This whole "identifying with consciousness" business is totally bizarre to me. I hate to come on with the self-help - but: consciousness is tiny! You are so much more than that! Please repeat to yourself 1,000 times - "I am not my conscious mind!" The idea that you are your consciousness is an illusion created by your ego - which thinks it is the most wonderful thing in the world - that everything revolves around it - and that it is you. If you get some perspective, you should be able to see what utter nonsense that is.

Replies from: pjeby, Yvain
comment by pjeby · 2009-08-19T14:19:40.316Z · LW(p) · GW(p)

The bit of the mind that switches attention around might itself be relatively small - and gains the illusion of size by being able to illuminate many areas of the mind - by damping down perceptions from everywhere else.

And the PCT hypothesis for why this is so (predating the Society of Mind by a decade or so) is that consciousness is effectively the debugger or test rig for the rest of the brain: a tool whose job is the tuning, adjustment, and extension of the brain's unconscious control systems. The conscious mind is heavily engaged in any sort of skill acquisition, "noticing" what perceptions are associated with success or failure, and this noticing process is key to wiring up new control circuits.

From this perspective, consciousness is effectively an on-call maintenance person, a tech support rep for the unconscious. Which provides a good evolutionary reason for "higher" animals to have higher degrees of consciousness; the more flexible the creature, the more advanced the tech support required. ;-)

That humans have decided to rebel and take over the company instead of functioning in a strictly support capacity is a separate issue.

And when the revolution isn't going so well, we call it "akrasia".

So the key to a smooth takeover is realizing that if the unconscious machinery isn't working well, then you will suffer right along with your unconscious. You need a win-win solution, and the unconscious is pretty easily satisfied, being just a big dumb array of thermostats and all.

An array which -- being that you're its tech support rep -- you can actually rewire. In fact, most of what's in there, you consciously put there at some point, or at least didn't object to.

But if you treat it like it's an independent mind -- which it isn't -- and an enemy (which it also isn't) whose demands should be disregarded, then you're never even going to perceive what is actually going on in there, and therefore won't be able to tell how to change any of it. And you'll just keep fighting, instead of debugging.

Not really a good use of your time, IMO.
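
To picture the "big dumb array of thermostats", here is a minimal PCT-style control-loop sketch in Python. The module names, set-points, and gain are all illustrative assumptions, not pjeby's actual model:

```python
# A minimal sketch of the "array of thermostats" idea: each unconscious
# module is a simple control loop that acts to close the gap between a
# perceived value and a reference (set-point). All names and numbers
# below are illustrative assumptions.

def control_step(perception: float, reference: float, gain: float = 0.5) -> float:
    """One step of a proportional controller: output an action
    proportional to the error between set-point and perception."""
    return gain * (reference - perception)

# A bag of independent "thermostats", each minding one variable.
thermostats = {
    "blood_sugar":     {"reference": 90.0, "perception": 70.0},
    "social_approval": {"reference":  8.0, "perception":  3.0},
}

# "Rewiring" in this picture just means changing a reference value,
# which is roughly the role the comment assigns to consciousness.
for name, t in thermostats.items():
    action = control_step(t["perception"], t["reference"])
    print(f"{name}: error = {t['reference'] - t['perception']:5.1f}, action = {action:5.1f}")
```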

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2009-08-21T17:43:35.682Z · LW(p) · GW(p)

I think we agree. Your statement that the unconscious is "just a big dumb array of thermostats" is just what I was trying to get across, plus, as you said, that it isn't an independent mind.

I interpreted Robin (I'm still not sure if I'm right) as suggesting the unconscious is a full and separate mind whose preferences deserve respect for the same reason you'd respect another human's preferences. So that, for example, if you wanted to stay sober but your unconscious wanted to drink, you "owe" it to your unconscious to compromise, in the same way you'd be a bad friend if you didn't take a friend's preferences into account. All I am trying to say is that the unconscious doesn't deserve that kind of respect.

If you're saying that my conscious mind can achieve its own goals better by working with the unconscious in some particular way, well, you're the expert on that and I believe you.

Replies from: pjeby
comment by pjeby · 2009-08-22T01:53:01.304Z · LW(p) · GW(p)

So that, for example, if you wanted to stay sober but your unconscious wanted to drink, you "owe" it to your unconscious to compromise, in the same way you'd be a bad friend if you didn't take a friend's preferences into account. All I am trying to say is that the unconscious doesn't deserve that kind of respect.

If you're saying that my conscious mind can achieve its own goals better by working with the unconscious in some particular way,

Yes. The reason I argued with your notion that you shouldn't pay any attention to your unconscious goals is that, with relatively few exceptions, your unconscious goals are your goals.

Generally, they're either goals you share with your unconscious (like staying alive), or goals you put in there, based on what you thought was useful or valuable at some point in your life. Once such goals are acquired, any action patterns that lead towards those goals tend to stick until better action patterns are learned, or the goal is consciously deactivated.

But it isn't enough to say, "I don't want X any more", when you don't actually know what, precisely, X is. That's why you actually do need to pay attention to your unconscious goals, so that you can either find alternative ways to satisfy them, or verify that in fact, you no longer require them to be satisfied on your behalf.

Think of it as a safety interlock of sorts that allows you to maintain a sincere verbal belief and expression that you don't want X, while leaving the machinery in place to nonetheless acquire X without your conscious knowledge or consent.

To borrow the metaphor of the Sirens, your unconscious won't untie you from the mast until you stop fighting to get free. When you once more become the person who ordered yourself tied to the mast in the first place, then and ONLY then will your unconscious accept a reversal of your original orders.

That's why you need to pay attention to the goals, so you can step into the mental shoes of the "you" who put the goals in in the first place, and then either reconsider the original goal, or find a better way to get it that doesn't have side effects.

But unless you can actually acknowledge the desirability of the goal in question, your unconscious effectively assumes you're merely under social pressure to demonstrate your desire to adhere to the ways of the tribe, and ignores your attempt to give it "new orders".

comment by Scott Alexander (Yvain) · 2009-08-21T20:02:04.155Z · LW(p) · GW(p)

This whole "identifying with consciousness" business is totally bizarre to me. I hate to come on with the self-help - but: consciousness is tiny! You are so much more than that! Please repeat to yourself 1,000 times - "I am not my conscious mind!" The idea that you are your consciousness is an illusion created by your ego - which thinks it is the most wonderful thing in the world - that everything revolves around it - and that it is you. If you get some perspective, you should be able to see what utter nonsense that is.

Sounds like an outside-the-box box. So I have a job interview tomorrow morning and my conscious mind is telling me to go to sleep early, but my unconscious keeps me up worrying and watching TV until midnight. Should I respect the secret wisdom of the unconscious mind that my deluded ego-self is keeping me from understanding, or should I shut up and figure out some way to get to sleep?

I like Buddhism. I meditate and I'm very interested in exploring the depths of my unconscious mind and possibly at some point dissolving my ego and achieving full awareness, whatever the heck that means. But the "unconscious" referred to in the original post is what's telling the drunkard to get another shot of whiskey. I don't think the Buddha would approve of that particular manifestation of it any more than anyone else, and all I'm saying is that this drunkard is justified in being against this desire, rather than thinking that since it's their unconscious mind they have to accept it.

Replies from: timtyler
comment by timtyler · 2009-08-21T20:16:01.923Z · LW(p) · GW(p)

I feel like I already addressed such issues when I wrote: "We do not have to choose between these two theories." Sometimes the conscious goals are best, and sometimes the unconscious ones are. You have given some examples of the former, but there are also examples of the latter.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2009-08-21T21:54:39.790Z · LW(p) · GW(p)

Sorry, long time ago and different section of the comments. I think with that clarified I mostly agree with you anyway.

comment by RobinHanson · 2009-08-17T13:14:21.474Z · LW(p) · GW(p)

Just because your conscious mind isn't aware of experiences by your unconscious mind doesn't mean they don't exist. And just because your unconscious is subject to some biases doesn't mean your conscious mind does better on average.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2009-08-17T16:40:33.491Z · LW(p) · GW(p)

I don't disagree with any of that, but it's all phrased sort of as "you can't prove it doesn't." What should make me single out the hypothesis that it does as worthy of further consideration?

Replies from: RobinHanson
comment by RobinHanson · 2009-08-18T18:32:22.832Z · LW(p) · GW(p)

I don't know how else to say it: the things you point to as evidence supporting your claims just don't actually offer substantial support for those claims. To support claims about the relative features of two systems you need relative evidence; absolute evidence about one system just isn't very relevant.

comment by teageegeepea · 2009-08-12T04:52:20.319Z · LW(p) · GW(p)

Maybe individual honeybees aren't very intelligent, but hives are more so. Hopefully Anonymous often makes a similar point about markets, corporations, and other collective entities, and suggests some might even be (or become) conscious. I don't really care much about consciousness, but viewed as persisting and replicating entities they might be lumped in with other life (just like multicellular and unicellular life are).

comment by timtyler · 2009-08-18T19:27:59.477Z · LW(p) · GW(p)

The conscious mind does some pretty stupid things too. Like becoming a Catholic priest. That sort of thing consigns your potentially-immortal essence that's responsible for your very existence to the trash bin.

If this is a battle to see which system is the more stupid, we could be looking at examples of insanity from both sides all day.

comment by arundelo · 2009-08-10T22:07:31.994Z · LW(p) · GW(p)

Any idea what opinions your unconscious mind has on these matters?

If it had any, do you think it would be incapable of letting us know about them? If so, why?

(Maybe Yvain should have a session with a Ouija board.)

comment by christopherj · 2014-01-25T07:38:22.532Z · LW(p) · GW(p)

My beliefs about the rationality and morality of my unconscious mind do come from my conscious mind; this is where all my beliefs come from.

I'd dispute this. Just as an example, it is your unconscious mind which provides the processing power for you to read this statement, to understand spoken words, etc. It is largely your unconscious mind which declared some words interesting and others boring. It is your unconscious mind which declares things pleasant (and therefore morally good if you value hedonism). It is your unconscious that contains mirror neurons, without which your morality might be rather different. It is your unconscious that remembers and forgets, though with repeated effort and a consequent use of willpower your conscious can demand a few specific facts be remembered. Your conscious mind may have devised your moral system, but where did the initial values you seek to maximize come from?

I have to either forfeit my right to judge entirely, or use those same judgments to judge my unconscious.

Does it do you any good to judge your unconscious? If you could accomplish more of your conscious goals if you had more willpower, perhaps you could accomplish more of your conscious goals if you found a way to spend less willpower fighting your unconscious.

comment by timtyler · 2009-08-18T19:21:34.815Z · LW(p) · GW(p)

Your definitions of "conscious" and "unconscious" seem highly irregular to me. Best to stick to the dictionary here - I figure.

comment by UnholySmoke · 2009-08-10T18:40:24.348Z · LW(p) · GW(p)

I know that feeling is caused by miswired circuits in the basal ganglia. Why should I give miswired circuits in the basal ganglia the same respect as I give myself, a full intelligent human being?

But your rational/conscious/whatever mind is also made of neurons, and yet it makes mistakes, confuses morality, etc., and does things you know aren't right. Why does that get described as 'a full, intelligent human being' while your unconscious mind is just basal ganglia?

My rationality is what tells me that I should ask that girl out because the worst she could do is say no. My conscious mind accepts that. My unconscious mind continues to use all of its resources to hold me back from asking.

All true enough, but to go on and say that one is 'better' or 'righter' is not as trivial as you seem to assume. If your supergoal is 'get laid' then asking her out is the correct decision. If 'don't look silly' has more utilons in your head, then that's correct. If you want to argue that 1 'conscious' utilon is worth more than 1 'unconscious' utilon, then that's fine, but you'll have to demonstrate how and why.

Yvain, can you re-summarise your argument drawing better boundaries than 'conscious' and 'unconscious', and without things like 'intelligent, rational human being' to describe parts of your head that, to an unbiased observer, look a lot like other parts? If not, perhaps the conscious/unconscious boundary you're trying to draw is a false (though highly intuitive) one.

comment by Vladimir_Nesov · 2009-08-09T02:51:41.460Z · LW(p) · GW(p)

Clearly, arguing for which of the two points of view is the right one wasn't the focus of the post; the problem statement was.

comment by JulianMorrison · 2009-08-09T23:22:43.648Z · LW(p) · GW(p)

Drawing conclusions from what little neuroscience I've managed to overhear, the conscious mind isn't even the part where you live most of the time. It fires up when you're paying it attention. Instantaneous experience ("qualia") exists outside it, except for the ephemeral experience of being conscious. Playback experience exists inside it but references the outside. However, it's the part where your human-level intents exist. If all the rest of the brain were copied to a sim, but not the conscious mind, I think you'd have no difficulty labeling that "not me". Miss a comparably sized chunk of your unconscious, and you might not even notice.

comment by teageegeepea · 2009-08-09T21:36:33.158Z · LW(p) · GW(p)

There's a series of posts on the foolishness of "qualia" here. I agree with it and share the low regard for philosophy also found there (relevant for this post, that would also be low relative to cynical economics). I also think what pjeby said above makes sense. The common thread is finding little significance in what we call "consciousness", preventing it from holding privileged status.

Replies from: Psychohistorian
comment by Psychohistorian · 2009-08-10T17:10:53.259Z · LW(p) · GW(p)

"He sharply stubbed his toe on a large rock and proclaimed, 'Thus, I refute this!'"

Replies from: Annoyance
comment by Annoyance · 2009-08-13T15:32:05.108Z · LW(p) · GW(p)

That traditional anecdote (and its modified forms) only illustrates how little the pro-qualia advocates understand the arguments against the idea.

Dismissing 'qualia' does not, as many people frequently imply, require dismissing the idea that sensory stimuli can be distinguished and grouped into categories. That would be utterly absurd - it would render the senses useless, and such a system would never have evolved.

All that's needed is to reject the idea that there are some mysterious properties to sensation which somehow violate basic logic and the principles of information theory.

Replies from: Psychohistorian, PrawnOfFate
comment by Psychohistorian · 2009-08-13T21:28:04.623Z · LW(p) · GW(p)

My understanding of qualia is that mysterious is not a definitional property, i.e. "Qualia can be explained in a reductionist sense" is not a self-contradictory statement. The existence of qualia simply means that sense-experience is a meaningful event, not that it is a supernatural one.

My view is that Mary's Room is fundamentally mistaken; what red looks like is a fact about Mary's brain, not about light of a certain wavelength. Mary can know everything there is to know about that wavelength of light without knowing the experience of a certain combination of neurons firing. Since we don't actually live in Mary's brain, we can't understand the qualia of "Mary's brain being stimulated by red light", but this is a limitation on our brains, not a "mystery." Perhaps a conscious being could exist that could construct others' brains and experience their qualia; we just don't know. But still, the fact that qualia are a potentially non-replicable hardware feature does not make them somehow supernatural.

Replies from: thomblake
comment by thomblake · 2009-08-14T14:45:09.519Z · LW(p) · GW(p)

I take a different but compatible objection to Mary's Room - that is, as Mary is said to know everything there is to know about the color red, she therefore knows exactly what it would be like to experience it, and so is not surprised.

comment by PrawnOfFate · 2013-04-22T13:21:17.418Z · LW(p) · GW(p)

All that's needed is to reject the idea that there are some mysterious properties to sensation which somehow violate basic logic and the principles of information theory.

Blatant strawman.

comment by tolstoshev · 2009-08-10T18:56:31.379Z · LW(p) · GW(p)

The example with the unrepentant drunk reminds me of this joke:

A hunter goes into the woods to hunt a bear. He carries his trusty .22 rifle with him. After a while, he spots a very large bear, takes aim, and fires. When the smoke clears, the bear is gone.

A moment later the bear taps the hunter on the shoulder and says, "No one shoots at me and gets away with it. You have two choices: I can rip your throat out and eat you, or you can drop your trousers, bend over, and I'll do you in the ass."

The hunter decides that anything is better than death, so he drops his trousers and bends over, and the bear does what he said he would do. After the bear has left, the hunter pulls up his trousers again and staggers back into town. He's pretty mad.

He buys a much larger gun and returns to the forest. He sees the same bear, aims, and fires. When the smoke clears, the bear is gone. A moment later the bear taps the hunter on the shoulder and says,

"You know what to do."

Afterwards, the hunter pulls up his trousers, crawls back into town, and buys a bazooka. Now he's really mad. He returns to the forest, sees the bear, aims, and fires. The force of the bazooka blast knocks him flat on his back. When the smoke clears, the bear is standing over him and says,

"You're not doing this for the hunting, are you?"

comment by RHollerith (rhollerith_dot_com) · 2009-08-09T17:56:09.663Z · LW(p) · GW(p)

My conscious mind is more disconnected from my natural human feelings and my natural human motives and agendas than most people's is. As best as I can tell, that makes it more difficult for me to motivate myself to do things that any sane person would agree I need to do regardless of the details of what my goals or motives are. But also, as best as I can tell, my conscious mind is also significantly less prone to self-deception than most people's is. (And yes, I realize that in this community, that is a boast.) The fact that the conscious mind tended to serve in the EEA as the public-relations officer of the mind does not in any way make it less probable that every human is completely dependent on the conscious mind for rationality -- at least the kind of rationality necessary for science, philanthropy and effective pursuit of long-term self-interest.

Replies from: taryneast, MichaelVassar
comment by taryneast · 2011-07-26T17:23:19.923Z · LW(p) · GW(p)

My conscious mind is more disconnected from my natural human feelings and my natural human motives and agendas than most people's is.

I'm curious: in what way? Can you give examples?

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2011-07-26T18:14:26.032Z · LW(p) · GW(p)

My conscious mind is more disconnected from my natural human feelings and my natural human motives and agendas than most people's is.

I'm curious: in what way? Can you give examples?

I wrote that 23 months ago, and I no longer consider it true. Sorry for the drama.

comment by MichaelVassar · 2009-08-10T04:16:22.185Z · LW(p) · GW(p)

Very important points.

comment by SilasBarta · 2009-08-09T01:42:53.543Z · LW(p) · GW(p)

I like how you've identified the subtle value judgment in a supposedly value-free scientific belief.

However, you lost me at the end here:

But notice how the theory you choose also has serious political implications. Consider how each of the two ways of looking at the problem would treat this example:

A wealthy liberal is a member of many environmental organizations, and wants taxes to go up to pay for better conservation programs. However, she can't bring herself to give up her gas-guzzling SUV, and is usually too lazy to sort all her trash for recycling.

Either side would say (after becoming sufficiently informed and thinking about the issue long enough):

"Ms. SUV Liberal's consumption makes no noticeable difference to the environment. The only way to achieve her environment goals is through collective action -- i.e., add a 'cooperation enforcement mechanism' to this Prisoner's Dilemma. Otherwise, individual 'cooperation' (reducing fuel consumption, recycling, etc.) simply rewards everyone else who 'defects', and the desired state of 'sustainable global society' is an unstable node.

"Ms. SUV Liberal's decision to drive an SUV and not recycle might have symbolic value, but that dynamic was not the focus of your example, and therefore, her driving of an SUV is not holding back progress toward her professed goal saving the environment."

Is there another political example you can give?

I myself throw my support squarely behind the Naive Theory. Conscious minds are potentially rational, informed by morality, and qualia-laden. Unconscious minds aren't, so who cares what they think?

Because of your clarification in footnote 5, I agree with the point you're making here, but I think you've spoken too broadly. Unconscious minds do a lot of difficult, useful cognitive labor: pattern recognition, regularity detection, and yes, even value judgments. While we'll often be able to identify where the unconscious mind is not acting optimally, that's a far cry from "who cares what the unconscious thinks".

comment by lukeprog · 2011-06-18T16:15:41.305Z · LW(p) · GW(p)

A ton of recent work in social psychology and social neuroscience suggests that quite a bit of the 'public relations officer' in the brain is processed unconsciously. I'll probably write a post on this eventually.

comment by DonGeddis · 2009-08-09T18:50:56.781Z · LW(p) · GW(p)

I think there's also a short-term/long-term thing going on with your examples. The drunk really wants to drink in the moment; they just don't enjoy living with the consequences later. Similarly, in the moment, you really do want to continue reading Reddit; it's only hours or days later that you wish you had also managed to complete that other project which was your responsibility.

I bet there's something going on here, about maximizing integrated lifetime happiness, vs. in-the-moment decision-making, possibly with great discounts to those future selves who will suffer the negative effects.
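
That short-term/long-term tension is usually modeled with hyperbolic discounting, V = A / (1 + kD). Below is a minimal numeric sketch; the amounts, delays, and k are made-up assumptions, but they show the signature preference reversal:

```python
# A toy illustration of preference reversal under hyperbolic
# discounting, V = A / (1 + k*D). Amounts, delays, and k are all
# made-up assumptions for illustration.

def hyperbolic_value(amount: float, delay_days: float, k: float = 1.0) -> float:
    """Present value of a reward that is delay_days away."""
    return amount / (1 + k * delay_days)

drink, family = 10.0, 100.0  # small-soon vs. large-late reward

for days_until_drink in (30, 0):
    v_drink = hyperbolic_value(drink, days_until_drink)
    v_family = hyperbolic_value(family, days_until_drink + 30)
    winner = "drink" if v_drink > v_family else "family"
    print(f"{days_until_drink:2d} days out: drink={v_drink:6.2f}, family={v_family:6.2f} -> {winner}")

# Viewed from a month away, the large-late reward wins; at the moment
# of choice the ranking flips. Exponential discounting never reverses
# like this, which is one candidate mechanism behind akrasia.
```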

comment by haig · 2009-08-09T06:09:21.014Z · LW(p) · GW(p)

The Cynic's Theory may in fact describe a true state of mind, but it is not describing akrasia. The Cynic's Theory might better describe those minds whose preferences are instilled by external influences that conflict with their internal, consciously hidden preferences. An example may be someone who always thought they wanted to be a doctor but deep down knew they wanted to be an artist.

However, when I think of akrasia, I don't think of incompatible goals or hidden preferences; I think of compatible goals but an inability to consciously exert control of your willpower in achieving the agreed-upon goals. When you finally stop procrastinating and get going, you feel wonderful and wonder why you couldn't have done it sooner--but then you go through the same problem again the next time. Akrasia is a problem of forming/eliminating automatic behaviors, aka habits. So in my opinion, the Cynic's Theory does not shed any light on the problem of akrasia.

Replies from: gwern
comment by gwern · 2009-08-10T06:44:25.457Z · LW(p) · GW(p)

OK, so your major piece of evidence arguing against the 'conflicting-minds' paradigm is that once we conquer some akrasia and get started, we 'feel wonderful'?

I don't think that works. Akrasia is about things we do enjoy, and also about things we don't enjoy.

I have akrasia about going to my Taekwondo classes, even though I know perfectly well that I'll enjoy them once I'm there. But I also have akrasia about things I don't enjoy doing (like working through homework problems) - and this latter case is by far the majority of akrasia instances.

The former is easily explained by different time-preferences - one part of me prefers the here and now, while another part recognizes that stopping whatever I'm doing, getting ready, and going to class will lead to a more enjoyable and healthy hour than my current activity. And the latter is easily explained the same way by multiple factions as well, as simply one faction valuing the abstract utility or long-term consequences over avoiding the short-term disutility.

Forming/eliminating habits has nothing to do with it, except as a tactic to support one side over the other. ('I don't want to go to Taekwondo!' 'But this is what we usually do at this time, and someone's waiting - come along already.') And this insight - that there are multiple factions - is the contribution of the naive/cynical theory. Once we know that, we can figure out how to exploit the stupidity or greed of the disfavored faction.

Replies from: haig
comment by haig · 2009-08-11T06:37:53.788Z · LW(p) · GW(p)

You might have misunderstood me. I did not limit akrasia to only things we enjoy. I said actually getting going on the task, whether inherently enjoyable or not, is what 'feels wonderful'. I hate going to the dentist, but actually engaging in the process of going to the office and getting it over with feels pretty good as an accomplishment.

And forming the habit of not procrastinating is a very big part of it, IMO. To stop putting things off and automatically jump into a task is a positive habit that does a great deal against akrasia. Why do you think juvenile delinquents get sent off to boot camp or some other long period of regimented experience? To form those habits which will mold their character accordingly.

Replies from: gwern
comment by gwern · 2009-08-12T23:01:42.472Z · LW(p) · GW(p)

Would it be evidence against your theory that the benefits of boot camp are not clear for juvenile delinquents?

http://en.wikipedia.org/wiki/Boot_camp_%28correctional%29#Criticisms

comment by billswift · 2009-08-09T05:32:44.110Z · LW(p) · GW(p)

I realized as soon as I commented that the quote you opened with was too simplistic, that there was more than one possible explanation for "akrasia". I expanded it into a post on my blog, here - http://williambswift.blogspot.com/2009/08/akrasia-as-revealed-preference.html . (I tried posting things on Less Wrong twice; neither time did I get any comments or votes, so I don't even know if I did it right so that anyone else could even see them.)

EDIT: I still don't believe in akrasia as lack of willpower or weakness of will, just that there are more confounding preferences than just signaling preferences.

Replies from: cousin_it, SoullessAutomaton
comment by cousin_it · 2009-08-10T15:34:15.949Z · LW(p) · GW(p)

Most likely you posted to "Drafts for billswift" rather than "Less Wrong". It's right next to the "Submit" button in the editor.

comment by SoullessAutomaton · 2009-08-09T15:17:48.192Z · LW(p) · GW(p)

(I tried posting things on Less Wrong twice; neither time did I get any comments or votes, so I don't even know if I did it right so that anyone else could even see them.)

It doesn't look to me like you've submitted anything so I'm going with "you did it wrong". I recall people mentioning that it's apparently a somewhat nonintuitive process and it's easy to make it look like you submitted something when only you can see it, so you're in good company.

comment by AndrewKemendo · 2009-08-09T01:51:33.009Z · LW(p) · GW(p)

So why prefer one theory over the other?

I doubt it is necessary to choose one or the other. I would imagine that both theories could be well represented in the population.

Conscious minds are potentially rational, informed by morality, and qualia-laden. Unconscious minds aren't, so who cares what they think?

Are you suggesting that unconscious motivation cannot be changed by the "rational" conscious? I would argue that any significant behavioral modification is doing exactly this. One example that I think demonstrates this well is martial arts training. In the beginning the trainee unconsciously flinches and turns away from an oncoming threat while the trained expert tracks and deflects the threat unconsciously.

comment by ILikeLogic · 2016-01-29T20:47:35.027Z · LW(p) · GW(p)

Often in psychotherapy a person's goal is to resolve a conflict between the unconscious mind and the conscious mind in favor of the conscious mind. You may hear it called an irrational unconscious belief. Someone may unconsciously feel unworthy of respect and acceptance but consciously believe that this is irrational. What is interesting is that psychotherapy can work exactly as desired if the logic of the unconscious belief can be made fully conscious. It will not happen through mere deduction, however. It has to be done by consciously accepting the feeling that results from the unconscious belief and embracing it. Then the unconscious logic can be clarified.

The conscious mind has resources to test the veracity and validity of unconscious beliefs. The unconscious itself cannot do this. So an unconscious belief will usually remain unchanged even if one is aware that it is problematic. It is the psychological equivalent of debugging faulty code. The code will not change just because the user is frustrated by it. It will only change if a programmer edits it and runs the edited version in place of the faulty version.

That is the major obstacle in psychotherapy: getting the code into the editor, so to speak. The logical flaws in irrational unconscious beliefs are not difficult to see. They are usually obvious. What is difficult is getting them clear in consciousness. It doesn't happen naturally. That is why things like mindfulness and Gendlin's Focusing are considered very useful by many psychotherapists. The real obstacle is making the implicit unconscious beliefs conscious.

comment by A1987dM (army1987) · 2012-11-23T23:39:54.778Z · LW(p) · GW(p)

I mostly agree, but there's another criterion to consider: would the person feel better after following their conscious preferences, or after following their revealed preferences?

After I spend the whole afternoon lying in bed with my laptop idly browsing the Web/watching sitcoms/etc., I feel much, much worse than if I had noticed I was wasting my time, shut my laptop down, taken a nap, and then done something fulfilling. So, in that case, it's my conscious preferences that are right. (This doesn't happen that often now that I have LeechBlock installed.)

OTOH, when three days before an exam a friend of mine phoned me and said, “What are you doing tonight? Don't tell me you're staying home to study, 'cause I know that when you do that you end up spending all of the time on Facebook”, I didn't answer “Yes, I'm staying home to study, and no, I've replaced my Facebook password with a random one and mailed it to myself in the future so I won't be able to log in until then”, and I did go out with them; and three days later I aced the exam all the same (even in spite of this). So in that occasion, my revealed preferences were right, and following through with my plan to outsmart myself (i.e. staying home and study, with myself locked out of Facebook) would have been a net negative (i.e. I wouldn't have done better at the exam, but I would have missed the night out).

ISTM that the former scenario (conscious and revealed preferences disagreeing, with the former being right) is waaay more common than the latter. Unless you go overboard with precommitment devices, as I've been doing lately!

comment by RHollerith (rhollerith_dot_com) · 2009-08-09T18:53:38.902Z · LW(p) · GW(p)

When I am feeling poorly, there is a part of my mind that seems to be able to veto pretty much any activity I am engaged in except for primitive motor actions. The activities that get the veto seem to be the kinds of activities that would scare or repel a small boy. Even when I don't feel particularly poorly, my trying to do something extremely scary or repellent to the little boy will probably draw a veto.

The part with the veto power, which I sometimes refer to as the Saboteur, seems to be able to flush my working memory. For example, it can cause me to forget where I put something I had in my hand a moment ago. The thing I had in my hand tends to be something I need to continue with the activity the little boy is trying to veto. If the little boy is putting up a particularly strong fight, then after I retrieve the item, I often find to my amazement that (for no good reason that I can imagine) I have put it down (again) in a different place, but (again) even though I just put it down, I cannot recall where. I recall going through four cycles of misplacing an item and retrieving it, one cycle right after the other. I have some brain damage, which probably significantly impairs my working memory, and I am currently very confused about how many of these "sabotage incidents" would have happened if I had not incurred the brain damage. Obviously, if none of them would have happened if I had not incurred the brain damage, "sabotage incident" is a misnomer, and I am assigning an agenda or a motive to cognitive impairments that in reality have no agenda or motive behind them. I frequently forget why I got up out my chair. I frequently forget whether the pills I just swallowed contain important pill X. Most of these failures of working memory have no rebellious or sabotaging motive behind them: the question is whether some of them do. If the reader has any insights into this, I am all ears.

When I say that the kinds of things that seem to get a veto seem to be the kinds of things that would scare a young boy, the reader will tend to start to suspect that I had a brutalizing childhood, and the reader would be right.

I seem to be in a mood for self-disclosure today. I publish this only because Richard Hollerith is an alias that I do not plan to use for, e.g., job hunting and because I made a note to re-read this comment at a later date to re-consider whether I want it on the public internet. I ask everyone not to quote the personal parts of this comment because of course if I do decide to delete or prune the original, I would be unable to delete the quotes.

comment by [deleted] · 2014-01-20T17:01:31.753Z · LW(p) · GW(p)

I propose that everyone who claims to believe in the Cynic's Theory self-modify to lack any conscious signalling system, or to stop caring what signals others receive from them in any deep way. Perhaps brain surgery to achieve their desired sociopathy?

Except nobody does that, so their revealed preference from not becoming sociopaths is to side with their conscious mind over their unconscious one.

Replies from: blacktrance
comment by blacktrance · 2014-01-20T17:09:25.845Z · LW(p) · GW(p)

I propose that everyone who claims to believe in the Cynic's Theory self-modify to lack any conscious signalling system, or to stop caring what signals others receive from them in any deep way.

Why would they want to do that? This kind of signaling can be socially advantageous. It's not hard to imagine that someone would prefer people's opinion of them to be "Poor guy, he can't help himself" rather than "He's so evil he likes alcohol more than his family".

Replies from: None
comment by [deleted] · 2014-01-20T22:06:09.666Z · LW(p) · GW(p)

The point is not whether it's advantageous to do so. The point is that you would never want to do so. The Cynic's Theory of human irrationality says, "We're all horrible people but we mask it with social signalling", but none of the Theory's believers, given the choice, would fully embrace their own supposed horribleness.

Replies from: blacktrance
comment by blacktrance · 2014-01-20T22:50:41.016Z · LW(p) · GW(p)

"We're all horrible people but we mask it with social signaling" is not an accurate statement of the Cynic's Theory. From what I've seen, Cynic's Theory can be stated as "When people claim weakness of will, their actions are what they really prefer, and their claim of akrasia is signaling", which is orthogonal to being horrible. For example, one can imagine an inefficient SS guard at a concentration camp who wants to kill as few Jews as possible (without getting himself in trouble, or he knows that he'd be replaced with someone worse), so he says something like, "I know I should kill Jews, but my will is weak and it's easy to get distracted". This is a more virtuous case of falsely claiming akrasia.

"I am socially expected to do X, but I prefer to do Y, so I will do Y and claim weakness of will" says nothing about whether X is morally better or worse than Y, so "Cynic's Theory" is a misnomer. "Skeptic's Theory" is a more accurate name.

As for stopping caring about what signals others receive - that would be a harmful self-modification, because the signals people receive affect how they treat you, and it's hard not to care about at least some of that.

comment by Nanani · 2009-08-10T02:02:14.883Z · LW(p) · GW(p)

Does it even have to go as far as conscious vs. unconscious mind?

As the saying goes, do not attribute to malice that which can be explained by stupidity.

The drunk in your example is far more likely to be failing on a much simpler level: not taking long-term considerations into account; not updating when steps taken to stop drinking fail to work, and therefore not trying a different approach; repeating the same triggers (like going to the bar) and expecting too much of his willpower reserves; and so on. Being drunk also hampers efforts.

Perhaps akrasia sufferers here have cause to look for a consciousness-based explanation, but only because they have presumably gone through the list of biases and failure modes and applied rational ways of correcting them. This is not true in the general case of an ordinary human.

comment by JamesAndrix · 2009-08-09T15:08:08.175Z · LW(p) · GW(p)

I don't like the idea of becoming the kind of entity that consistently decides not to do the fun frivolous thing.

If a general decides that it's more strategically important to defend a particular bridge than a particular city, he could self-modify so that he no longer cares about the city, so that those desires don't get in the way of defending the bridge. He can only do that so many times before he no longer cares about the nation, just 'defending' it.

Replies from: kpreid
comment by kpreid · 2009-08-09T16:25:14.345Z · LW(p) · GW(p)

I don't like the idea of becoming the kind of entity that consistently decides not to do the fun frivolous thing.

Then don't.

If a general decides that it's more strategically important to defend a particular bridge than a particular city, He could self modify so that he no longer cares about the city, so that those desires don't get in the way of defending the bridge. He can only do that so many times before he no longer cares about the nation, just 'defending' it.

In general, this is subgoal stomp. The hypothetical general's error is in discarding what he starts out wanting (supporting his nation, I assume) in favor of wanting what he initially thinks will help achieve it (military defense of specific positions).

(I'm pretty sure humans do this already, even without general self-modification.)

If you don't want to lose what you care about, don't change what you care about.

Replies from: JamesAndrix
comment by JamesAndrix · 2009-08-09T17:20:50.930Z · LW(p) · GW(p)

Then don't.

Well, I'm also concerned about other people here doing that.

If you don't want to lose what you care about, don't change what you care about.

Isn't that what anti-akrasia does, though? If I like coffee but dislike some effect of coffee, and I self-modify into someone who at least doesn't drink coffee, and maybe doesn't like it anymore, then I think I've lost something for something else, but not things in an easy-to-parse subgoal-supergoal relationship.

A general choosing between two cities might have been a better example.

In other words, you should switch the train to the tracks with one person instead of five, but you shouldn't self-modify so that it is an easy thing to do.

Replies from: pjeby, Psychohistorian
comment by pjeby · 2009-08-10T11:58:38.904Z · LW(p) · GW(p)

Isn't that what anti-akrasia does, though? If I like coffee but dislike some effect of coffee, and I self-modify into someone who at least doesn't drink coffee, and maybe doesn't like it anymore, then I think I've lost something for something else, but not things in an easy-to-parse subgoal-supergoal relationship.

The wonderful thing about the brain is that if what you get out of something is actually important to you, you probably won't succeed in getting rid of it for long, or will find some other way to get whatever you got out of it before.

(That's also the really terrible thing about the brain, since that same principle is also where akrasia and "meta" akrasia come from!)

comment by Psychohistorian · 2009-08-10T16:55:34.967Z · LW(p) · GW(p)

If I like coffee but dislike some effect of coffee, and I self-modify into someone who at least doesn't drink coffee, and maybe doesn't like it anymore, then I think I've lost something for something else

There's a big qualitative difference between not doing something and not liking it. This is based on personal experience, so YMMV, but if you can self-modify to the point of not liking something that you think you should avoid (coffee, sweets, etc), you actually experience positive utility from disliking it. Merely avoiding it may cause some difficulty, but actually self-modifying to dislike a perceived bad thing feels good.

The counterfactual can't be compared too easily, but if it's something you were feeling guilt over, you're probably better off.

Replies from: JamesAndrix
comment by JamesAndrix · 2009-08-10T22:19:25.471Z · LW(p) · GW(p)

But how far can you go down that road before you're not human any more (in a bad way)? Coffee isn't a bad thing; it's a good thing with some side effects people don't like. Self-modifying to no longer like a good thing may be good, but it seems unstable.

What if we happened to live in a world where everything that tasted good was bad for us? Would we be better off eradicating taste? What if the things we don't yet know about the universe have similar features?

Replies from: taryneast
comment by taryneast · 2011-07-26T17:34:54.432Z · LW(p) · GW(p)

Yes, good point. This reminds me of a question once brought up at the meetup:

"If you could modify yourself so that you really liked working hard to improve the world, over your current life enjoyments, would you?"

The idea being - would you modify yourself so you valued, say, working for charities, rather than playing computer games and listening to music (or whatever it is you do now instead of working at the local soup kitchen every weekend)?

If you did it - you know that you would actually be made happy by doing the kinds of things you currently think you "should do more of but don't get around to very often".

My gut reaction is to flinch away from doing this... I'd be interested in exploring why that is... but have no idea really where to start.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-07-26T18:53:40.402Z · LW(p) · GW(p)

Well, a somewhat obvious answer is that you might fear that valuing doing the things you think you should do more of will leave you worse off by your current standards.

Replies from: taryneast
comment by taryneast · 2011-07-26T21:32:54.885Z · LW(p) · GW(p)

Hmm, not sure if that's it. According to my current set of "wouldn't it be awesome if" standards, I'd actually be much better off. I do get a feeling that I'd be less "real" though -> programmed to be a task-loving machine instead of a person with real, human desires.

However, I certainly see the potential for "ick" scenarios where I would consider myself to be worse off (in a "poor sad person" kind of way) - e.g. programming myself to love housework - and ending up loving it so much that I turn myself into a slave/maid for other people who take advantage of it... poor example perhaps, but hopefully you get the drift.

comment by randallsquared · 2009-08-09T01:34:29.854Z · LW(p) · GW(p)

I suppose I can only make my point if you often have the same experience, or if you've caught someone else fighting akrasia when they didn't know you were there.

Indeed. But as you say, it's mostly whether you identify more strongly with the part of you that wants X (a drink, a cigarette, or, in my case, a bag of Utz Medley chips...) or the part of you that wants to be sober or healthy.

It's not uncommon for me to say, "Hey, I want X!" and then after my actions reveal that I didn't want X as much as something else, to say, "Wow, I guess I was wrong and didn't really want X."

Edit: my point here is that almost everyone sometimes has the experience of not knowing what they want, so it's easy to say that people also often have incorrect beliefs about their own preferences.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-08-09T02:03:19.882Z · LW(p) · GW(p)

A lot of that comes down to how you define the "self".

Most of the time, when people speak of themselves in the first person, they seem to be referring to the reflective observer, the audience of the internal narrative, Hofstadter's strange loops, whatever you want to call it.

To what extent that observer is just along for the ride and subject to the whims of an arational but clever ape that calls the shots I don't know, but a lot of confusion arises from fuzzy use of the first person.

Replies from: SilasBarta
comment by SilasBarta · 2009-08-09T04:38:41.313Z · LW(p) · GW(p)

a lot of confusion arises from fuzzy use of the first person.

Indeed. Like many paradoxes, the whole problem of akrasia (both the philosophical side and the personal-life side) may very well boil down to the subtle assumptions invoked by the very use of the category "self".

comment by huono_ekonomi · 2009-08-10T08:59:47.650Z · LW(p) · GW(p)

Excellent point.

However, neuro professionals seem to prefer more complex explanations, and avoid using the term unconscious.

To me, the theory that seems most consistent with research and my personal experience is that we have multiple selves (self modes/ego states/roles, etc.), and only one of them is active at any time. A single continuous consciousness or self is an illusion of the mind.

Each self has its own will. For example, one self can decide to wake up early. If another self is active when we wake up, it can decide to do something else. Being consistent is not so much about willpower, but about which self is active.

You can start to control your behavior better by first observing which self is active at any time, later trying to control that, and not identifying with harmful selves; in other words, by increasing your meta-cognitive capabilities.

You could say that your consciousness (or level of consciousness) increases when you are conscious of your consciousness. This is really very difficult, and in the beginning you can do it only for short moments before you forget to be second-degree conscious again.

Replies from: Strange7
comment by Strange7 · 2014-01-23T00:38:32.021Z · LW(p) · GW(p)

It sounds as though not enough of your research explored the possibility of you having some manner of dissociative disorder.

Replies from: memoridem
comment by memoridem · 2014-01-23T04:57:45.867Z · LW(p) · GW(p)

A disorder would be a description of what the person is reporting, since you can't scan their brain to establish the diagnosis. An important problem with this approach is that we don't know whether there's an impaired processing of the social necessity called self, or whether the person just perceives or describes normal processing differently, or whether they label a different process with the word self than people normally do.

comment by timtyler · 2009-08-09T16:27:40.061Z · LW(p) · GW(p)

We do not have to choose between these two theories. Sometimes the conscious goals are best, and the unconscious procrastinates in an undesirable fashion. Sometimes the unconscious is doing what is best, while consciousness struggles to cover the actions with a veneer of acceptability, for example by dissociating itself from them.

Replies from: huono_ekonomi
comment by huono_ekonomi · 2009-08-10T09:06:41.917Z · LW(p) · GW(p)

A few examples where the "unconscious" beats the "conscious" hands down are dancing and driving a car.

Replies from: Richard_Kennaway, Vladimir_Nesov, timtyler
comment by Richard_Kennaway · 2009-08-10T10:28:59.938Z · LW(p) · GW(p)

A few examples where the "unconscious" beats the "conscious" hands down are dancing and driving a car.

What is "unconscious" about either of those?

ETA: Both of them are physical and mental skills, deliberately learned. In this, they do not differ from learning yoga postures, learning a musical instrument, or learning any sport.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2009-08-10T21:32:17.923Z · LW(p) · GW(p)

You get better at yoga postures, playing a musical instrument, and playing sports when you do those things enough that the bits you commonly do over and over are picked up on by your subconscious so your conscious doesn't have to worry about them anymore. That's my guess.

comment by Vladimir_Nesov · 2009-08-10T18:32:52.351Z · LW(p) · GW(p)

You are thinking of different concepts from the ones discussed in the post. These are not conflicting goals or drives, these are different ways of implementing a skill: intuitive vs. deliberative (however they are properly called). Either of these can be deployed for the ends of either conscious or subconscious preferences.

comment by timtyler · 2009-08-10T17:39:35.613Z · LW(p) · GW(p)

A more classical example would be various kinds of illicit sex (underage, across marriage vows, etc.). Probably good for the genes - but often needing some rationalisation in the face of conventional moral norms - and sometimes the individual's own sense of morality.

comment by MugaSofer · 2013-01-15T11:00:13.538Z · LW(p) · GW(p)

I never consciously think "Wow, I really like browsing Reddit...but I'll trick everyone else into thinking I'd rather be studying so I get more respect. Ha ha! The fools will never see it coming!"

I do (well, not Reddit ... I'm supposed to be studying now, in fact.) OTOH, when I genuinely decide "OK, enough, we actually need to study now" then I feel akrasia, which is quite a different sensation to deciding "eh, I can manage without studying as long as people expect me to."

So, y'know, evidence.

comment by PhilGoetz · 2009-08-09T15:14:12.963Z · LW(p) · GW(p)

Nice.

Replies from: gwern
comment by gwern · 2009-08-10T06:46:27.084Z · LW(p) · GW(p)

That's what the votes are for, man.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-17T20:31:11.970Z · LW(p) · GW(p)

Votes are anonymous.