Cached Selves

post by AnnaSalamon · 2009-03-22T19:34:19.719Z · LW · GW · Legacy · 81 comments

by Anna Salamon and Steve Rayhawk (joint authorship)

Related to: Beware identity

Update, 2021: I believe a large majority of the priming studies failed replication, though I haven't looked into it in depth. I still personally do a great many of the "possible strategies" listed at the bottom; and they subjectively seem useful to me; but if you end up believing that, it should not be on the basis of the claimed studies.

A few days ago, Yvain introduced us to priming, the effect where, in Yvain’s words, "any random thing that happens to you can hijack your judgment and personality for the next few minutes."

Today, I’d like to discuss a related effect from the social psychology and marketing literatures: “commitment and consistency effects”, whereby any random thing you say or do in the absence of obvious outside pressure, can hijack your self-concept for the medium- to long-term future.

To sum up the principle briefly: your brain builds you up a self-image. You are the kind of person who says, and does... whatever it is your brain remembers you saying and doing.  So if you say you believe X... especially if no one’s holding a gun to your head, and it looks superficially as though you endorsed X “by choice”... you’re liable to “go on” believing X afterwards.  Even if you said X because you were lying, or because a salesperson tricked you into it, or because your neurons and the wind just happened to push in that direction at that moment.

For example, if I hang out with a bunch of Green Sky-ers, and I make small remarks that accord with the Green Sky position so that they’ll like me, I’m liable to end up a Green Sky-er myself.  If my friends ask me what I think of their poetry, or their rationality, or of how they look in that dress, and I choose my words slightly on the positive side, I’m liable to end up with a falsely positive view of my friends.  If I get promoted, and I start telling my employees that of course rule-following is for the best (because I want them to follow my rules), I’m liable to start believing in rule-following in general.

All familiar phenomena, right?  You probably already discount other people’s views of their friends, and you probably already know that other people mostly stay stuck in their own bad initial ideas.  But if you’re like me, you might not have looked carefully into the mechanisms behind these phenomena.  And so you might not realize how much arbitrary influence commitment and consistency effects are having on your own beliefs, or how you can reduce that influence.  (Commitment and consistency isn’t the only mechanism behind the above phenomena; but it is a mechanism, and it’s one that’s more likely to persist even after you decide to value truth.)

Consider the following research.

In the classic 1959 study by Festinger and Carlsmith, test subjects were paid to tell others that a tedious experiment had been interesting.  Those who were paid $20 to tell the lie continued to believe the experiment boring; those paid a mere $1 to tell the lie were liable later to report the experiment interesting.  The theory is that the test subjects remembered calling the experiment interesting, and either:

  1. Honestly figured they must have found the experiment interesting -- why else would they have said so for only $1?  (This interpretation is called self-perception theory.), or
  2. Didn’t want to think they were the type to lie for just $1, and so deceived themselves into thinking their lie had been true.  (This interpretation is one strand within cognitive dissonance theory.)

In a follow-up, Jonathan Freedman used threats to convince 7- to 9-year old boys not to play with an attractive, battery-operated robot.  He also told each boy that such play was “wrong”.  Some boys were given big threats, or were kept carefully supervised while they played -- the equivalents of Festinger’s $20 bribe.  Others were given mild threats, and left unsupervised -- the equivalent of Festinger’s $1 bribe.  Later, instead of asking the boys about their verbal beliefs, Freedman arranged to test their actions.  He had an apparently unrelated researcher leave the boys alone with the robot, this time giving them explicit permission to play.  The results were as predicted.  Boys who’d been given big threats or had been supervised, on the first round, mostly played happily away.  Boys who’d been given only the mild threat mostly refrained.  Apparently, their brains had looked at their earlier restraint, seen no harsh threat and no experimenter supervision, and figured that not playing with the attractive, battery-operated robot was the way they wanted to act.

One interesting take-away from Freedman’s experiment is that consistency effects change what we do -- they change the “near thinking” beliefs that drive our decisions -- and not just our verbal/propositional claims about our beliefs.  A second interesting take-away is that this belief-change happens even if we aren’t thinking much -- Freedman’s subjects were children, and a related “forbidden toy” experiment found a similar effect even in pre-schoolers, who just barely have propositional reasoning at all.

Okay, so how large can such “consistency effects” be?  And how obvious are these effects -- now that you know the concept, are you likely to notice when consistency pressures change your beliefs or actions?

In what is perhaps the most unsettling study I’ve heard along these lines, Freedman and Fraser had an ostensible “volunteer” go door-to-door, asking homeowners to put a big, ugly “Drive Safely” sign in their yard.  In the control group, homeowners were just asked, straight-off, to put up the sign.  Only 19% said yes.  With this baseline established, Freedman and Fraser tested out some commitment and consistency effects.  First, they chose a similar group of homeowners, and they got a new “volunteer” to ask these new homeowners to put up a tiny, three-inch “Drive Safely” sign; nearly everyone said yes.  Two weeks later, the original volunteer came along to ask about the big, badly lettered signs -- and 76% of the group said yes, perhaps moved by their new self-image as people who cared about safe driving.  Consistency effects were working.

The unsettling part comes next; Freedman and Fraser wanted to know how apparently unrelated the consistency prompt could be.  So, with a third group of homeowners, they had a “volunteer” for an ostensibly unrelated non-profit ask the homeowners to sign a petition to “keep America beautiful”.  The petition was innocuous enough that nearly everyone signed it.  And two weeks later, when the original guy came by with the big, ugly signs, nearly half of the homeowners said yes -- a significant boost above the 19% baseline rate.  Notice that the “keep America beautiful” petition that prompted these effects was: (a) a tiny and un-memorable choice; (b) on an apparently unrelated issue (“keeping America beautiful” vs. “driving safely”); and (c) two weeks before the second “volunteer”’s sign request (so we are observing medium-term attitude change from a single, brief interaction).

These consistency effects are reminiscent of Yvain’s large, unnoticed priming effects -- except that they’re based on your actions rather than your sense-perceptions, and the influences last over longer periods of time.  Consistency effects make us likely to stick to our past ideas, good or bad.  They make it easy to freeze ourselves into our initial postures of disagreement, or agreement.  They leave us vulnerable to a variety of sales tactics.  They mean that if I’m working on a cause, even a “rationalist” cause, and I say things to try to engage new people, befriend potential donors, or get core group members to collaborate with me, my beliefs are liable to move toward whatever my allies want to hear.

What to do?

Some possible strategies (I’m not recommending these, just putting them out there for consideration):

  1. Reduce external pressures on your speech and actions, so that you won’t make so many pressured decisions, and your brain won’t cache those pressure-distorted decisions as indicators of your real beliefs or preferences.  For example:
    • 1a.  Avoid petitions, and other socially prompted or incentivized speech.  Cialdini takes this route, in part.  He writes: “[The Freedman and Fraser study] scares me enough that I am rarely willing to sign a petition anymore, even for a position I support.  Such an action has the potential to influence not only my future behavior but also my self-image in ways I may not want.”
    • 1b.  Tenure, or independent wealth.
    • 1c.  Anonymity.
    • 1d.  Leave yourself “social lines of retreat”: avoid making definite claims of a sort that would be embarrassing to retract later.  Another tactic here is to tell people in advance that you often change your mind, so that you’ll be under less pressure not to.
  2. Only say things you don’t mind being consistent with.  For example:
    • 2a.  Hyper-vigilant honesty.  Take care never to say anything but what is best supported by the evidence, aloud or to yourself, lest you come to believe it.
    • 2b.  Positive hypocrisy.  Speak and act like the person you wish you were, in hopes that you’ll come to be them.  (Apparently this works.)
  3. Change or weaken your brain’s notion of “consistent”.  Your brain has to be using prediction and classification methods in order to generate “consistent” behavior, and these can be hacked.
    • 3a.  Treat $1 like a gun.  Regard the decisions you made under slight monetary or social incentives as like decisions you made at gunpoint -- decisions that say more about the external pressures you were under, or about random dice-rolls in your brain, than about the truth.  Take great care not to rationalize your past actions.
    • 3b.  Build emotional comfort with lying, so you won’t be tempted to rationalize your last week’s false claim, or your next week’s political convenience.  Perhaps follow Michael Vassar’s suggestion to lie on purpose in some unimportant contexts.
    • 3c.  Reframe your past behavior as having occurred in a different context, and as not bearing on today’s decisions.  Or add context cues to trick your brain into regarding today's decision as belonging to a different category than past decisions.  This is, for example, part of how conversion experiences can help people change their behavior.  (For a cheap hack, try traveling.)
    • 3d.  More specifically, visualize your life as something you just inherited from someone else; ignore sunk words as you would aspire to ignore sunk costs.
    • 3e.  Re-conceptualize your actions into schemas you don’t mind propagating.  If you’ve just had some conversations and come out believing the Green Sky Platform, don’t say “so, I’m a green sky-er”.  Say “so, I’m someone who changes my opinions based on conversation and reasoning”.  If you’ve incurred repeated library fines, don’t say “I’m so disorganized, always and everywhere”.  Say “I have a pattern of forgetting library due dates; still, I’ve been getting more organized with other areas of my life, and I’ve changed harder habits many times before.”
  4. Make a list of the most important consistency pressures on your beliefs, and consciously compensate for them.  You might either consciously move in the opposite direction (I know I’ve been hanging out with singularitarians, so I somewhat distrust my singularitarian impressions) or take extra pains to apply rationalist tools to any opinions you’re under consistency pressure to have.  Perhaps write public or private critiques of your consistency-reinforced views (though Eliezer notes reasons for caution with this one).
  5. Build more reliably truth-indicative types of thought.  Ultimately, both priming and consistency effects suggest that our baseline sanity level is low; if small interactions can have large, arbitrary effects, our thinking is likely pretty arbitrary to begin with.  Some avenues of approach:
    • 5a.  Improve your general rationality skill, so that your thoughts have something else to be driven by besides your random cached selves.  (It wouldn’t surprise me if OB/LW-ers are less vulnerable than average to some kinds of consistency effects.  We could test this.)
    • 5b.  Take your equals’ opinions as seriously as you take the opinions of your ten-minutes-past self.  If you often discuss topics with a comparably rational friend, and you two usually end with the same opinion-difference you began with, ask yourself why. An obvious first hypothesis should be “irrational consistency effects”: maybe you’re holding onto particular conclusions, modes of analysis, etc., just because your self-concept says you believe them.
    • 5c.  Work more often from the raw data; explicitly distrust your beliefs about what you previously saw the evidence as implying.  Re-derive the wheel, animated by a core distrust in your past self or cached conclusions.  Look for new thoughts.

81 comments

Comments sorted by top scores.

comment by Sideways · 2009-03-23T08:07:21.908Z · LW(p) · GW(p)

Another technique: thought quarantine. New ideas should have to endure an observation period and careful testing before they enter your repertoire, no matter how convincing they seem. If you adopt them too quickly, you risk becoming attached to them before you have a chance to notice their flaws.

I've found this to be particularly important with Eliezer's posts on OB and here. Robin Hanson's and most other posts are straightforward: they present data and then an interpretation. Eliezer's writing style also communicates what it feels like to believe his interpretation. The result is that after reading one of Eliezer's posts, my mind acts as though I believed what he was saying during the time I spent reading it. If I don't suspend judgment on Eliezer's ideas for a day or two while I consider them and their counter-arguments, they make themselves at home in my mind with inappropriate ease.

I won't go so far as to accuse Eliezer of practicing the dark arts--I agree that communicating experiences is worthwhile. I read OB for the quality of the prose as well as idea content, both of which stand out among blogs. But while this effect hasn't been articulated in comments as far as I know, I suspect that it contributes to the objections Robin Hanson and others have to Eliezer's style.

Replies from: Eliezer_Yudkowsky, MikeMitchell
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-23T08:37:25.347Z · LW(p) · GW(p)

I have to say, that goes a bit beyond what I intended. But that part where I communicate the experience is really important. I wonder if there's some way to make it a bit less darkish without losing the experiential communication?

Replies from: Demosthenes, Cameron_Taylor, Vladimir_Nesov
comment by Demosthenes · 2009-03-23T16:35:29.907Z · LW(p) · GW(p)

It may be time for a good Style vs Content Debate; first commenter to scream false dilemma gets a prize

comment by Cameron_Taylor · 2009-03-23T13:59:42.209Z · LW(p) · GW(p)

It would be terribly disappointing to see effective communication subordinated to efforts to avoid being labelled dark. The communication of experience serves as an effective reminder that we are reading not an infallible source of wisdom, but just some guy who writes blog posts.

I'd really prefer people spend effort trying not to give me biased information than spend that effort trying to persuade me that they aren't dark.

comment by Vladimir_Nesov · 2009-03-23T17:19:57.487Z · LW(p) · GW(p)

To work on refining and verifying a theory, one first has to understand it, to be able to formulate theoretical conclusions given possible future data or arguments. While the validity of a theory in the current form isn't certain, moving towards resolution of this uncertainty requires mastery of the theory, and it's where the communication of experience comes in.

The direct analogy is with categorization: to measure the precision of a categorization technique, one first may need to implement the categorization algorithm. Resulting categories computed by the algorithm don't claim to be the truth, they are merely labels with aspirations for predictive power.

comment by MikeMitchell · 2020-06-21T11:46:24.757Z · LW(p) · GW(p)

I actually find Eliezer's vivid style easier to consider. He often builds a narrative in which I can at least try to poke holes, holes that are more difficult for me to find in Robin's more straightforward and general style. YMMV.

comment by Steve_Rayhawk · 2009-03-23T09:29:23.390Z · LW(p) · GW(p)

We left out one strategy because we didn't have scientific support for it. But introspectively:

3. Change or weaken your brain’s notion of “consistent”. Your brain has to be using prediction and classification methods in order to generate “consistent” behavior, and these can be hacked.

  • 3f. Your brain remembers which "simple" predictor best described your decision, so change the pool of predictors for describing your decisions that your brain counts as "simple".
    • 3fi. Learn to judge yourself, not by the best inference or decision you could have made in hindsight, but by the best inference or decision you could realistically have made at the time. This way, poorly-informed or impaired past decisions are evidence of your past poor information or impairment, and are not evidence that it is hopeless to try to make a better decision now that you have better information or are less impaired. To help your brain count this standard of judgment as "simple", you may wish to make a habit of judging everyone this way.
    • 3fii. Your brain learns to predict other people's judgments by learning which systems of predictive categories other people count as "natural". If you have to predict other people's judgments a lot, your brain starts to count their predictive categories as "natural". The effect can be viral (especially with categories whose social definitions tacitly refer to Schelling focal points, pooling equilibria, or punishment of non-punishers, with a penalty for disagreeing about the boundary), and it can change how you think about yourself. Try to control social access to your brain's pool of "simple", "natural" predictive categories, and try to unpack category definitions so that when they are not simple, your brain sees how. (Or try to live so that your most intense experiences of thinking only make predictions of physical consequences, and not predictions of other people's judgments.)

On this subject: Autistic spectrum conditions and obtuse concrete literalism are sometimes a good temporary defense against other peoples' unnatural category systems, but you should have a backup plan.

comment by JulianMorrison · 2009-03-24T15:10:50.371Z · LW(p) · GW(p)

Everybody's writing about removing this effect in the future. But how about in the past?

How much of your present day self concept is delusional (or at least arbitrary)?

comment by ubershmekel · 2009-04-03T08:05:20.867Z · LW(p) · GW(p)

I liked this article, though I might change my mind about that. I’m someone who changes my opinions based on conversation and reasoning.

comment by kurige · 2009-03-23T00:08:49.824Z · LW(p) · GW(p)

Great post.

Here's some additional reading that supports your argument:

Distract yourself. You're more honest about your actions when you can't exert the mental energies necessary to rationalize your actions.

And the (subconscious) desire to avoid appearing hypocritical is a huge motivator.

I've noticed this in myself often. I faithfully watched LOST through the third season, explaining to my friends who had lost interest around the first season that it was, in fact, an awesome show. And then I realized it kind of sucked.

comment by Scott Alexander (Yvain) · 2009-03-22T21:28:11.464Z · LW(p) · GW(p)

This is a great post, especially all of the technique suggestions.

Socially Required Token Disagreement: I'm especially surprised by the "drive safely" study - and it's especially weird since "keeping America beautiful" would seem to contradict putting a big ugly sign on your front lawn. Maybe the effect wasn't through the person's support for vague feel-good propositions, but through their changed attitude to following requests by strangers knocking on their door.

Replies from: PhilGoetz, tomcatfish
comment by PhilGoetz · 2009-03-23T05:11:24.481Z · LW(p) · GW(p)

Maybe seeing the little signs on their neighbors' yards made them believe signs were acceptable in their neighborhood.

Replies from: prase
comment by prase · 2009-03-23T10:57:38.643Z · LW(p) · GW(p)

Were all the signs placed in one neighbourhood? This could partly invalidate the test, since there could be an effect of discussing the issue with neighbours and changing opinions as a result of discussions, not solely as a result of a consistency effect. The study should have avoided this by choosing homeowners far from each other. Was that the case?

comment by Alex Vermillion (tomcatfish) · 2020-10-26T09:38:15.828Z · LW(p) · GW(p)

The result is surprising to me, so, like you, I looked to see if there were any theories that required less buy-in from me (in an attempt to minimize complexity).

Do these studies make sure that the actions are desirable? I can easily fit the data into a model in which people consider "work" to be a separate category of action in their head which they neither enjoy nor question.

In the sign example, a large sign could be a reputational cost, or future work to make your yard look as nice as it did, while a small and then large sign is two chances to enjoy supporting a cause. In the payment study, it could be that people who were paid larger sums went into job-mode and performed services for money, while people paid a lesser amount looked to see whether the action was actually enjoyable and learned that it was.

This could easily be shown to be false if any of these studies paid varied sums for something very unpleasant, with the group that received the lesser amount reporting higher satisfaction.

comment by Michelle_Z · 2011-07-18T21:44:34.591Z · LW(p) · GW(p)

2b. Positive hypocrisy. Speak and act like the person you wish you were, in hopes that you’ll come to be them. (Apparently this works.)

This does work. I found that when I noticed I was quiet and didn't talk to people often, I didn't like being that way. I wanted to reach out. It took four years to break the habit, but now my friends know me to be a generally "outspoken and outgoing" person. In other words, I had an image of what I wanted to be (more outgoing), thought of what an outgoing person would do (talk to the person sitting next to me in class, get involved in a club, etc.), and tried to slowly integrate those things into my life. I think the main point, though, is that a person has to honestly want that image of themselves to become an actuality, otherwise the efforts end up falling short.

This was a very interesting post. I'm glad I saw it- it explains behaviors I have exhibited, and the behaviors of friends etc... I will definitely try to be more aware of this.

comment by Desrtopa · 2010-11-30T05:20:47.455Z · LW(p) · GW(p)

This is something I've noticed as a factor in my own behavior since I was a child. I never tried isolating myself from social influences to avoid pressure on my personality though, rather, at an early age, I consciously rejected the idea that I had a distinct "true" personality that existed irrespective of circumstance. My strategies mainly focused on creating my own pressures to conform to so that I could shape my contextual behavior in directions I wanted.

My first top level post is actually an example of this, since I created it not just to provoke discussion or enlighten, but to unpack my sense of humor. I tend to be happier in social circles where I express a sense of humor than ones where I don't, but while I could act like that whenever I want to, in practice, if I haven't set up a persona that it's consistent with, I usually don't.

I also tend to avoid anonymity online, so that I can use others' expectations of me to reinforce standards for my own behavior. I use the same online handle pretty much everywhere I go, so that I can enforce some continuity between my online personas, and so that my reputation everywhere on the internet rests on my behavior anywhere, and I can keep myself from descending into GIFT.

comment by aausch · 2009-12-31T18:41:56.659Z · LW(p) · GW(p)

Whenever you are about to make a decision which you do not care much about, one way or the other, use one of the following algorithms:

  • Flip a coin (or some other external randomized choice), and base your decision on the result
  • Explicitly avoid the choice(s) you made last time
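
A minimal sketch of these two heuristics, purely for illustration (the function name and the Python are my own, not part of the comment):

```python
import random

def low_stakes_choice(options, last_choice=None):
    """Pick among options you are indifferent between, without letting
    the pick feed a cached self-image.

    - If a previous pick is remembered, deliberately avoid repeating it.
    - Otherwise, randomize (the "flip a coin" heuristic).
    """
    candidates = [o for o in options if o != last_choice] or list(options)
    return random.choice(candidates)

# Example: a lunch spot you genuinely don't care about.
print(low_stakes_choice(["cafe", "deli", "noodle bar"], last_choice="cafe"))
```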
comment by MichaelVassar · 2009-03-23T11:53:01.035Z · LW(p) · GW(p)

Consistency effects seem to me to be the sort of error that more intelligent people might be MORE prone to, and are thus particularly important to flag.

BTW, 3e seems to me to be by far the most important of the rationality suggestions given, largely because it actually seems practical.

Replies from: AndrewKemendo
comment by AndrewKemendo · 2009-03-23T17:37:07.565Z · LW(p) · GW(p)

The reference to your quote on OB and here about finding comfort in lying is confusing to me. Do you have a better explanation of what you mean? I am not understanding it at all.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2009-03-25T00:32:59.849Z · LW(p) · GW(p)

Andrew, suppose you're under strong social pressure to say X. The very thought of failing to affirm X, or of saying not-X, makes your stomach sink in fear -- you know how your allies would respond to that, and it'd be terrible. You'd be friendless and ostracized. Or you'd hurt the feelings of someone close to you. Or you'd be unable to get people to help you with this thing you really want to do. Or whatever.

Under those circumstances, if you're careful to always meticulously tell "your sincere beliefs", you may well find yourself rationalizing, and telling yourself you "sincerely believe" X, quite apart from the evidence for X. Instead of telling lie X to others only, you'll be liable to tell lie X to others and to yourself at once. You'll guard your mind against "thoughtcrime".

OTOH, if you leave yourself the possibility of saying X while believing Y, silently, inside your own mind, you'll be more able to think through X's truth or falsity honestly, because, when "what if X were true?" flashes through the corner of your mind, "I can't think that; everyone will hate me" won't follow as closely.

Replies from: steven0461
comment by steven0461 · 2009-03-25T03:18:06.929Z · LW(p) · GW(p)

Personally I enjoy heresies too much to be worried about being biased against them, but that still leaves the problem of others responding negatively. I'm sure there are situations where lies are the least bad solution, but where possible I'd rather get good at avoiding questions, answering ambiguously or with some related opinion that you actually hold and that you know they will like, and so on. In addition to the point about eroding the moral authority of the rationalist community, I'm somewhat worried about majority-pleasing lies amplifying groupthink (emperor's new clothes, etc.), liars being socially punished and ever after distrusted if found out, and false speech seeping into false beliefs through exactly the sort of mechanism described here (will we be tempted to believe consistently with past speech? does skill at lying to others translate into skill at lying to oneself? do there exist basic disgust responses against untruth that make rationality easier and that are eroded by emotional comfort with lying?). If nothing else, I'd advise being clearly aware of it whenever you're in "black hat mode" (for example, imagine quote marks around key words).

comment by AnnaSalamon · 2009-03-23T02:34:50.648Z · LW(p) · GW(p)

Could more people please share data on how one of the above techniques, or some other technique for reducing consistency pressures, has actually helped their rationality? Or how such a technique has harmed their rationality, or has just been a waste of time? The techniques list is just a list of guesses, and while I'm planning on using more of them than I have been using... it would be nice to have even anecdotal data on what helps and doesn't help.

For example, many of you write anonymously; what effects do you notice from doing so?

Or what thoughts do you have regarding Michael Vassar's suggestion to practice lying?

Replies from: Virge, steven0461, CarlShulman, SoullessAutomaton
comment by Virge · 2009-03-24T01:27:25.819Z · LW(p) · GW(p)

Or what thoughts do you have regarding Michael Vassar's suggestion to practice lying?

(Reusing an old joke) Q: What's the difference between a creationist preacher and a rationalist? A: The rationalist knows when he's lying.

I'm having trouble resolving 2a and 3b.

2a. Hyper-vigilant honesty. Take care never to say anything but what is best supported by the evidence, aloud or to yourself, lest you come to believe it.

3b. Build emotional comfort with lying, so you won’t be tempted to rationalize your last week’s false claim, or your next week’s political convenience. Perhaps follow Michael Vassar’s suggestion to lie on purpose in some unimportant contexts.

I find myself rejecting 3b as a useful practice because:

  • What I think will be an unimportant and undetectable lie has a finite probability of being detected and considered important by someone whose confidence I value. See Entangled Truths, Contagious Lies

  • This post points out the dangers of self-delusion from motivated small lies e.g. "if I hang out with a bunch of Green Sky-ers, and I make small remarks that accord with the Green Sky position so that they’ll like me, I’m liable to end up a Green Sky-er myself." Is there any evidence to show that I'll be safer from my own lies if I deliberately tag them at the time I tell them?

  • Building rationalism as a movement to improve humanity doesn't need to be encumbered by accusations that the movement encourages dishonesty. Even though one might justify the practice of telling unimportant lies as a means to prevent a larger more problematic bias, advocating lies at any level is begging to be quote-mined and portrayed as fundamentally immoral.

  • The justification for 3b ("so you won’t be tempted to rationalize your last week’s false claim, or your next week’s political convenience.") doesn't work for me. I don't know if I'm different, but I find that I have far more respect for people (particularly politicians) who admit they were wrong.

Rather than practising being emotionally comfortable lying, I'd rather practise being comfortable with acknowledging fallibility.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2009-03-25T01:02:39.278Z · LW(p) · GW(p)

I'm having trouble resolving 2a and 3b.

I did preface my list with “I’m not recommending these, just putting them out there for consideration”. 2a and 3b contradict one another in the sense that one cannot fully practice both 2a and 3b; but each is worth considering. Also, many of us could do more of both 2a and 3b than we currently do -- we could be more careful to really only ever tell ourselves what’s best supported by the evidence (rather than pleasant rationalizations), and to mostly only say this to others as well, while also making the option of lying more cognitively available.

Is there any evidence to show that I'll be safer from my own lies if I deliberately tag them at the time I tell them?

There’s good evidence that people paid $20 to lie were less likely to believe their lie than people paid a mere $1 to lie. And similarly in a variety of other studies: people under strong, visible external pressure to utter particular types of speech are less likely to later believe that speech. It’s plausible, though not obvious, that people who see themselves as intentionally manipulating others, as continually making up contradictory stories, etc. will also be less likely to take their own words as true.

Building rationalism as a movement to improve humanity doesn't need to be encumbered by accusations that the movement encourages dishonesty.

I agree this is a potential concern.

Rather than practising being emotionally comfortable lying, I'd rather practise being comfortable with acknowledging fallibility.

Vassar’s suggestion isn’t designed to help one avoid noticing one’s own past mistakes. That one really wouldn’t work for a rationalist. It’s designed to let you seriously consider ideas that others may disapprove of, while continuing to function in ordinary social environments, i.e. social environments that may demand lip service to said ideas. See my comment here.

comment by steven0461 · 2009-03-23T12:55:03.666Z · LW(p) · GW(p)

For example, many of you write anonymously; what effects do you notice from doing so?

I've noticed that when I'm anonymous, I avoid expressing most of the opinions I'd avoid expressing when I'm not anonymous, but because they might be seen to reflect badly on my other beliefs rather than on me. I should probably try a throwaway account, but among other things I'd still worry about how it might reflect on the community of rationalists in general.

I haven't noticed consistency pressures as such ever affecting me much, but I should watch more closely and the list of suggested solutions looks great.

Or what thoughts do you have regarding Michael Vassar's suggestion to practice lying?

Out of the question. Lying is illegal where I live.

Replies from: thomblake
comment by thomblake · 2009-04-02T14:43:18.365Z · LW(p) · GW(p)

Where is lying illegal? That sounds so terribly illiberal that I'd like to avoid even visiting there, if possible.

Replies from: AllanCrossman
comment by AllanCrossman · 2009-04-02T14:51:02.984Z · LW(p) · GW(p)

I think he was lying.

comment by CarlShulman · 2009-03-23T02:59:40.045Z · LW(p) · GW(p)

I use 2a when socializing with multiple polarized ideological camps, e.g. libertarians and social democrats. I can criticize particular fallacies and rationality errors of the other camp in terms of failure to conform to general principles, and when I do this I think of the ways in which my current interlocutors' camp also abuses those principles. I find that doing this helps me keep my reactions more level than if I mention errors in terms of idiosyncratic problems (e.g. specific interest groups associated with only one faction).

comment by SoullessAutomaton · 2009-03-23T03:09:48.781Z · LW(p) · GW(p)

For example, many of you write anonymously; what effects do you notice from doing so?

Within this community? Virtually none.

There is a difference between pseudonymity and anonymity. I may not attach my real name to posts here, but I would be deluding myself to think I could disregard external social pressures from within the communities where this handle is used. True anonymity is a very different beast.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2009-03-23T03:11:28.615Z · LW(p) · GW(p)

Thanks for the info. Have you tried writing with throw-away handles? Do you find you think differently under those circumstances?

Replies from: SoullessAutomaton, CarlShulman
comment by SoullessAutomaton · 2009-03-23T03:28:20.592Z · LW(p) · GW(p)

I am inclined to say that there's an increased tendency to regard dialogue as impersonal, as each truly anonymous post feels more like a one-shot contribution with no expectation of consistency or interaction, and a reduced tendency to identify directly with what you write. e.g., I may be more likely to post what I'm thinking in the moment without worrying about having to defend the position, or caring if I change my mind later. However, I strongly suspect there are confounding factors and I don't make a frequent habit of posting anonymously (Robin would probably suggest that I am too driven by status-seeking) so I can't speak terribly authoritatively on whether these impressions are accurate or if I'm repeating what I think "ought" to be the case.

For instance, one possible distracting issue is that group consensus seems to me more persuasive with increasingly anonymous discussions, and the resulting undercurrent of mob mentality presents an entirely different failure of rationality.

comment by CarlShulman · 2009-03-23T03:27:57.847Z · LW(p) · GW(p)

I've used throwaway handles to argue for views that I'm not convinced of, both to shake myself out of consistency pressures/confirmation bias and to elicit good criticism. I find that the latter is particularly helpful.

comment by B_Frank · 2009-03-24T03:12:42.525Z · LW(p) · GW(p)

How can you tell whether one's self might be getting hijacked or if it's getting rescued from a past hijacking?

E.g. I've been a long-time OB reader but took a couple of months off (part of a broader tactic to free myself of a possible RSS info addiction, and also to build some more connections with local people & issues via Twitter). I brought OB back into my daily reading list last week, read a few of Robin's posts and wondered where Eliezer was at...

Now I find myself here at LW, articulating thoughts to myself as I read and catch up, feeling impelled to comment... After a couple of hours I find I'm saying to myself that this is really great and I should rearrange my daily routine yet again -- maybe cut down on the Twitter use -- to spend regular time here becoming more rational.

So: hijacked by LW or rescued from Twitter? Are there any objective measures that could be used?

(I'm relating a personal experience but I don't want to give the impression I'm just looking for help with this particular situation. I'm wondering more generally.)

Replies from: JenniferRM
comment by JenniferRM · 2010-05-02T23:53:41.971Z · LW(p) · GW(p)

I think you've stated a major part of the solution at the end of your own post in the form of a question. That is: seek objective measures to help discriminate between options.

A lot of specific cognitive biases are related to framing effects where some idea becomes anchored in a person's mind and is subsequently adjusted until it feels "good enough". A nice way to cut through this, where feasible, is to seek some common numerical basis for discriminating between options and independently estimating their value. Investments and bets are the standard place for this.

If you don't want to drop down to dollars per minute or a similarly banal measure you could specify more abstract goals. For example, you could aim to "practice thinking clearly and at length" as one goal, but another might be "exposure to novel trends". Then compare each course of action against the goals and try to figure out what kind of "reading diet" would accomplish your specified need the best. It's distinctly possible that if you took this more systematic approach you'd realize some totally different plan would make a lot more sense... certain kinds of learning might work better if you visit a library with a clever plan for consciously exploratory reading, but other goals might lead you to meetup.com to find local people you could relate to in a much more personally nourishing way.

In line with the original post, seeing yourself as someone who stops to attempt "objective valuations" before committing to a plan would probably be a useful "positive habit" to cultivate that might help to cut through some of the fog inherent to impulse driven behavior :-)

comment by Cameron_Taylor · 2009-03-23T13:21:00.993Z · LW(p) · GW(p)

In a follow-up, Jonathan Freedman used threats to convince 7- to 9-year old boys not to play with an attractive, battery-operated robot. He also told each boy that such play was “wrong”.

Along the lines of 3a, I consider the word 'wrong', and all variations, as if they were uttered at gunpoint. From my own observation the difference between the two is little more than degree. Describing (usually internally) such social influences in terms of raw power allows them to be navigated without having my self concept warped into undesirable patterns or riddled with undesired natural categories.

comment by SoullessAutomaton · 2009-03-23T02:57:55.235Z · LW(p) · GW(p)

This reminds me heavily of some studies I've read about pathological cases involving, e.g., split-brain patients or those with right-hemisphere injuries, wherein the patient will rationalize things they have no conscious control over. For instance, the phenomenon of Anosognosia as mentioned in this Less Wrong post.

The most parsimonious hypothesis seems, to me at least, that long-term memory uses extremely lossy compression, recording only rough sketches of experience and action, and that causal relationships and motivations are actually reconstructed on the fly by the rationalization systems of the brain. This also fits well with this post about changing opinions on Overcoming Bias.

I think I have seen a similar argument made in a research paper on cognitive neuroscience or some related field, but I can't seem to find it.

As someone (Heinlein, I think?) said: "Man is not a rational animal, he is a rationalizing animal."

Replies from: pjeby
comment by pjeby · 2009-03-23T03:36:53.842Z · LW(p) · GW(p)

I think it's more accurate to say that memory is not for remembering things. Memory is for making predictions of the future, so our brains are not optimized for remembering exact sequences of events, only the historical probability of successive events, at varying levels of abstraction. (E.g. pattern X is followed by pattern Y 80% of the time).

This is pretty easy to see when you add in the fact that emotionally-significant events involving pleasure or pain are also more readily recalled; in a sense, they're given uneven weight in the probability distribution.

This simple probability-driven system is enough to drive most of our actions, while the verbal system is used mainly to rationalize our actions to ourselves and others. The only difference between us and split-brain or anosognosiacs, is that we don't rationalize our externalities as much.... but we still rationalize our own thoughts and actions, in order to perpetuate the idea that we DO control them. (When in fact, we mostly don't even control our thoughts, let alone our actions.)

Anyway, the prediction basis is why it's so hard to remember if you locked the door this time -- your brain really only cares if you usually lock the door, not whether you did it this time. (Unless there was also something unusual that happened when you locked the door this time.)

Replies from: SoullessAutomaton, GuySrinivasan
comment by SoullessAutomaton · 2009-03-23T11:19:07.171Z · LW(p) · GW(p)

You make an excellent point here, I think. It seems clear that we actually remember far less, and in less detail, than we think we do.

but we still rationalize our own thoughts and actions, in order to perpetuate the idea that we DO control them. (When in fact, we mostly don't even control our thoughts, let alone our actions.)

However, I'm not sure I agree with the connotations of "in order to perpetuate" here. The evidence seems to me to indicate that the rationalization systems are a subconsciously automatic and necessary part of the human mind, to fill in the gaps of memory and experience.

The question is, as rationalists, what can we do to counteract the known failure modes of this system? The techniques outlined in the main post here are a good start, at least.

Replies from: Eliezer_Yudkowsky, Demosthenes
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-23T11:21:15.206Z · LW(p) · GW(p)

Not to mention that evolution is not going to design a system in order to make you feel like you are in control. That's more psychoanalytic than evolutionary-biological, I should think.

Replies from: pjeby
comment by pjeby · 2009-03-23T16:13:42.640Z · LW(p) · GW(p)

Sorry, I should've been clearer: it's in order to perpetuate this illusion to OTHER people, not to yourself. Using Robin's terms of "near" and "far", the function of verbal rationalization is to convince other people that you're actually making decisions based on the socially-shared "far" values, rather than your personal "near" values.

The fact that this rationalization also deludes you is only useful to evolution insofar as it makes you more convincing.

If language initially evolved as a command system, where hearing words triggered motor actions directly (and there is some evidence for this), then it's likely you'd end up with an arms race of people exploiting others verbally, and needing defenses against the exploits. Liars and persuaders, creating the need for disbelief, skepticism, and automatic awareness of self-interested (or ally-interested) reasoning.

But this entire "persuasion war" could (and very likely did) operate largely independent of the existing ("near") decision-making facilities. You'd only evolve a connection between the Savant and the Gossip (as I call the two intelligences) to the extent that you need near-mode information for the Gossip to do its job.

But... get this: part of the Gossip's arsenal is its social projection system... the ability to infer attitudes based on behavior. Simply self-applying that system, i.e., observing your own actions and experience, and then "rationalizing" them, gives you an illusion of personal consciousness and free will.

And it even gives you attribution error, simply because you have more data available about what was happening at the time the Savant actually made a decision.... even though the real reasons behind the Savant's choice may be completely opaque to you.

So, psychoanalysis gets repression completely wrong. As you say, evolution doesn't care what you think or feel. It's all about being able to "spin" things to others, and to do that, your Gossip operates on a need-to-know basis: it only bothers to ask the Savant for data that will help its socially-motivated reasoning.

Most of what I teach people to do -- and hell, much of what you teach people to do -- is basically about training the Gossip to ask the Savant better questions... and more importantly, getting it to actually pay attention to the answers, instead of confabulating its own.

Replies from: Emile
comment by Emile · 2009-03-23T17:22:39.868Z · LW(p) · GW(p)

Minor terminology quibble: I'm not very fond of the terms "savant" and "gossip", I can't really tell "which is which" :P

Savant = near mind, "subconscious mind", "horse brain"/"robot brain" (using your other terminology)

Gossip = far mind, "conscious mind", "monkey brain"

... though I also see the problem with re-using more accepted terms like "subconscious mind" - people already have a lot of ideas of what those mean, so starting with new terminology can work better.

Replies from: pjeby
comment by pjeby · 2009-03-23T18:56:32.504Z · LW(p) · GW(p)

I can't really tell "which is which"

Yeah, when I officially write these up, I'll describe them as characters... thereby reusing the Gossip's "character recognition" technology. ;-) That is, I'll tell a story or two illustrating their respective characters.

Or maybe I'll just borrow the story of Rain Man, since Dustin Hoffman played an autistic savant, and Tom Cruise played a very status-oriented (i.e. "gossipy") individual, and the story was about the Gossip learning to appreciate and pay attention to the Savant. ;-)

... though I also see the problem with re-using more accepted terms like "subconscious mind" - people already have a lot of ideas of what those mean, so starting with new terminology can work better.

Right, and the same thing goes for left/right brain, etc. What's more, terms like Savant and Gossip can retain their conceptual and functional meaning even as we improve our anatomical understanding of where these functions are located. Really, for purposes of using the brain, it doesn't ordinarily matter where each function is located, only that you be able to tell which ones you're using, so you can learn to use the ones that work for the kinds of thinking you want to do.

comment by Demosthenes · 2009-03-23T16:44:48.231Z · LW(p) · GW(p)

Has anyone brought up this study by Bruner and Potter (1964) before? I think it would relate to intertemporal beliefs and how we sometimes perceive them to be more sound than they really are:

http://www.ahs.uwaterloo.ca/~kin356/bpdemo.htm

In this demonstration, you will see nine different pictures. The pictures will get clearer and clearer. Make a guess as to what is being shown for each of the pictures, and write down your guess. Note the number of the picture where you were first able to recognize what was being shown. Then go backwards - press the "BACK" button on the browser - and see at which point you can no longer identify the picture. Are your "ascending" and "descending" points the same?

========

IF YOU HAVE TRIED THE STUDY:

Pictures of common objects, coming slowly into focus, were viewed by adult observers. Recognition was delayed when subjects first viewed the pictures out of focus. The greater or more prolonged the initial blur, the slower the eventual recognition. Interference may be accounted for partly by the difficulty of rejecting incorrect hypotheses based on substandard cues.

It would be interesting to think of your intertemporal frame of mind as discontinuous and running at 24 frames per second (like a film). Maybe your consciousness gives your sense of beliefs a false sense of flowing like a movie from one time state to the next.

comment by SarahSrinivasan (GuySrinivasan) · 2009-03-23T04:53:13.977Z · LW(p) · GW(p)

This idea feels very, very true to me, and I am surprised I can't remember seeing it before. Do you have any cites I should read to squash my "what if it's a just-so story" feelings?

Replies from: pjeby
comment by pjeby · 2009-03-23T15:49:32.827Z · LW(p) · GW(p)

Um, anything ever written about how human memory works? ;-) (I assume you're referring to the idea that "memory is not for remembering things". The idea is just my own way of making sense of the "flaws" in human memory... and realizing that they aren't flaws at all, from evolution's point of view.)

Replies from: steven0461
comment by steven0461 · 2009-03-23T15:52:00.991Z · LW(p) · GW(p)

It shouldn't theoretically be the case that false beliefs lead to better predictions than true beliefs, so I guess when memory doesn't optimize for accuracy, there has to be a different bias that it's canceling out?

(edited to add something that needs to be said from time to time: when I say "theoretically" I don't mean "according to the correct theory", but "according to a simple and salient theory that isn't exactly right")

Replies from: pjeby
comment by pjeby · 2009-03-23T16:27:56.101Z · LW(p) · GW(p)

False beliefs lead to better predictions if they keep you safe. The probability of being attacked by a crocodile at the riverbank might be low, but this doesn't mean you shouldn't act as if you're going to be attacked.

Perhaps I should have emphasized the part where the predictions are for the purpose of making decisions. Really, you could say that memory IS a decision-making system, or at least a decision-support database. What we store for later recall, and what we recall, are based on what evolutionarily "works", rather than on theoretically-correct probabilities. Evolution is a biased Bayesian, because some probabilities matter more than others.
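
A toy expected-cost comparison, with numbers made up purely for illustration, makes the asymmetry in the crocodile example explicit:

```python
# Acting "as if" the riverbank is dangerous can be the better policy even when
# the probability of attack is low, because the costs are wildly asymmetric.
p_attack = 0.01        # assumed probability of a crocodile attack (illustrative)
cost_attack = 1000.0   # assumed cost of being attacked (illustrative)
cost_caution = 1.0     # assumed cost of always keeping your distance (illustrative)

expected_cost_careless = p_attack * cost_attack  # 10.0
expected_cost_cautious = cost_caution            # 1.0
print(expected_cost_careless > expected_cost_cautious)  # True: caution wins
```

On this view, the distortion evolution builds in is a shortcut to the same policy without the explicit calculation.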

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-03-23T18:13:12.952Z · LW(p) · GW(p)

You may afford forgetting about the sky's color and may not afford forgetting about poisonous snakes, but that doesn't mean you should increase your probability estimate of encountering a poisonous snake, or that you should decrease the probability of the sky being blue. Some parts of the map are known to have different importance, but that doesn't make it a good idea to systematically distort the picture.

Replies from: pjeby
comment by pjeby · 2009-03-23T18:50:28.219Z · LW(p) · GW(p)

Er, what does "should" mean, here? My comments in this thread are about how brains actually work, not how we might prefer them to work.

Bear in mind that evolution doesn't get to do "should" - it does "what works now". If you have to evolve a working system, it's easier to start by using memory as a direct activation system. To consider probabilities in the way you seem to be describing, you have to have something that then evaluates those probabilities. It's a lot simpler to build a single mechanism that incorporates both the probabilities and the decision-making strategy, all rolled into one.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-03-23T20:10:50.808Z · LW(p) · GW(p)

Sure, but in this case you can't easily interpret that strange combined decision-making mechanism in terms of probabilities. Probabilities-utilities is a mathematical model that we understand, unlike the semantics of the brain's workings. The model can be used explicitly to correct the intuitively drawn decisions, so it's a good idea to at least intuitively learn to interface between these modes.

In conclusion, the "should" refers to how you should strive to interpret your memory in terms of probabilities. If you know that in certain situations you are overvaluing the probabilities of events, you should try to correct for the bias. If your mind says "often!", and you know that in situations like this your mind lies, then "often!" means rarely.

Replies from: pjeby
comment by pjeby · 2009-03-23T21:48:52.311Z · LW(p) · GW(p)

Probabilities-utilities is a mathematical model that we understand, unlike the semantics of brain's workings.

I prefer to work harder on understanding the brain's semantics, since we don't really have the option of replacing them at the moment.

In conclusion, the "should" refers to how you should strive to interpret your memory in terms of probabilities.

That makes it sound like I have a choice. In practice, only when I have time to reflect, do I have the option of "interpreting" my memory.

Under normal circumstances, we act in ways that are directly determined by the contents of our memories, without any intermediary. It's only the verbal rationalizations of the Gossip that make it sound like we could have chosen differently.

Thus, I benefit more from altering the memories that generate my actions, in order to produce the desired behaviors automatically.... instead of trying to run every experience in my life through a "rational" filtering process.

If your mind tells "often!", and you know that in situations like this your mind lies, then "often!" means rarely.

That's only relevant insofar as how it relates to my choice of actions. I don't care what "right" is - I care what the right thing to do is. So in that at least, I agree with my brain. ;-)

But my care is more for what goes in, and changing what's currently stored, than for trying to correct things on the fly as they come out. The way most of our biases manifest, they affect what goes into the cache, more than they affect what comes out. And that means we have the option of implementing "software patches" for the bugs the hardware introduces, instead of needing to do manual workarounds, or wait for a hardware upgrade capability.

comment by jooyous · 2013-01-21T21:56:47.304Z · LW(p) · GW(p)

Oh man. I already knew about this effect when I spent the summer in Atlanta. But I am not very good under social pressure, so when all the panhandling gentlemen recognized my clueless wandering around the streets and started demanding a dollar or bus fare, I felt like I had to give it to them. (I did this until I ran out of cash, and then I just said "Sorry! No cash! I need to catch the bus! Bye!") One guy asked for 80 cents for water from the vending machine, so I offered him my water. To which he promptly replied that he would like my water in addition to 80 cents because, "well, I live in a shelter." It's true! Kind of! I don't live in a shelter! Anyways, I am concerned that this experience has turned me more liberal than I would be otherwise. Even though I can sort of tell that I shouldn't be. Or rather, that it shouldn't be from this experience. Social pressure be damned!

So. How do I unbias myself? =]

comment by Cameron_Taylor · 2009-03-23T12:38:33.765Z · LW(p) · GW(p)

For example, if I hang out with a bunch of Green Sky-ers, and I make small remarks that accord with the Green Sky position so that they’ll like me, I’m liable to end up a Green Sky-er myself.

Courtesy reference: A Fable of Science and Politics describes a conflict between two fictional underground factions, who were in disagreement regarding the appearance of the sky, long lost to them.

comment by Johnicholas · 2009-03-23T04:33:37.906Z · LW(p) · GW(p)

This post seems to imply that the self-consistency bias is irrational, but it doesn't argue strongly that becoming more self-inconsistent leads to better outcomes. In fact, it hints that the self-consistency bias is strong and natural, which would suggest that it might have been beneficial in the EEA.

For example, it may be more beneficial to consistently carry through plans than to switch to the best-appearing alternative at every step.

Another idea: possibly the tendency to incorporate small decisions into one's self-concept is a way of seeking individuation; being an unusual person may be advantageous in some way.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2009-03-23T06:05:05.898Z · LW(p) · GW(p)

I agree that visible consistency can bring social benefits. I also agree that sticking with a single plan (which is a type of consistency) is in some cases preferable to constantly re-examining the plan, given the time costs and other costs involved in such examination.

That said, it is fairly clear that if you're aiming to have accurate beliefs, your past claims and actions aren't any more relevant than are the claims and actions of someone else with comparable rationality and domain knowledge. And it is fairly clear that most of us stick with our beliefs (vs. the beliefs of comparable others that we know) even in cases where our aim is truth, and that our tendency to hold onto our own past beliefs (vs. those of comparable others) impairs our access to truth. So I’d like to become, not “more self-inconsistent” in a generic sense, but more able to update my beliefs in useful directions -- and updating requires changing beliefs.

As to actions... you make a good point there that the benefits of consistency and inconsistency are not obvious and that I glossed over the question above. It’s a good question to highlight. To give some rough reasoning now: I and others seem, in my observation, to waste inordinate amounts of resources sticking with the particular action-patterns we’ve used in the past, instead of experimenting with the patterns others are using. We drive with whatever driving habits we picked up in our first two years, notwithstanding any risk to our lives; we use whatever habits of social interaction we happen to have picked up, whether or not they help us learn from others and form good relationships; we go about our work according to particular patterns; we come up with self-justifying stories for why our past mistakes must have been optimal, and must be the right way to respond to our present choices. Identity, and self-consistency, are part of this story. It’s a part I’d like to reduce in myself, and in anyone whose epistemic accuracy and practical effectiveness I care about. But this is admittedly not a full analysis of the merits of self-consistency heuristics.

comment by robzahra · 2009-03-23T00:28:08.158Z · LW(p) · GW(p)

Agree with and like the post. Two related avenues for application:

  1. Using this effect to accelerate one's own behavior modification by making commitments in the direction of the type of person one wants to become. (e.g. donating even small amounts to SIAI to view oneself as rationally altruistic, speaking in favor of weight loss as a way to achieve weight loss goals, etc.). Obviously this would need to be used cautiously to avoid cementing sub-optimal goals.

  2. Memetics: Applying these techniques to others may help them adopt your goals without your needing to explicitly push them too hard. Again, caution and foresight advisable.

comment by dclayh · 2009-03-24T22:35:34.053Z · LW(p) · GW(p)

This phenomenon is interesting from the rationalist perspective because it has three separate effects: (1) making us believe false things about our past mental states (or even false things about the world); (2) creating a disconnect between why we claim/believe we are saying or doing something, and why we actually are; and (3) the change in our behaviors/desires themselves. While (1) and (2) clearly represent decreases in rationality/sanity, what can we say about (3)? Don't we all believe Hume around here?

comment by PhilGoetz · 2009-03-23T05:10:14.237Z · LW(p) · GW(p)

These consistency effects are reminiscent of Yvain’s large, unnoticed priming effects -- except that they’re based on your actions rather than your sense-perceptions,

If they wanted to show that, they should have had a control group that observed other people taking those actions.

Observing yourself doing something is still observing it, and priming could still account for the results.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2009-03-23T05:13:26.568Z · LW(p) · GW(p)

Priming effects are supposed to last for minutes, not for the weeks that the effects lasted in the petition experiment and the forbidden toy experiment.

comment by MichaelGR · 2009-03-23T02:16:41.170Z · LW(p) · GW(p)

I think this is a great post.

Really made me think about how this might apply to me, and I've already decided to make a few changes based on some of your suggestions (mostly in how I phrase things when describing myself).

Since we are social creatures, I wonder if these consistency effects are stronger when they arise in group situations. Does our brain try harder to stay consistent with an identity that makes us part of a group rather than an "individual" identity?

This certainly could explain a few things about how solid political/tribal/religious identity is, not to mention the most intense kinds of metal-heads, comic book geeks, browncoats, free software evangelists, etc (all types of hardcore fans that build their identity around what they happen to enjoy).

comment by A1987dM (army1987) · 2012-12-23T18:58:02.695Z · LW(p) · GW(p)

The unsettling part comes next; Freedman and Fraser wanted to know how apparently unrelated the consistency prompt could be. So, with a third group of homeowners, they had a “volunteer” for an ostensibly unrelated non-profit ask the homeowners to sign a petition to “keep America beautiful”. The petition was innocuous enough that nearly everyone signed it. And two weeks later, when the original guy came by with the big, ugly signs, nearly half of the homeowners said yes -- a significant boost above the 19% baseline rate. Notice that the “keep America beautiful” petition that prompted these effects was: (a) a tiny and un-memorable choice; (b) on an apparently unrelated issue (“keeping America beautiful” vs. “driving safely”); and (c) two weeks before the second “volunteer”’s sign request (so we are observing medium-term attitude change from a single, brief interaction).

I'm strongly tempted to defy the data.

comment by Tex · 2012-11-15T04:36:15.955Z · LW(p) · GW(p)

I am working on my own time to understand how to apply Value of Information Analysis to the projects I handle for an oil company, particularly regarding geoscientific matters of reservoir characterization.

comment by Liron · 2009-03-22T20:30:29.821Z · LW(p) · GW(p)

This is a really good post. I particularly like the suggestion that we don't have to infer and cache conclusions about ourselves when we screw up and don't return a library book. (Of course, other people would be rational to cache a conclusion about us because thinking differently wouldn't be a self-fulfilling prophecy.)

Replies from: Demosthenes, AnnaSalamon, gjm
comment by Demosthenes · 2009-03-22T22:57:23.903Z · LW(p) · GW(p)

This is the first post I've seen that seems to really fit Less Wrong's mission of "refining the art of human rationality".

This post clearly spells out some issues, links them to research and presents possible solutions. I hope that more posts in the future take this form.

This post also nicely outlines the problem of how hard it is to constantly doubt oneself at the appropriate level. I think these two points present a big challenge to the mission of leading a rational life:

3c. Reframe your past behavior as having occurred in a different context, and as not bearing on today’s decisions. Or add context cues to trick your brain into regarding today's decision as belonging to a different category than past decisions. This is, for example, part of how conversion experiences can help people change their behavior. (For a cheap hack, try traveling.)

3d. More specifically, visualize your life as something you just inherited from someone else; ignore sunk words as you would aspire to ignore sunk costs.

How someone could do enough compartmentalizing of their identity to pull off either of these tasks escapes me.

Replies from: topynate
comment by topynate · 2009-03-22T23:13:38.729Z · LW(p) · GW(p)

How someone could do enough compartmentalizing of their identity to pull off either of these tasks escapes me.

The motive behind these prescriptions is to make the decision we want to make for our current selves, so there's another way which non-rationalists use all the time. Suppose you make a New Year's Resolution to exercise more; you genuinely do want to exercise more. But when the equipment is installed in your living room, you don't feel like it any more. In fact, you'll end up convincing yourself that you were never really serious about your resolution in the first place, if you allow yourself to. I think that a person's past-self-concept does exert quite an influence on behaviour, but that current preferences can also alter the past-self-concept to fit. Consistency between past-self-concept and current self seems to be the overriding preference.

Of course this is a form of willing self-deception, so our overriding preference should be to actually do 3c and 3d, which are not self-deceptions, even if it does feel like compartmentalizing. I think one has to really convince oneself that such a perspective is not "compartmentalization"; that to disregard one's past preferences is not a betrayal of one's current self.

Replies from: Demosthenes
comment by Demosthenes · 2009-03-23T19:58:15.962Z · LW(p) · GW(p)

Has anyone brought up this study by Bruner and Potter (1964) before? I think it would relate to intertemporal beliefs and how we sometimes perceive them to be more sound than they really are:

http://www.ahs.uwaterloo.ca/~kin356/bpdemo.htm

In this demonstration, you will see nine different pictures. The pictures will get clearer and clearer. Make a guess as to what is being shown for each of the pictures, and write down your guess. Note the number of the picture where you were first able to recognize what was being shown. Then go backwards - press the "BACK" button on the browser - and see at which point you can no longer identify the picture. Are your "ascending" and "descending" points the same?

========

IF YOU HAVE TRIED THE STUDY:

Pictures of common objects, coming slowly into focus, were viewed by adult observers. Recognition was delayed when subjects first viewed the pictures out of focus. The greater or more prolonged the initial blur, the slower the eventual recognition. Interference may be accounted for partly by the difficulty of rejecting incorrect hypotheses based on substandard cues.

It would be interesting to think of your intertemporal frame of mind as discontinuous and running at 24 frames per second (like a film). Maybe your consciousness gives you a false sense of your beliefs flowing like a movie from one time state to the next.
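(For anyone who wants to try something like the demo above offline, here is a rough sketch -- my own reconstruction, not the linked page's code -- assuming Pillow is installed; the filename `object.jpg`, the nine steps, and the maximum blur radius are all arbitrary choices.)

```python
# Generate nine versions of an image, from heavily blurred to sharp, so you can
# compare your "ascending" and "descending" recognition points as in the demo.
from PIL import Image, ImageFilter

def blur_series(path, steps=9, max_radius=16.0):
    """Return `steps` frames of the image, from most blurred to fully sharp."""
    img = Image.open(path)
    radii = [max_radius * (steps - 1 - i) / (steps - 1) for i in range(steps)]
    return [img.filter(ImageFilter.GaussianBlur(r)) for r in radii]

if __name__ == "__main__":
    for i, frame in enumerate(blur_series("object.jpg"), start=1):
        frame.save("frame_%02d.png" % i)  # view 1..9, then 9..1, noting recognition points
```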

comment by AnnaSalamon · 2009-03-22T20:45:15.980Z · LW(p) · GW(p)

Thanks. Steve re-invented the library books technique by thinking about refactoring code and categories... but examples like the library books example are also standard in Cognitive Behavioral Therapy. And CBT has other good techniques for becoming aware of your own thinking and for changing your thinking habits in useful ways. I really think we should be rifling through their thoughts for rationality training gimmicks.

Unfortunately, I don't know any good CBT resources to link my "I really think we should look at these people" claims to, and I especially know of none online (I read a decent book on how to teach CBT in a bookstore but lost the name, and I read a decent summary at the end of a book on something else, Martin Seligman's "Authentic Happiness"). Does anyone else know what reading we might want to consult?

Replies from: Johnicholas, Emile
comment by Johnicholas · 2009-03-23T17:32:17.541Z · LW(p) · GW(p)

I found "Feeling Good" by David Burns to be helpful.

comment by Emile · 2009-03-23T17:30:01.069Z · LW(p) · GW(p)

I remember looking in a bookstore for good introductions to Cognitive Behavioral Therapy, and didn't find any; I ended up buying a report of a few case studies (that doesn't go much into theory), but haven't read it yet. It's a useful reference to have; I just wish it were as accessible as, say, learning how to program (a topic for which you can find zillions of tutorials on the net).

Replies from: fortyeridania
comment by fortyeridania · 2010-07-21T15:44:24.821Z · LW(p) · GW(p)

I'm just getting into learning about CBT and its relatives. I'm in the middle of Cognitive Therapy: Basics and Beyond. Benefits: It seems pretty comprehensive and detailed, with plenty of "dialogs" between patient and therapist to illustrate the communication of various CBT concepts and techniques. Drawbacks: Because it's geared toward therapists, not patients, some of the information seems irrelevant for self-therapy, e.g. how to structure a session.

Part of the point of CBT is to prepare people to be their own therapists. It would be nice to hear from anyone who knows of literature specifically about self-therapy.

Replies from: None, Emile
comment by [deleted] · 2010-07-21T16:29:08.236Z · LW(p) · GW(p)

The classic self-help book about cognitive therapy is "Feeling Good: The New Mood Therapy" by David Burns. I've read it and consider its popularity well-deserved. It's focused on fighting depression but I think it should be useful even if you have a different purpose in mind.

comment by Emile · 2010-07-21T15:52:49.426Z · LW(p) · GW(p)

Heh, what you describe looks exactly like the book I have (though it's in French, so it's not the same book).

comment by gjm · 2009-03-23T08:25:53.872Z · LW(p) · GW(p)

This perhaps offers a partial explanation of the "fundamental attribution error".

comment by Mestroyer · 2012-06-22T12:54:06.806Z · LW(p) · GW(p)

I must be falling to the dark side because I read this and thought "so this is how I can convince people of things: give them a dollar to say they agree with me."

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-22T13:06:04.038Z · LW(p) · GW(p)

This works quite a lot worse than arranging the situation so people say they agree with me without an explicit quid pro quo.

For example, if Sam really wants Pat's approval, I can give Pat a dollar to say, in Sam's hearing, that they agree with me; Sam is relatively likely to say they do too, but may not explicitly be aware that they are doing so to secure Pat's good opinion, in which case Sam is far more likely to be convinced than Pat is.

Replies from: DaFranker
comment by DaFranker · 2012-07-25T16:18:08.774Z · LW(p) · GW(p)

With the full force of hindsight bias at work, it feels incredibly obvious (snicker) that this is the primary tactic used for maneuvering in highschool girl-clique drama.

comment by juliawise · 2011-07-25T19:11:26.044Z · LW(p) · GW(p)

3e sounds a lot like narrative therapy. If you're interested in that method, reading more about narrative therapy could help.

More evidence that social work and LW have many similar aims, and methods that can be used for both.

comment by Iabalka · 2011-08-26T08:15:46.837Z · LW(p) · GW(p)

Although in this post the authors emphasize the hijacking that happens when we say things, I find it related to the "cognitive costs of doing things" (see http://lesswrong.com/lw/5in/the_cognitive_costs_to_doing_things/). When you do something, you always pay a price. Maybe the list of possible strategies could include being prepared for, and estimating, the price you are going to pay when you are planning to do something.

Also beware of the "beauty bias" (see www.overcomingbias.com): if a handsome/beautiful person tells you something, it seems you are more likely to agree with them.

comment by adsenanim · 2011-03-18T06:19:22.847Z · LW(p) · GW(p)

One thing I would point out is that the arguments presented here represent a considerable effort at examining one's own personal psyche, and the common psyche.

While examining this topic can be a definite benefit, I advise caution and moderation in the attempt.

I admire the authors' own example in doing the equivalent: "I’m not recommending these, just putting them out there for consideration"

My main point is that examination need not be experimentation: we can form hypotheses for consideration without being burdened with the responsibility of an incorrect interaction.

I find the examples presented in this argument (e.g., Freedman's) unnecessary if the examiner is capable of even limited self-examination.

In consideration of the main argument, I would say that in my own experience some people can be brought to awareness of these effects without adhering to the presented guidelines, that others may by nature be above the need for any guidelines, and that still others may do perfectly well never knowing the presented guidelines.

comment by aceofspades · 2012-11-11T00:42:53.730Z · LW(p) · GW(p)

By listing those "suggestions," you are causing at least one person to try to use them even though they are, in my judgment, largely worthless, or at least not worth the time and effort required to adopt them (this judgment means little compared to actual evidence of their relative effectiveness, but since I haven't seen any, it will have to suffice as a prior). I have also seen no plausible argument here that this type of bias actually causes unhappiness, so I care nothing about it.

comment by BenRayfield · 2010-11-18T01:28:41.089Z · LW(p) · GW(p)

The cache problem is worst for language, because language is usually made entirely of cache. Most words/phrases are understood by example instead of by reading a dictionary or thinking of your own definitions. I'll give an example of a phrase most people have an incorrect cache for. Then I'll try to cause your cache of that phrase to be updated, by making you think about something relevant to the phrase which is not in most people's cache of it: something which, by definition, should be included but for other reasons usually is not.

"Affirmative action" means for certain categories including religion and race, those who tend to be discriminated against are given preference when the choices are approximately equal.

Most people have caches for common races and religions, especially about black people in the USA because of the history of slavery there. A higher quantity of relevant events gets more cache, and more cache makes the phrase harder to define.

Someone who thinks they act in affirmative-action ways with respect to religion would usually redefine "affirmative action" if, when they sneezed, instead of hearing "God bless you" they heard "Devil bless you. I hope you don't discriminate against devil worshippers." Usually the definition is updated to end with "except for devil worshippers", and/or an exclusion is added to the cache. Then one may reconsider previous, incorrect uses of the phrase "affirmative action": the cache did not mean what they thought it meant.

We should distrust all language until we convert it from cache to definitions.
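(Here is a toy analogy in code -- my own illustration, not part of the original comment; the example table, the groups in it, and both functions are hypothetical. A "cached" concept behaves like a lookup table of remembered examples, while a "definition" is an explicit rule that still gives an answer for cases that never made it into the table.)

```python
# A cached concept: remembered examples only. Cases that never came up are
# silently absent, which is where ad-hoc exceptions sneak in.
cached_examples = {
    "black Americans": True,
    "religious minorities": True,
}

def by_cache(group):
    # Missing entries quietly default to "no preference" -- an unnoticed redefinition.
    return cached_examples.get(group, False)

def by_definition(category_is_covered, tends_to_be_discriminated_against):
    # The explicit rule decides the unfamiliar case too; no exception gets appended.
    return category_is_covered and tends_to_be_discriminated_against

print(by_cache("devil worshippers"))   # False -- the cache never heard of them
print(by_definition(True, True))       # True -- the stated rule still applies
```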

Language usually is not verified and stays as cache. It appears to be low-pressure because no pressure is remembered; it's expected to always be cache, and it's experienced as high-pressure when one chooses a different definition. High pressure is what causes us to reevaluate our beliefs, and with language, reevaluating our beliefs leads to high pressure. Since neither of those tends to come first, neither usually happens. Many things are that way, but it applies to language the most.

An example of changing cache to definition resulting in high pressure to change back to cache: using the same words for both sides of a war, regardless of which side your country is on, can be the result of defining those words. A common belief is that soldiers should be respected and enemy combatants deserve what they get. Language is full of stateful words like those. If you think in stateful words, then the cost of learning is multiplied by the number of states at each branch in your thinking. If you don't convert cache to definition (to verify later caches of the same idea), then such trees of assumptions and contexts go unverified; they merge with other such trees and form a tangled mess of exceptions to every rule, which eventually prevents you from defining anything based on those caches. That's why most people think it's impossible to have no contradictions in their mind, which is why they choose to believe new things which they know contain unsolvable contradictions.

Example of changing cache to definition resulting in high pressure to change back to cache: Using the same words for both sides of a war regardless of which side your country is on can be the result of defining those words. A common belief is soldiers should be respected and enemy combatants deserve what they get. Language is full of stateful words like those. If you think in stateful words, then the cost of learning is multiplied by the number of states at each branch in your thinking. If you don't convert cache to definition (to verify later caches of the same idea), then such trees of assumptions and contexts are not verified, which merge with other such trees and form a tangled mess of exceptions to every rule which eventually prevents you from defining something based on those caches. That's why most people think its impossible to have no contradictions in your mind, which is why they choose to believe new things which they know have unsolvable contradictions.