Simultaneously Right and Wrong

post by Scott Alexander (Yvain) · 2009-03-07T22:55:33.476Z · LW · GW · Legacy · 63 comments


Related to: Belief in Belief, Convenient Overconfidence

     "You've no idea of what a poor opinion I have of myself, and how little I deserve it."

      -- W.S. Gilbert 

In 1978, Steven Berglas and Edward Jones performed a study on the voluntary use of performance-inhibiting drugs. They asked subjects to solve certain problems. The control group received simple problems; the experimental group, impossible ones. The researchers then told all subjects they'd solved the problems successfully, leaving the controls confident in their own abilities and the experimental group privately aware they'd just made a very lucky guess.

Then they offered the subjects a choice of two drugs to test. One drug supposedly enhanced performance, the other supposedly handicapped it.

There's a cut here in case you want to predict what happened.


Males in the lucky guesser group chose the performance-inhibiting drug significantly more than those in the control group1.

The researchers conjectured that the lucky guessers felt good about acing the first test. They anticipated failing the second whether they chose the helpful or the harmful drug. So they chose the harmful drug as an excuse: "Oh, I would have passed the test, only the drug was making me stupid." As the study points out, this is a win-win situation: if they fail, the drug excuses their failure, and if they succeed it's doubly impressive that they passed even with a handicap.

Since this study, psychologists have invoked self-handicapping to explain common phenomena like procrastination, hypochondria, and minority underachievement; they've also identified factors that increase or decrease the tendency to self-handicap (for example, you can increase it by making the test more relevant to subjects' self-esteem: just say "this is a proven test of intellectual ability in general").

But some especially interesting studies investigated the effect of privacy on self-handicapping. For example, Hobden conducted an experiment similar to Berglas and Jones', albeit with tapes of performance-enhancing or -handicapping music instead of drugs. The twist: half the subjects' choice of tapes and test scores were public, and half the subjects believed their choice of tapes and test scores were anonymous. What happens when no one but the subject himself will ever know his test score? He self-handicaps just as often as everyone else. And it seems to *work*. The same set of studies showed that subjects who self-handicap on a test are less likely to attribute their failure on the test to their own incompetence.

In order to handicap, subjects must have an inaccurate assessment of their own abilities. Otherwise, there's no self-esteem to protect. If I believe my IQ is 80, and I get 80 on an IQ test, I have no incentive to make excuses to myself, or to try to explain away the results. The only time I would want to explain away the results as based on some external factor would be if I'd been going around thinking my real IQ was 100.

But subjects also must have an accurate assessment of their own abilities. Subjects who take an easy pre-test and expect an easy test do not self-handicap. Only subjects who understand their low chances of success can think "I will probably fail this test, so I will need an excuse."2

If this sounds familiar, it's because it's another form of the dragon problem from Belief in Belief. The believer says there is a dragon in his garage, but expects all attempts to detect the dragon's presence to fail. Eliezer writes: "The claimant must have an accurate model of the situation somewhere in his mind, because he can anticipate, in advance, exactly which experimental results he'll need to excuse." 

Should we say that the subject believes he will get an 80, but believes in believing that he will get a 100? This doesn't quite capture the spirit of the situation. Classic belief in belief seems to involve value judgments and complex belief systems, but self-handicapping seems more like simple overconfidence bias3. Is there any other evidence that overconfidence has a belief-in-belief aspect to it?

Last November, Robin described a study where subjects were less overconfident if asked to predict their performance on tasks they would actually be expected to complete. He ended by noting that "It is almost as if we at some level realize that our overconfidence is unrealistic."

Belief in belief in religious faith and self-confidence seem to be two areas in which we can be simultaneously right and wrong: expressing a biased position on a superficial level while holding an accurate position on a deeper level. The specifics are different in each case, but the same general mechanism may underlie both. How many other biases use this same mechanism?

Footnotes

1: In most studies on this effect, it's most commonly observed among males. The reasons are too complicated and controversial to be discussed in this post, but are left as an exercise for the reader with a background in evolutionary psychology.

2: Compare the ideal Bayesian, for whom expected future expectation is always the same as the current expectation, and investors in an ideal stock market, who must always expect a stock's price tomorrow to be on average the same as its price today - to this poor creature, who accurately predicts that he will lower his estimate of his intelligence after taking the test, but who doesn't use that prediction to change his pre-test estimates.
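
To make that footnote concrete, here is a minimal sketch in Python (the 50/50 prior, the 80-vs-100 IQs, and the 80%-accurate test below are hypothetical numbers of my own, not from any study): whatever result the test gives, the probability-weighted average of the Bayesian's possible post-test estimates equals his current estimate, so he cannot coherently expect in advance to revise it downward.

    # A toy Bayesian test-taker: prior is 50/50 between IQ 100 and IQ 80,
    # and the test is imperfect (hypothetical numbers).
    p_high = 0.5
    prior_mean = p_high * 100 + (1 - p_high) * 80             # 90.0

    # The test reads "high" 80% of the time if IQ is really 100, 20% if it is 80.
    p_result_high = p_high * 0.8 + (1 - p_high) * 0.2         # 0.5

    # Posterior credence in IQ = 100 after each possible result (Bayes' rule).
    post_if_high = p_high * 0.8 / p_result_high               # 0.8
    post_if_low = p_high * 0.2 / (1 - p_result_high)          # 0.2

    # Posterior mean under each result, then the probability-weighted average.
    mean_if_high = post_if_high * 100 + (1 - post_if_high) * 80    # 96.0
    mean_if_low = post_if_low * 100 + (1 - post_if_low) * 80       # 84.0
    expected_posterior_mean = (p_result_high * mean_if_high
                               + (1 - p_result_high) * mean_if_low)
    print(prior_mean, expected_posterior_mean)                # 90.0 90.0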

3: I have seen "overconfidence bias" used in two different ways: to mean poor calibration on guesses (i.e. predictions made with 99% certainty that are only right 70% of the time) and to mean the tendency to overestimate one's own good qualities and chance of success. I am using the latter definition here to remain consistent with the common usage on Overcoming Bias; other people may call this same error "optimism bias".

63 comments

Comments sorted by top scores.

comment by Nebu · 2009-03-09T16:26:29.733Z · LW(p) · GW(p)

In order to handicap, subjects must have an inaccurate assessment of their own abilities. Otherwise, there's no self-esteem to protect. If I believe my IQ is 80, and I get 80 on an IQ test, I have no incentive to make excuses to myself, or to try to explain away the results. The only time I would want to explain away the results as based on some external factor would be if I'd been going around thinking my real IQ was 100.

I used to be pretty good at this videogame called Dance Dance Revolution (or DDR for short). I won several province-level tournaments (both in my own province and in neighboring ones), and in the official internet rankings I placed 10th in North America and 95th worldwide.

People would often ask to play a match against me, and I'd always accept (figuring it was the "polite" thing to do), though I had mixed feelings about it. I very quickly realized it was a losing proposition for me: If I won, nobody noticed or remarked upon it (because I was known to be the "best" in my area), but I figured if I ever lost, people would make a big deal about it.

I often self-handicapped. I claimed that this was to make the match more interesting (and I often won despite the self-handicap), but sometimes I wondered if perhaps I was also preparing excuses for myself so that if I ever did lose, I could blame the handicaps (and probably do so accurately, since I truly believe I could have beaten them in a "fair" match).

I had the fortune of traveling to Japan and meeting a DDR player named Aaron who had ranked in the top 3 worldwide. He agreed to play a match with me, and I won the match, but it was very obvious to both of us that I had only won because of a glitch in the machine (basically, the game unexpectedly froze and locked up, something I had never seen before, and when it unfroze I was lucky enough to have anticipated this before Aaron did).

So after the match, I turned to him, pulled out my digital camera and jokingly said "I can't believe I actually beat you. I gotta get a picture of this." But he had a rather serious look on his face and said something like "No, no pictures." I was a bit surprised, but I put away my camera. We didn't talk about it, but I suspected that I understood how he felt. I often felt like my reputation as the best DDR player in my province was constantly under attack. I figured he felt the same way, except world-wide, instead of provincially.

Replies from: Yosarian2
comment by Yosarian2 · 2012-12-30T19:11:23.310Z · LW(p) · GW(p)

It's not necessarily an excuse for failure.

If, on some level, you are looking to demonstrate fitness (perhaps as a signaling method to potential mates), then if you visibly handicap yourself and STILL win, you have demonstrated MORE fitness than if you had won normally. If you expect to win even with the self-handicap, then it's not just a matter of making excuses.

I think this is similar to how a chess master playing against a weaker player will often "give them rook odds", starting with only one rook instead of two. They still expect to win, but they know that if they can still win in that circumstance, then they have demonstrated what a strong player they are.

Replies from: Nebu
comment by Nebu · 2012-12-31T00:24:20.392Z · LW(p) · GW(p)

Coincidentally, I just saw this article which mentions self-handicapping: http://dsc.discovery.com/news/2008/10/09/puppies-play.html

comment by roland · 2009-03-08T02:32:16.992Z · LW(p) · GW(p)

This reminded me of Carol Dweck's study: http://news-service.stanford.edu/news/2007/february7/dweck-020707.html

It is about having a fixed vs. growth theory of intelligence. If you think that your intelligence is fixed, you will avoid challenging tasks in order to preserve your self-image, whereas people with a growth mentality will embrace them in order to improve. Important: never tell a child that they are intelligent.

Replies from: pdf23ds, WrongBot
comment by pdf23ds · 2009-03-09T04:30:35.737Z · LW(p) · GW(p)

I think it's more like "never praise a child for being intelligent". You can tell them they're smart if they are, just don't do it often or put any importance on it.

comment by WrongBot · 2010-06-29T19:48:09.084Z · LW(p) · GW(p)

While it was well-intentioned, this is by far the worst thing my parents did while raising me. Even now that I'm aware of the problem, it's a constant struggle to convince myself to approach difficult problems, even though I find working on them very satisfying. Does anyone know if there's been discussion here (or elsewhere, I suppose) about individual causes of akrasia? Childhood indoctrination into a particular theory of intelligence certainly seems to be one.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-06-29T20:57:07.669Z · LW(p) · GW(p)

Not a direct answer, but your post reminds me of This Is Why I'll Never Be an Adult.

Note that the downward spiral starts with self-congratulation, which seems to be a part of my pattern.

Replies from: WrongBot
comment by WrongBot · 2010-06-29T21:02:31.455Z · LW(p) · GW(p)

Great link. I follow that pattern almost precisely, unfortunately. I'll have to spend some time analyzing my self-congratulatory habits and see what can be done.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-06-29T21:09:39.164Z · LW(p) · GW(p)

I don't have a cite, but I've read an article (a book? The Now Habit?) which claimed that procrastination is driven by the belief that getting things done is a reflection on your value as a person.

And why is akrasia a common problem among LessWrongians rather than, say, high-energy impulsiveness?

Replies from: mattnewport
comment by mattnewport · 2010-06-29T21:11:28.333Z · LW(p) · GW(p)

I imagine akrasia is a more natural fit for a tendency to overthink things.

comment by SarahNibs (GuySrinivasan) · 2009-03-08T00:18:04.971Z · LW(p) · GW(p)

My first reaction is that the 80-IQ guy needs to carry around a mental model of himself as a 100-IQ guy for status purposes, and a mental model of himself as an 80-IQ guy for accuracy purposes. Possibly neither consciously.

(Is this availability bias at work because I have recently read lots of Robin's etc. writings on status?)

If true, I don't think there's any need to say he "believes" his IQ is 100 when it is in fact 80. We could just say he has at least one public persona which he'd like to signal has an IQ of 100, and that sometimes he draws predictions using this model rather than a more correct one, like when he's guaranteed privacy.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2009-03-08T01:10:57.045Z · LW(p) · GW(p)

I agree with your first paragraph, but I don't quite understand your second.

In particular, I don't understand what you mean by there being no need to say he "believes". If upon being asked he would assert that his IQ is 100, and he wouldn't be consciously aware of lying, isn't that enough to say he believes his IQ is 100 on at least one level?

(also, when I say I agree with your first paragraph, I do so on the assumption that we mean the same thing by status. In particular, I would describe the "status" in this case as closer to "self-esteem" than "real position in a social hierarchy". Are most Less Wrong readers already aware of the theory that self-esteem is the way the calculation of status feels from the inside, or is that worth another post?)

Replies from: CronoDAS, cousin_it, Eliezer_Yudkowsky, Cameron_Taylor, pwno
comment by CronoDAS · 2009-03-08T04:25:09.199Z · LW(p) · GW(p)

Yes, it's worth another post - I hadn't heard that theory before.

::runs off to do some Google searches::

Some difficult work with Google revealed that the technical term is the "sociometer" theory - and it's fairly recent (the oldest citation I see refers to 1995), which would help explain why I hadn't heard of it before. It seems consistent with my personal experiences, so I consider it credible.

For more information:

http://www.psychwiki.com/wiki/Sociometer_Theory

Replies from: Yvain, Cameron_Taylor
comment by Scott Alexander (Yvain) · 2009-03-08T20:15:45.365Z · LW(p) · GW(p)

Okay, I'll definitely post on sociometer theory sometime.

comment by Cameron_Taylor · 2009-03-08T15:50:02.456Z · LW(p) · GW(p)

Thanks for the link CronoDAS. The 'sociometer' theory does seem credible, and certainly more so than some of the alternative theories presented there.

What I am not comfortable with is the emphasis placed on minimising the possibility of rejection from the tribe as a terminal value, to the exclusion of the other benefits of status. While expulsion from a tribe can lead to physical death or at least genetic extinction, avoiding it is hardly the only benefit of high status. Surely a sensitive sociometer serves a goal somewhat more nuanced than minimising this one negative outcome!

comment by cousin_it · 2011-05-16T12:35:39.894Z · LW(p) · GW(p)

Are most Less Wrong readers already aware of the theory that self-esteem is the way the calculation of status feels from the inside, or is that worth another post?

Why did I only stumble across this sentence two years after you wrote it?! It would've come in handy in the meanwhile, you know =) It will definitely come in handy now. Thanks!

Replies from: wedrifid
comment by wedrifid · 2011-05-16T12:55:36.108Z · LW(p) · GW(p)

Did Yvain end up writing said post? That theory is approximately how I model self-esteem and it serves me well but I haven't seen what a formal theory on the subject looks like.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2011-05-16T12:59:15.988Z · LW(p) · GW(p)

http://lesswrong.com/lw/1kr/that_other_kind_of_status/ involves that idea; for the formal theory, Google "sociometer".

Replies from: wedrifid
comment by wedrifid · 2011-05-16T13:33:18.567Z · LW(p) · GW(p)

sociometer

Thanks!

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-08T01:29:18.168Z · LW(p) · GW(p)

If you've got more to say about it than that one line and you think it's possibly important, I'd call it another post.

comment by Cameron_Taylor · 2009-03-08T04:04:46.729Z · LW(p) · GW(p)

Are most Less Wrong readers already aware of the theory that self-esteem is the way the calculation of status feels from the inside, or is that worth another post?

I'm aware of the theory, however I've mostly picked it up from popular culture. I'd appreciate a post that described an actual scientific theory, with evidence or at least some falsifiability.

Replies from: pjeby
comment by pjeby · 2009-03-09T22:34:09.324Z · LW(p) · GW(p)

Self-esteem is another one of those null concepts like "fear of success". In my own work, for example, I've identified at least two (and maybe three) distinct mental processes by which behaviors described as "low self-esteem" can be produced.

One of the two could be thought of as "status-based", but the actual mechanism seems more like comparison of behaviors and traits to valued (or devalued) behavioral examples. (For instance, you get called a crybaby and laughed at -- and thus you learn that crying makes you a baby, and to be a "man" you must be "tough".)

The other mechanism is based on the ability to evoke positive responses from others, and the behaviors one learns in order to evoke those responses. Which I suppose can also be thought of as status-based, too, but it's very different in its operation. Response evocation motivates you to try different behaviors and imprint on ones that work, whereas role-judgment makes you try to conceal your less desirable behaviors and the negative identity associated with them. (Or, it motivates you to imitate and display admired traits and behaviors.)

Anyway, my main point was just to support your comments about evidence and falsifiability: rationalists should avoid throwing around high-level psychological terms like "procrastination" and "self-esteem" that don't define a mechanism -- they're usually far too overloaded and abstract to be useful, a la "phlogiston". If you want to be able to predict (or engineer!) esteem, you need to know more than that it contains a "status-ative principle". ;-)

Replies from: Peterdjones
comment by Peterdjones · 2012-09-28T11:15:20.116Z · LW(p) · GW(p)

Oddly enough, I found that too abstract to follow.

comment by pwno · 2009-03-08T01:53:02.553Z · LW(p) · GW(p)

Are most Less Wrong readers already aware of the theory that self-esteem is the way the calculation of status feels from the inside, or is that worth another post?

I wasn't aware, but it makes a lot of sense. Especially because your perception of yourself is a self-fulfilling prophecy.

Imagine a room of 100 people where none of them have any symbols pre-validated to signal for status. Upon interacting over time, I would guess that the high self-esteem people would most likely be perceived as high status.

comment by conchis · 2009-03-08T01:22:36.561Z · LW(p) · GW(p)

"If I believe my IQ is 80, and I get 80 on an IQ test, I have no incentive to make excuses to myself, or to try to explain away the results."

Really? I think it's pretty common to be (a) not particularly good at something, (b) aware you're not particularly good at it, and (c) nonetheless not want that fact rubbed in your face if rubbing is avoidable. (Not saying this is necessarily a good thing, but I do think it's pretty common.)

comment by nolrai · 2010-02-17T19:17:24.943Z · LW(p) · GW(p)

I really wonder how this sort of result applies to cultures that don't expect everyone to have high self-esteem, such as, say, Japan.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-07T23:51:29.172Z · LW(p) · GW(p)

Excellent post - it makes me wish that the system gave out a limited number of super-votes, like 1 for every 20 karma, so that I could vote this up twice.

I hope you don't mind, but I did a quick edit to insert "a choice of" before "two drugs to test", because that wasn't clear on my first reading. (Feel free to revert if you prefer your original wording.) Also edited the self-deception tag to self_deception per previous standard.

Replies from: Yvain, thomblake, PaulG, roland
comment by Scott Alexander (Yvain) · 2009-03-08T01:28:28.073Z · LW(p) · GW(p)

Thank you. Since I learned practically everything I know about rationality either from you or from books you recommended, I'm very happy to earn your approval...but also a little amused, since I consciously tried to copy your writing style as much as I could without actually inserting litanies.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-08T02:02:35.007Z · LW(p) · GW(p)

Heh! I almost wrote in my original comment: "How odd, an Eliezer post on Standard Biases written by Yvain", but worried that it might look like stealing credit, or that you might not like the comparison. I futzed around, deleted, and finally wrote "excellent post" instead. The wish for two upvotes is because my Standard Biases posts are the ones I feel least guilty about writing.

Replies from: Cameron_Taylor
comment by Cameron_Taylor · 2009-03-08T04:00:46.268Z · LW(p) · GW(p)

I must admit that when I wrote my reply I was operating on the assumption that I was replying to Eliezer. In fact, I even addressed it as such.

Fortunately I checked to see that nobody else had written the same point I was making before I posted.

Brilliant work Yvain!

comment by thomblake · 2009-03-08T01:19:48.908Z · LW(p) · GW(p)

Surely you have enough of a following here that you effectively have super-votes? Just go ahead and tell people you're voting something up, and that should generate at least two or three votes for free.

Also, 'promoting' an article seems to be a good enough option.

comment by PaulG · 2009-03-08T00:32:03.971Z · LW(p) · GW(p)

The idea of super-votes sounds similar to the system they have at everything2, where users are awarded a certain number of "upvotes" and a certain number of "cools" every day, depending on their level. An upvote/downvote adds/subtracts one point to/from their equivalent of karma for the post, while a Cool gives the user a certain number of points, is displayed as "Cooled" on the post, and promotes it to the site's main page.

(I reposted this as a reply because I was unfamiliar with the posting system when I first wrote it.)

Replies from: None, Cameron_Taylor
comment by [deleted] · 2009-03-08T06:22:40.933Z · LW(p) · GW(p)

Note that the karma system for Everything2 has changed recently. Specifically, because of abuse, downvoting no longer subtracts karma.

'Cools' add twenty karma now. In the past, they only added three or so. This was changed to reflect the comparative scarcity of cools. Where in the old system, highly ranked users could cool multiple things per day, in the new system everyone is limited to one per day.

Their rationalization for these changes is listed here. I hope this information proves a bit useful to other people designing karma systems; at E2, we've been experimenting with karma systems since 1999. It'd be a shame to have that go to waste.

comment by Cameron_Taylor · 2009-03-08T04:08:50.303Z · LW(p) · GW(p)

I like the sound of that system PaulG. I like the idea that I have to 'spend' a finite resource to vote something up or down. Having a finite number of supervotes or cools would make me consider my voting more thoughtfully.

comment by roland · 2009-03-08T01:46:08.698Z · LW(p) · GW(p)

I second the idea of super-votes. IMHO you(EY) should be allowed to super-vote how much you want since I trust your judgement as do most of the others, I suppose.

Replies from: Eliezer_Yudkowsky, Roko, Cameron_Taylor
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-08T02:00:12.183Z · LW(p) · GW(p)

I respectfully disagree with the latter part.

comment by Roko · 2009-03-08T15:37:53.479Z · LW(p) · GW(p)

"you(EY) should be allowed to super-vote how much you want since I trust your judgement as do most of the others, I suppose."

  • What have we learned about power corrupting people? EY did a lovely post ( http://www.overcomingbias.com/2007/12/every-cause-wan.html ) on how any human group innately tends towards the standard hunter-gatherer tribe habits; in this case the tendency being to elevate the mere human tribal chief to godlike status...

/"You might think that a belief system which praised "reason" and "rationality" and "individualism" would have gained some kind of special immunity, somehow...?"/

Also, I wonder how much Yvain's rather high score of 31 has to do with EY's good review. "jeez, Eli said it was good, I'd better vote for it..."

Replies from: MichaelHoward, MichaelHoward
comment by MichaelHoward · 2009-03-08T16:00:33.393Z · LW(p) · GW(p)

I wonder how much Yvain's rather high score of 31 has to do with EY's good review.

I've noticed this too, and that comments by higher status users, particularly Eliezer, tend to be voted higher than IMHO equal quality comments by less popular users...

power corrupting people?

...but it's hardly Eliezer's fault, if anything he goes out of his way to discourage this sort of thing.

It could also be a kind of unconscious Bayesian adjustment. If a comment is written by someone who tends to write high-quality comments, that increases the probability that this comment is high-quality from what you'd estimate just from reading the text. But I'd rather we didn't take that into account - we should mark comments based on our own opinion of whether it's high quality, not our estimate of the probability of it being high-quality based on info like that, or the voting would resemble that of a Keynesian beauty contest.

Replies from: Yvain, Roko
comment by Scott Alexander (Yvain) · 2009-03-09T16:45:58.727Z · LW(p) · GW(p)

A dream feature, not something I seriously expect the people at Tricycle to work on: I want an option for a voluntary "blind mode" in preferences. People in blind mode wouldn't be able to see a comment poster's name or the comment's current karma score until they either voted up, voted down, or clicked a new "vote neutral" button; after voting, the poster and karma score would be revealed but the vote could not be changed.

Reason: I find myself slightly tempted to vote up the articles of people who voted up my articles as a form of reciprocity, or else to vote up the articles of people who didn't vote up my articles to prove I'm not doing that. I'm sure on an unconscious level the temptation is much worse. Plus this would solve the information cascades problem.

Replies from: matt, MichaelHoward
comment by MichaelHoward · 2009-03-09T18:24:43.134Z · LW(p) · GW(p)

How do you know who voted up your articles?

Replies from: thomblake
comment by thomblake · 2009-03-09T22:17:03.965Z · LW(p) · GW(p)

The default setting is that votes are public - you can check a user's profile page to see what he liked/disliked.

comment by Roko · 2009-03-08T17:05:33.132Z · LW(p) · GW(p)

"...but it's hardly Eliezer's fault, if anything he goes out of his way to discourage this sort of thing."

Yes, this is true. If he hadn't written the "every cause wants to be a cult" post, I would probably also be busy requesting that he give himself absolute power. My comment was more aimed at roland.

comment by MichaelHoward · 2009-03-08T19:25:06.487Z · LW(p) · GW(p)

Fortunately there's still plenty of people who overcome this bias. I've the dubious honour of writing the only post Eliezer's actually condemned as inappropriate for Less Wrong and should never have existed. I was afraid that would trigger a downgrade avalanche, but thanks to some excellent links others posted it maintained a positive score.

comment by Cameron_Taylor · 2009-03-08T05:06:46.442Z · LW(p) · GW(p)

In as much as I trust Eliezer's judgement, I'm not sure I would want him to be taken above and beyond the vote system. Far better to have Eliezer's implicit awesomeness and right to judge emerge from within the same well-designed system.

comment by infotropism · 2009-03-08T00:00:28.799Z · LW(p) · GW(p)

Looks like it's related to learned helplessness to me.

http://en.wikipedia.org/wiki/Learned_helplessness

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2009-03-08T01:53:36.975Z · LW(p) · GW(p)

The relationship discussed in the literature mostly involves them as two competing explanations for underachievement. Learned helplessness is about internalizing the conception of yourself as worthless; self-handicapping is about trying as hard as you can to avoid viewing yourself as worthless. The studies I could find in ten minutes on Google Scholar mostly suggested a current consensus that run-of-the-mill underachievers are sometimes self-handicappers but not learned-helplessness victims - but ten minutes does not a literature review make.

Oh, and thank you for linking to that Wikipedia article. The sentence about how "people performed mental tasks in the presence of distracting noise...if the person could use a switch to turn off the noise, his performance improved, even though he rarely bothered to turn off the noise. Simply being aware of this option was enough to substantially counteract its distracting effect" is really, really interesting.

comment by CarlShulman · 2009-03-08T00:06:10.466Z · LW(p) · GW(p)

If self-handicapping to preserve your self-image with respect to one thing impairs your performance in many situations, then one approach would be to do some very rigorous testing, e.g. if one is concerned about psychometric intelligence, one could take several psychologist-administered WAIS-IV IQ tests on different days.

"Belief in belief in religious faith and self-confidence seem to be two areas in which we can be simultaneously right and wrong: expressing a biased position on a superficial level while holding an accurate position on a deeper level."

This is also relevant for Caplan's model of rational irrationality in political beliefs.

comment by PaulG · 2009-03-08T00:01:51.424Z · LW(p) · GW(p)

Eliezer: Super-votes are kinda like the system of "Cools" vs "upvotes" on everything2 (http://everything2.com/), where depending on your participation (they have a levels system), you are given a certain number of "Cools" and a certain number of "upvotes". Cools give more points to the user and put the article on the front page for a limited amount of time; upvotes just give the user a point or something.

comment by AdMysterium · 2022-05-25T21:35:33.716Z · LW(p) · GW(p)

"In most studies on this effect, it's most commonly observed among males. The reasons are too complicated and controversial to be discussed in this post, but are left as an exercise for the reader with a background in evolutionary psychology"
 

Would anyone like to discuss the reasons? Thanks for being ambiguous! Appreciate it!

comment by JJ10DMAN · 2010-08-10T11:00:56.649Z · LW(p) · GW(p)

Last November, Robin described a study where subjects were less overconfident if asked to predict their performance on tasks they would actually be expected to complete. He ended by noting that "It is almost as if we at some level realize that our overconfidence is unrealistic."

I think there's a less perplexing answer: that at some level we realize that our performance is not 100% reliable, and we should shift our estimate down by an intuitive standard deviation of sorts. That way, we can under-perform in this specific case, and won't have to deal with the group dynamics of someone else's horrible disappointment because they were counting on us doing our part as well as we said we could.

Replies from: orthonormal
comment by orthonormal · 2010-08-10T23:27:16.018Z · LW(p) · GW(p)

First, welcome to Less Wrong! Be sure to hit the welcome thread soon.

Doesn't your hypothesis here predict compensation for overconfidence in every situation, and not just for easy tasks?

Replies from: JJ10DMAN
comment by JJ10DMAN · 2010-10-15T13:31:58.399Z · LW(p) · GW(p)

Yes it does.

...

Is there some implication I'm not getting here?

Replies from: orthonormal
comment by orthonormal · 2010-10-17T21:52:51.874Z · LW(p) · GW(p)

Um, I don't actually remember now - I thought that one of the results was that people compensated more for overconfidence when the tasks were not too difficult. But I don't see that, looking it over now.

comment by talisman · 2009-03-09T04:05:56.689Z · LW(p) · GW(p)

No idea the extent to which EY's approval upped this, but what I can say is that I was less than half through the post before I jumped to the bottom, voted Up, and looked for any other way to indicate approval.

It's immediately surprising, interesting, obvious-in-retrospect-only, and most importantly, relevant to everyday life. Superb.

comment by Cameron_Taylor · 2009-03-08T04:37:40.142Z · LW(p) · GW(p)

The ideal Bayesian [can] never predict in which direction future information will alter his own estimates, and investors in an ideal stock market, [can] never predict in which direction prices will move

I suggest rewording this, it seems like you are making a different claim than the one you intended. An ideal Bayesian can predict in which direction future information will alter his own estimates.

I have been given a coin which I know is either fair or biased (comes up heads 75% of the time). After a sequence of tosses I have arrived at, say, 95% probability that the coin is biased. The probability I assign to the next toss giving 'heads' is:

p(heads) = 0.95 × 0.75 + 0.05 × 0.5 ~= 0.74

There is a 74% chance that I will alter my estimate upwards after this coin toss.

I predict with 95% confidence that, should I continue to toss the coin long enough, future information will alter my estimates upwards until they reach ~100% confidence that the coin is biased. Naturally, I predict a 5% chance that my estimates would eventually be altered downwards until they approximate 0%, a far greater change.
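
(A quick check of these numbers, as a sketch of my own rather than part of the original comment: the direction of the update is indeed predictable, but the probability-weighted average of the two possible post-toss estimates still equals the current 95%, which is the property the replies below point to.)

    # Sketch using the comment's numbers: the coin is either biased (75% heads) or fair.
    p_biased = 0.95                                    # current credence in "biased"
    p_heads = p_biased * 0.75 + (1 - p_biased) * 0.5   # 0.7375, the ~74% above

    # Posterior credence in "biased" after each possible toss (Bayes' rule).
    post_heads = p_biased * 0.75 / p_heads             # ~0.966, estimate moves up
    post_tails = p_biased * 0.25 / (1 - p_heads)       # ~0.905, estimate moves down

    # A ~74% chance of moving up, but the probability-weighted average of the
    # possible new estimates is unchanged:
    expected_new_estimate = p_heads * post_heads + (1 - p_heads) * post_tails
    print(round(expected_new_estimate, 4))             # 0.95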

The same applies to some stocks in an ideal stock market. For example, some companies may have a limit on their growth potential and yet have some chance of going bankrupt. The chance that these stocks could completely lose their value suggests that, for their price to be what it is now, they must be more likely to go up than to go down.

Can someone suggest a concise replacement for "in which direction" that applies here?

Replies from: jimrandomh, Yvain, CronoDAS
comment by jimrandomh · 2009-03-08T04:57:45.366Z · LW(p) · GW(p)

Can someone suggest a concise replacement for "in which direction" that applies here?

Expected future expectation is always the same as the current expectation.

Replies from: Cameron_Taylor
comment by Cameron_Taylor · 2009-03-08T05:02:12.159Z · LW(p) · GW(p)

Thanks Jim!

comment by Scott Alexander (Yvain) · 2009-03-08T11:10:20.830Z · LW(p) · GW(p)

You're right. Edited to Jim's version, although it sounds kind of convoluted. I'm going to keep an eye out for how real statisticians describe this.

comment by CronoDAS · 2009-03-08T04:57:23.413Z · LW(p) · GW(p)

Expected value?

comment by Dues · 2014-07-08T04:58:57.363Z · LW(p) · GW(p)

I wonder whether this bias is really a manifestation of holding two contradictory ideas (a la belief in belief in belief). I wonder because, when past me was making this exact mistake, I noticed that it tended to be a case of having a wide range of possible skill levels coupled with a low desire for accuracy.

If I think that my IQ is somewhere between 80 and 100, then I can have it both ways. I don't know it for sure, so I can brag "Oh, my IQ is somewhere below 100," because there is still a chance that my IQ is 100. However, if I am about to be presented with an IQ test, I am tempted to be humble and say 80, because then the test is probably going to prove me wrong in a positive way. That way I get to seem humble and smart, rather than overconfident and dumb.

Why are we surprised that the subjects were still trying to act in high-status ways when they weren't being watched? This isn't like an experiment where I'm more likely to steal a candy bar if I'm anonymous. My reward for acting high-status when no one is watching is that I get to think of myself as a high-status actor even when other people aren't watching. I always have an audience of at least one person: myself.

comment by A1987dM (army1987) · 2013-11-19T15:22:39.123Z · LW(p) · GW(p)

Of course, a really self-confident person would still take the inhibiting drug, because they are positive that they are going to ace the test anyway, and doing so while impaired is so much more awesome than while sober.

comment by Larks · 2009-08-19T19:42:16.992Z · LW(p) · GW(p)

Excellent post!

Males in the lucky guesser group chose the performance-inhibiting drug significantly more than those in the control group

I managed to guess this; my parents got it wrong. I thought that the control group would feel good about being right, and want it to occur more, whereas the unconfident group would feel (nihilistic? apathetic?), and so take the easy high of the drugs.

I confess I thought that the performance-inhibiting drugs were euphoric; I couldn't imagine why anyone would take inhibiting drugs without some beneficial side effects. If this was wrong, I was effectively answering a different question, so I can't really take credit for my guess.