Doublethink (Choosing to be Biased)

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-09-14T20:05:13.000Z · LW · GW · Legacy · 169 comments

An oblong slip of newspaper had appeared between O'Brien's fingers. For perhaps five seconds it was within the angle of Winston's vision. It was a photograph, and there was no question of its identity. It was the photograph. It was another copy of the photograph of Jones, Aaronson, and Rutherford at the party function in New York, which he had chanced upon eleven years ago and promptly destroyed. For only an instant it was before his eyes, then it was out of sight again. But he had seen it, unquestionably he had seen it! He made a desperate, agonizing effort to wrench the top half of his body free. It was impossible to move so much as a centimetre in any direction. For the moment he had even forgotten the dial. All he wanted was to hold the photograph in his fingers again, or at least to see it.

'It exists!' he cried.

'No,' said O'Brien.

He stepped across the room.

There was a memory hole in the opposite wall. O'Brien lifted the grating. Unseen, the frail slip of paper was whirling away on the current of warm air; it was vanishing in a flash of flame. O'Brien turned away from the wall.

'Ashes,' he said. 'Not even identifiable ashes. Dust. It does not exist. It never existed.'

'But it did exist! It does exist! It exists in memory. I remember it. You remember it.'

'I do not remember it,' said O'Brien.

Winston's heart sank. That was doublethink. He had a feeling of deadly helplessness. If he could have been certain that O'Brien was lying, it would not have seemed to matter. But it was perfectly possible that O'Brien had really forgotten the photograph. And if so, then already he would have forgotten his denial of remembering it, and forgotten the act of forgetting. How could one be sure that it was simple trickery? Perhaps that lunatic dislocation in the mind could really happen: that was the thought that defeated him.

   —George Orwell, 1984

What if self-deception helps us be happy?  What if just running out and overcoming bias will make us—gasp!—unhappy?  Surely, true wisdom would be second-order rationality, choosing when to be rational.  That way you can decide which cognitive biases should govern you, to maximize your happiness.

Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen.

Second-order rationality implies that at some point, you will think to yourself, "And now, I will irrationally believe that I will win the lottery, in order to make myself happy."  But we do not have such direct control over our beliefs.  You cannot make yourself believe the sky is green by an act of will.  You might be able to believe you believed it—though I have just made that more difficult for you by pointing out the difference.  (You're welcome!)  You might even believe you were happy and self-deceived; but you would not in fact be happy and self-deceived.

For second-order rationality to be genuinely rational, you would first need a good model of reality, to extrapolate the consequences of rationality and irrationality.  If you then chose to be first-order irrational, you would need to forget this accurate view. And then forget the act of forgetting.  I don't mean to commit the logical fallacy of generalizing from fictional evidence, but I think Orwell did a good job of extrapolating where this path leads.

You can't know the consequences of being biased, until you have already debiased yourself.  And then it is too late for self-deception.

The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.

Be irrationally optimistic about your driving skills, and you will be happily unconcerned where others sweat and fear.  You won't have to put up with the inconvenience of a seatbelt.  You will be happily unconcerned for a day, a week, a year.  Then CRASH, and spend the rest of your life wishing you could scratch the itch in your phantom limb.  Or paralyzed from the neck down.  Or dead.  It's not inevitable, but it's possible; how probable is it?  You can't make that tradeoff rationally unless you know your real driving skills, so you can figure out how much danger you're placing yourself in.  You can't make that tradeoff rationally unless you know about biases like neglect of probability.

No matter how many days go by in blissful ignorance, it only takes a single mistake to undo a human life, to outweigh every penny you picked up from the railroad tracks of stupidity.
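As a toy illustration of that asymmetry (a sketch with entirely made-up numbers, not a claim about real crash statistics):

```python
# Toy expected-value sketch of "pennies on the railroad tracks":
# a small daily convenience gain from skipping the seatbelt, against
# a tiny daily probability of a catastrophic loss. All numbers assumed.

daily_gain = 1.0    # utility of the convenience, per day (assumed)
p_crash = 1e-5      # daily probability of a serious crash (assumed)
crash_loss = 1e7    # utility lost in a catastrophic crash (assumed)

expected_daily_value = daily_gain - p_crash * crash_loss
print(expected_daily_value)  # 1.0 - 100.0 = -99.0: the pennies never add up
```

Neglect of probability is precisely what stops the intuitive accounting from noticing the second term.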

One of the chief pieces of advice I give to aspiring rationalists is "Don't try to be clever."  And, "Listen to those quiet, nagging doubts."  If you don't know, you don't know what you don't know; you don't know how much you don't know; and you don't know how much you needed to know.

There is no second-order rationality.  There is only a blind leap into what may or may not be a flaming lava pit.  Once you know, it will be too late for blindness.

But people neglect this, because they do not know what they do not know.  Unknown unknowns are not available. They do not focus on the blank area on the map, but treat it as if it corresponded to a blank territory.  When they consider leaping blindly, they check their memory for dangers, and find no flaming lava pits in the blank map.  Why not leap?

Been there.  Tried that.  Got burned.  Don't try to be clever.

I once said to a friend that I suspected the happiness of stupidity was greatly overrated.  And she shook her head seriously, and said, "No, it's not; it's really not."

Maybe there are stupid happy people out there.  Maybe they are happier than you are.  And life isn't fair, and you won't become happier by being jealous of what you can't have.  I suspect the vast majority of Overcoming Bias readers could not achieve the "happiness of stupidity" if they tried.  That way is closed to you. You can never achieve that degree of ignorance, you cannot forget what you know, you cannot unsee what you see. 

The happiness of stupidity is closed to you.  You will never have it short of actual brain damage, and maybe not even then.  You should wonder, I think, whether the happiness of stupidity is optimal—if it is the most happiness that a human can aspire to—but it matters not.  That way is closed to you, if it was ever open.

All that is left to you now is to aspire to such happiness as a rationalist can achieve.  I think it may prove greater, in the end. There are bounded paths and open-ended paths; plateaus on which to laze, and mountains to climb; and if climbing takes more effort, still the mountain rises higher in the end.

Also there is more to life than happiness; and other happinesses than your own may be at stake in your decisions.

But that is moot.  By the time you realize you have a choice, there is no choice.  You cannot unsee what you see.  The other way is closed.

169 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-09-14T20:12:16.000Z · LW(p) · GW(p)

PS: See also Scott Aaronson's classic On Self-Delusion and Bounded Rationality.

Replies from: Benvolio, jeremysalwen, None, MugaSofer, Algernoq
comment by Benvolio · 2010-07-13T06:40:20.326Z · LW(p) · GW(p)

I am not an island. There are a few good ways to set up a life of bounded bias, or a rational decision about whether or not to engage in bias. I am a social creature, and as such am acutely aware that most of my decisions are made as a mix of peer pressure, groupthink, discussions with friends, unconscious reasoning, and whatever media I may have managed to digest in the past few hours.

I have several friends, one of whom is a dedicated rationalist but a genuinely kind person. His name is Steve, and I have given him these instructions: "Please give me unsolicited advice, and interrupt me if you see me doing something stupid or immoral, but only if you think I could emotionally cope with the reasons why my action was immoral." I have another friend, something of a spiritualist and currently some form of Wiccan something-or-other. His name is Dave, also a kind person, and he has explicit instructions: "Please give me unsolicited advice and help me out if I seem to be unhappy. Give me the course of action you think would make me happiest, so long as it doesn't conflict with what Steve has told me to do."

When I have to get a good think on about something, I call Steve and Dave separately, then call them both together, and compare the three suggestions. What is interesting is that I have done this often enough that I can often predict what each will say, in a sort of mental role-taking that is much easier if you imagine it not being you that has such thoughts.

As such I have achieved some bounded bias: I am bigoted enough not to be a social pariah in America (one must be somewhat prejudiced against someone to survive socially, even if it's only prejudice against bigots and Republicans), but rational enough not to fall for gambler's fallacies, and at least bright enough to nod along when a modus ponens is explained to me using small words for the fourteenth time. It's not perfect, but it's mine. Most people outsource their morality anyway, from "what would Jesus do" to local faith leaders to calling their parents for advice; I'm just a little more structured and deliberate.

Through this system I can have someone hold an unbiased view, speak to someone with a biased view, and make a decision as to which is the better view to have, without having to unsee everything. Yes, I realize Steve won't be perfectly unbiased every time, or perfectly rational, or make the right choices, but then again, neither would I, and there's nothing special about me making my mistakes.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-10T11:26:30.367Z · LW(p) · GW(p)

Yes, I realize Steve won't be perfectly unbiased every time, or perfectly rational, or make the right choices, but then again, neither would I, and there's nothing special about me making my mistakes.

A good principle in general. If more people realized this, the world would be a better place, I should think.

Hmm, I wonder if there's some snappy Wise Saying-esque way of formulating this.

Replies from: SeanMCoincon
comment by SeanMCoincon · 2014-07-31T19:27:33.706Z · LW(p) · GW(p)

"I know I can never be perfect, but that's certainly not going to stop me from trying." --Sean Coincon

:D

comment by jeremysalwen · 2012-08-20T22:45:03.394Z · LW(p) · GW(p)

Perhaps I am just contrarian in nature, but I took issue with several parts of her reasoning.

"What you're saying is tantamount to saying that you want to fuck me. So why shouldn't I react with revulsion precisely as though you'd said the latter?"

The real question is why should she react with revulsion if he said he wanted to fuck her? The revulsion is a response to the tone of the message, not to the implications one can draw from it. After all, she can conclude with >75% certainty that any male wants to fuck her. Why doesn't she show revulsion simply upon discovering that someone is male? Or even upon finding out that the world population is larger than previously thought, because that implies that there are more men who want to fuck her? Clearly she is smart enough to have resolved this paradox on her own, and posing it to him in this situation is simply being verbally aggressive.

"For my face is merely a reflection of my intellect. I can no more leave fingernails unchewed when I contemplate the nature of rationality than grin convincingly when miserable."

She seems to be claiming that her confrontational behavior and unsocial values are inseparable from rationality. Perhaps this is only so clearly false to me because I frequent lesswrong.

"If it was electromagnetism, then even the slightest instability would cause the middle sections to fly out and plummet to the ground... By the end of class, it wasn't only sapphire donut-holes that had broken loose in my mind and fallen into a new equilibrium. I never was bat-mitzvahed."

This seems to show an incredible lack of creativity (or dare I say it, intelligence), that she would be unable to come up with a plausible way in which an engineer (never mind a supernatural deity) could fix a piece of rock to appear to be floating in the hole in a secure way. It's also incredible that she would not catch onto the whole paradox of omnipotence long before this, a paradox with a lot more substance.

"he eventual outcome would most likely be a compromise, dependent, for instance, on whether the computations needed to conceal one's rationality are inherently harder than those needed to detect such concealment."

Whoah, whoah, since when did cheating and catching it become a race of computation? Maybe an arms race of finding and concealing evidence, but when does computational complexity enter the picture? Second of all, the whole section about the Darwinian arms race makes the (extremely common) mistake of conflating evolutionary "goals" and individual desires. There is a difference between an action being evolutionarily advantageous, and an individual wanting to do it. Never mind the whole confusion about the nature of an individual human's goals (see http://lesswrong.com/lw/6ha/the_blueminimizing_robot/).

One side point is that the way she presents it ("Emotions are the mechanisms by which reason, when it pays to do so, cripples itself") is essentially presenting the situation as Newcomb's Paradox, and claiming that emotions are the solution, since her idea of "rationality" can't solve it on its own.

"By contrast, Type-1 thinking is concerned with the truth about which beliefs are most advantageous to hold."

But wait... the example given is not about which beliefs are most advantageous to hold... it's about which beliefs it's most advantageous to act like you hold. In fact, if you examine all of the further Type-X levels, you realize that they all collapse down to the same level. Suppose there is a button in front of you that you can press (or not press). How could it be beneficial to believe that you should push the button, but not beneficial to push the button? Barring, of course, supercomputer Omegas which can read your mind. You're not a computer. You can't get a core dump of your mind which will show a clearly structured hierarchy of thoughts. There's no distinction to the outside world between your different levels of recursive thoughts.

I suppose this bothered me a lot more before I realized this was a piece of fiction and that the writer was a paranoid schizophrenic (the former applying to most else of what I am saying).

"Ah, yet is not dancing merely a vertical expression of a horizontal desire?"

No, certainly not merely. Too bad Elliot lacked the opportunity (and probably the quickness of tongue) to respond.

"But perplexities abound: can I reason that the number of humans who will live after me is probably not much greater than the number who have lived before, and that therefore, taking population growth into account, humanity faces imminent extinction?..."

Because I am overly negative in this post, I thought I'd point out the above section, which I found especially interesting.

But the whole "Flowers for Algernon" ending seemed a bit extreme...and out of place.

Replies from: MugaSofer, Caperu_Wesperizzon
comment by MugaSofer · 2013-01-10T11:30:03.312Z · LW(p) · GW(p)

she can conclude with >75% certainty that any male wants to fuck her.

... she can? Really? That seems pretty damn high for something as variable as taste in partners.

EDIT: wait, that's a reference to how many guys on a university campus will accept offers of one night stands, right? It's still too high, or too general.

Replies from: jeremysalwen
comment by jeremysalwen · 2013-01-12T07:39:44.869Z · LW(p) · GW(p)

It's also irrelevant to the point I was making. You can point to different studies giving different percentages, but however you slice it a significant portion of the men she interacts with would have sex with her if she offered. So maybe 75% is only true for a certain demographic, but replace it with 10% for another demographic and it doesn't make a difference.

Replies from: MugaSofer, liliet-b
comment by MugaSofer · 2013-01-13T10:32:35.833Z · LW(p) · GW(p)

Oh, it certainly doesn't affect your point. I agree with your point completely. I was just nitpicking the numbers.

comment by Liliet B (liliet-b) · 2019-12-07T13:51:51.174Z · LW(p) · GW(p)

It does affect your point.

comment by Caperu_Wesperizzon · 2022-08-30T11:31:51.444Z · LW(p) · GW(p)

I've promised to shut up in the comments to the other post, but since that story's been brought up here, too ...

The real question is why should she react with revulsion if he said he wanted to fuck her? The revulsion is a response to the tone of the message, not to the implications one can draw from it.

Is it, though? Is there any possible tone that would make it acceptable?

After all, she can conclude with >75% certainty that any male wants to fuck her. Why doesn't she show revulsion simply upon discovering that someone is male? Or even upon finding out that the world population is larger than previously thought, because that implies that there are more men who want to fuck her?

So the message is redundant. Therefore, the appropriate way to express it is to say nothing at all. Anything else, regardless of its tone, forces her to pay needless attention to an obvious fact and is therefore an aggression. Especially if the speaker is not so attractive that considering potential partners of his attractiveness level is actually worth her time. Especially if he's not just insufficiently attractive, but net repulsive, i.e., she'd rather not have sex ever again than have it with him. Of course, a nerdier and less sporty male classmate would be even more repulsive.

Clearly she is smart enough to have resolved this paradox on her own, and posing it to him in this situation is simply being verbally aggressive.

Or a way to test him, and he obviously failed.

No, certainly not merely.

I wonder what counts as not merely.

But the whole "Flowers for Algernon" ending seemed a bit extreme...and out of place.

I didn't even realize it was supposed to be a horror story. She basically did what should have been expected from biology: she chose a high-quality mate who can afford to profess irrational nonsense on the handicap principle, and will most likely breed with him and be happy. It's only sad to those who would like her to be prevented from doing what she wants, for whatever selfish reasons.

comment by [deleted] · 2013-01-09T23:57:55.803Z · LW(p) · GW(p)

This post and the linked story scared the heck out of me. Thanks for the thought-provoking material.

comment by MugaSofer · 2013-01-11T10:07:20.544Z · LW(p) · GW(p)

I suspect the vast majority of Overcoming Bias readers could not achieve the "happiness of stupidity" if they tried. That way is closed to you. You can never achieve that degree of ignorance, you cannot forget what you know, you cannot unsee what you see.

The happiness of stupidity is closed to you. You will never have it short of actual brain damage, and maybe not even then. You should wonder, I think, whether the happiness of stupidity is optimal—if it is the most happiness that a human can aspire to—but it matters not. That way is closed to you, if it was ever open.

All that is left to you now is to aspire to such happiness as a rationalist can achieve.

So, to be clear, you don't think that such neurohacking as presented in the story is possible?

That said, I think you've found a pretty convincing argument that we shouldn't accept the tradeoff, even if it's available. That is one scary piece of writing.

comment by Algernoq · 2014-08-19T02:02:30.321Z · LW(p) · GW(p)

Relevant: Paul Graham, Why Nerds are Unpopular

Paul Graham argues that a nerd is anyone not primarily focused on popularity, and that nerds lose the competitive and zero-sum game of popularity to those who aren't distracted by things like studying. After nerds enter the real world, however, they can form their own special-interest communities and often do very well.

Regarding Aaronson's piece, ditziness as signaling makes sense. However, the protagonist failed to see other options: she could have "won" by making the first moves to date an attractive but passive/malleable and socially clueless boy. She could have really "won" by stringing along several passive/malleable/clueless boys. Instead, she sold her soul to stay with the next random guy who asked her out after her "realization", because being alone was more painful. She didn't realize that her understanding of evolutionary theory and rationality failed to make up for her lack of domain knowledge about dating/relationships.

Replies from: Caperu_Wesperizzon
comment by Caperu_Wesperizzon · 2022-08-30T11:47:47.551Z · LW(p) · GW(p)

she could have "won" by making the first moves to date an attractive but passive/malleable and socially clueless boy.

Assuming one of those could even begin to compete with a jock, which I greatly doubt.

She could have really "won" by stringing along several passive/malleable/clueless boys.

That could work if she gets them to support her while she cheats on them with Elliot and he's her children's biological father, yeah.

comment by TGGP4 · 2007-09-14T20:46:07.000Z · LW(p) · GW(p)

"believing you're happy" and "in fact happy" strike me as distinctions without distinction. How are they falsifiable?

Replies from: Acidmind
comment by Acidmind · 2012-08-20T10:45:55.250Z · LW(p) · GW(p)

By comparing a written self-evaluation with serotonin and dopamine levels in one's brain, perhaps?

Replies from: hannahelisabeth
comment by hannahelisabeth · 2012-11-11T22:19:08.118Z · LW(p) · GW(p)

How would you calibrate a brain scan machine to happiness except by comparing it to self-evaluated happiness? You only know that certain neural pathways correspond to happiness because people report being happy while these pathways are activated. If someone had different brain circuitry (like, say, someone born with only half a brain), you wouldn't be able to use this metric except by first seeing how their brain pattern corresponded to their self-reported happiness. It seems to me that happiness simply is the perception of happiness. There is no difference between "believing you're happy" and "being happy." You can't be secretly happy or unhappy and not know it, 'cause that wouldn't constitute happiness.

Replies from: Peterdjones, Kindly
comment by Peterdjones · 2012-11-11T22:43:48.371Z · LW(p) · GW(p)

There's no self-deception, then?

Replies from: hannahelisabeth
comment by hannahelisabeth · 2012-11-12T08:40:18.267Z · LW(p) · GW(p)

Only retroactively. Our memories are easy to corrupt. But no, I don't think you can be happy or unhappy at any given moment and simultaneously believe the opposite is true. There's probably room for the whole "belief in belief" thing here, though. That is, you could want to believe you're happy when you're not, and could maybe even convince yourself that you had convinced yourself that you were happy, but I don't think you'd actually believe it.

Replies from: Peterdjones
comment by Peterdjones · 2012-11-12T10:18:40.758Z · LW(p) · GW(p)

You haven't given any evidence for those claims. At one time it was believed that minds were indestructible, atomic entities, but now that we know we have billions of neurons, there is plenty of scope for one neuronal cohort to believe or feel things that another does not.

Replies from: hannahelisabeth
comment by hannahelisabeth · 2012-11-13T15:48:31.703Z · LW(p) · GW(p)

Sure, that's true. I suppose you could have a split-brain person who is happy in one hemisphere and not in the other, or some such type of situation. I guess it just depends on what you're looking for when you ask "is someone happy?" If you want a subjective feeling, then self-report data will be reliable. If you're looking for specific physiological states or such, then self-report data may not be necessary, and may even contradict your findings. But it seems suspect to me that you would call it happiness if it did not correspond to a subjective feeling of happiness.

comment by Kindly · 2012-11-12T01:54:59.097Z · LW(p) · GW(p)

It's hard to be mistaken about how happy you are at the precise moment you're asked the question (you might have trouble reporting exactly how happy you are, but that's different). However, if you want to know how happy you've been over the past month, for example, it's possible to be wrong about that; you could be selectively remembering times you were more or less happy than average.

Replies from: hannahelisabeth
comment by hannahelisabeth · 2012-11-12T08:37:43.701Z · LW(p) · GW(p)

True. Still, the method of measuring serotonin and dopamine levels would offer no benefit over a self-evaluation, since you can't implement it retroactively.

comment by Tom_Breton · 2007-09-14T20:51:22.000Z · LW(p) · GW(p)

What if self-deception helps us be happy? What if just running out and overcoming bias will make us - gasp! - unhappy?

You are aware, I'm sure, of studies that connect depression and freedom from bias, notably overconfidence in one's ability to control outcomes.

You've already given one answer: to deliberately choose to believe what our best judgement tells us isn't so would be lunacy. Many people are psychologically able to fool themselves subtly, but fewer are able to deliberately, knowingly fool themselves.

Another answer is that even though depression leads to freedom from some biases and illusions, the converse doesn't seem to apply. Overcoming bias doesn't seem to lead to depression. I don't get the impression that a disproportionate number of people on this list are depressed. In my own experience, losing illusions doesn't make me feel depressed. Even if the illusion promised something desirable, I think what I have usually felt was more like intellectual relief, "So that's why (whatever was promised) never seemed to work."

Replies from: FrancesH, hannahelisabeth
comment by FrancesH · 2010-12-04T20:55:01.781Z · LW(p) · GW(p)

Agreed. I always feel profoundly relieved and even moderately triumphant.

Replies from: Acidmind
comment by Acidmind · 2012-08-20T10:51:19.417Z · LW(p) · GW(p)

I can even experience a slight stroke of euphoric lunacy upon the shattering of my delusions. Somehow the world seems to burn brighter without the blurry lenses that biases provide.

comment by hannahelisabeth · 2012-11-11T22:30:09.588Z · LW(p) · GW(p)

I'd heard of the connection between depression and more accurate perceptions (notably, more accurate predictions due to less overconfidence), but I wasn't aware of the causal direction. It had been portrayed to me as being that the improved perception of reality was the cause of the depression. Or maybe I just mistakenly inferred it and didn't notice. I didn't know it actually went the other way, though now that I think about it, that actually makes a lot of sense.

Personally, I find that improved map-territory correspondence leads to more happiness, at least the improved rationality which results from learning Rational Emotive Behavior Therapy. It's not just losing illusions that helps. It's better understanding yourself, better understanding what is actually causing your emotions, and realizing that you have a more internal locus of control rather than external regarding your emotions. It's liberating to be able to stop an emotional reaction in its tracks, analyze it, recognize it as following from an irrational belief, and consequently substitute a rational emotion for the irrational one. It helps especially with anger and anxiety, as those have a tendency to result from irrational, dogmatic beliefs.

comment by pdf23ds · 2007-09-14T21:04:51.000Z · LW(p) · GW(p)

Depression is specifically linked to reducing overconfidence. People more accurately assess their own abilities (and perhaps others' abilities as well). I'm not aware that it's linked to decreasing other biases.

comment by g · 2007-09-14T21:20:41.000Z · LW(p) · GW(p)

"How happy is the moron: / He doesn't give a damn. / I wish I were a moron. / -- My God, perhaps I am!"

Or, in other words, wanting to be stupid is itself a form of stupidity.

comment by James_Bach · 2007-09-14T21:29:48.000Z · LW(p) · GW(p)

I'm pleased to say that, through a great deal of study and practice, I have learned how to unlearn things that I know. This is called skepticism. A key to it is the ability to imagine plausible alternatives to whatever is believed. Descartes is famous for developing this idea, although he was constrained by his society from completely embracing it. Pyrrho and Sextus Empiricus developed this idea, but their community was persecuted and destroyed by the Christians, too.

Skepticism is not opposed to rationality, but neither does it accept that a rationally derived solution to a problem is necessarily the best solution (unless you define rationality as whatever leads to the best solution, in which case you have to abandon the notion of a rational methodology).

My wife is an ongoing experiment and example for me, because she seems to live her life almost entirely without rationality and critical thinking as I recognize it. She lives instead by pattern matching and by the process of comparing real and anticipated feelings. You feel superior to her. Well, she feels superior to you. Is there a non-biased process that can decide who is right? Sure there is: mutation and natural selection. My wife is the product of billions of years of evolution, as are you. So, it seems to be a tie...

I like being "smart" and "analytical". It's my kind of game. I find symbolic logic fascinating. I write software using my logical mind. I enjoyed reading your wonderful tutorial on Bayesian reasoning, though I already knew the material, having read the Cartoon Guide to Statistics and the works of Tversky and Kahneman, years ago. But not since 1920 or so has it been possible to make a fully rational case for living a fully rational life. To do that you have to base your reasoning on premises, and that leads to the infinite regress problem. You have to map your premises to reality, but you don't have direct access to reality.

I'm not attacking rationality. I love it. But why be biased in favor of it? Why not just do what works for you and leave it at that?

Replies from: adamisom
comment by adamisom · 2011-07-18T14:56:57.903Z · LW(p) · GW(p)

Because being rational isn't just something fun to play with. It aims to bring your beliefs and actions into correspondence with reality, which will eventually catch up with you. Nothing you've said here indicates that you have actually read this blog.

Replies from: Desrtopa
comment by Desrtopa · 2011-07-18T15:08:38.916Z · LW(p) · GW(p)

To be fair, this comment was made before most of the blog had been written.

comment by Tiiba2 · 2007-09-14T22:19:09.000Z · LW(p) · GW(p)

This thing about depressed people being unbiased makes no sense to me. Maybe they're not overconfident, but aren't they underconfident instead? I'd find it pretty surprising if a mental illness was correlated with common sense.

Anyway, perhaps the key to being rational and happy is suppressing not facts, but fear of them. No, you can't have a pony. Get over it.

Replies from: hannahelisabeth
comment by hannahelisabeth · 2012-11-11T22:37:32.885Z · LW(p) · GW(p)

I think it's not underconfidence, because our overconfidence is so high that it really is hard to be pessimistic enough to match reality. Depressed people seem to have just enough pessimism to compensate (but not overcompensate) for this bias. I don't think that necessarily makes them have more common sense. Even just in terms of being more realistic, this is only one bias that they compensate for. It's not like depression magically cures any of the other biases.

Depressed people also have a tendency to have an external locus of control, and that is not necessarily rational. You may not be able to control the situations you're in, but it's often the case that your actions do have a significant impact on them, so believing that you have very little or no control is often not rational.

comment by g · 2007-09-14T22:38:07.000Z · LW(p) · GW(p)

Tiiba: "makes no sense" and "would be surprising" are very different things, and the former is excessive for the claim about depressed people. The level of confidence that's optimal for making correct predictions about the world could be much lower than the level that's optimal for living a happy life. Do you have some way of knowing that it isn't?

(Let me forestall one counterargument by remarking that evolution is not in the business of maximizing our happiness.)

comment by paul3 · 2007-09-14T23:33:26.000Z · LW(p) · GW(p)

This post strikes me as being pretty arrogant. Actually the whole blog tends in that direction, but this post especially, where the author finally makes clear the dichotomy between the readers of this blog (the uber-rationalists) and everyone else (the stupid).

When your worldview causes you to believe that everyone who is not single-mindedly pursuing your worldview is stupid, I think you should treat that as a warning sign of bias. Even if your worldview is about overcoming bias.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-09-14T23:39:31.000Z · LW(p) · GW(p)

Um, there are readers of this blog, and there are people who enjoy the "happiness of stupidity" (which is not the same as just having a low IQ; it involves other personality traits as well). I don't think there's much overlap between those two groups. But they are far from being the only two groups in the world, and there is no dichotomy between them.

Replies from: Lethalmud
comment by Lethalmud · 2013-07-04T10:34:57.335Z · LW(p) · GW(p)

This is interesting. When I first discovered LW, I was reading The Praise of Folly by Erasmus. He argues, among other things, that all emotions and feelings that make life worthwhile are inherently embedded in stupidity. Love, friendship, optimism, and happiness require foolishness to work. Now, it is very hard to compare a sixteenth-century satirical piece with a current rational argument, but I have observed that intelligence and stupidity don't seem to be mutually exclusive. From where comes your assumption that intelligent, rational people can't be stupid? Emotions don't tend to be rational, and under the force of a strong one like love even the most intelligent and rational person can turn into an optimistic fool, sure that their loved one is infinitely more trustworthy than the average human, and that statistics on adultery don't apply in this case. Should you try to overcome the bias of strong emotions? Can you overcome it at all? I have never seen someone immune to it. So maybe the happiness of stupidity is still available to all of us.

comment by Hopefully_Anonymous · 2007-09-14T23:41:55.000Z · LW(p) · GW(p)

My understanding is that happiness is a product of biochemistry and neuroanatomy, and doesn't have to inherently correlate with any knowledge, experience, or heuristic.

Replies from: Davidmanheim
comment by Davidmanheim · 2011-01-20T03:46:45.273Z · LW(p) · GW(p)

First, it makes no sense to claim that there is no connection between experiences and biochemistry; clearly some experiences cause certain biochemical reactions. Eating or sleeping have clear neurochemical results. It doesn't need to "inherently correlate"; it certainly does, however, correlate. The logic behind it may be obscured, but that does not imply that the correlations cannot be used to test and establish causality. That's why we attempt to better approximate rationality: to find how reality does and doesn't work.

To answer the substantive point, knowledge, experience, and heuristics are emergent properties of biochemistry and neuroanatomy; of course there is a relationship between the substrate and the emergent properties. The precise nature of the interaction in a complex system can be deduced either through correlation of different systems and the behavior of the substrate, in the way gross neuroimaging locates the proximate location of whatever stimulus is being studied, or better, a full understanding of how the system works, which we do not have.

With a full understanding, we could discuss the necessary or inherent correlation, but until then, we can reasonably discuss only the actual behavior of the system. So the question of whether certain knowledge or experiences cause happiness is a reasonable one.

Replies from: bigjeff5
comment by bigjeff5 · 2011-02-28T02:54:49.123Z · LW(p) · GW(p)

My understanding is that happiness is a product of biochemistry and neuroanatomy, and doesn't have to inherently correlate with any knowledge, experience, or heuristic.

Hopefully Anonymous never claimed there was no connection between experiences and biochemistry, only that the two weren't inherently linked.

If they were inherently linked, then you could not have happiness without certain experiences, and those same experiences would always increase your happiness. Personal experience and the fact that clinical depression exists tells me this cannot be true. The fact that a chemical imbalance alone can eliminate your happiness completely regardless of what your actual experience may be is proof that happiness is primarily a function of biochemistry.

The fact that certain experiences make us more happy shows that experiences can influence our biochemistry, but the two are most certainly not inherently linked.

comment by Ryan_Holiday · 2007-09-14T23:43:18.000Z · LW(p) · GW(p)

Does having an optimistic explanatory style (i.e., delusional optimism) lead to reduced rates of depression and increased happiness?

http://www.psych.nyu.edu/oettingen/OETTINGEN1995EXPLANATORY.PDF

comment by Michael4 · 2007-09-15T00:58:30.000Z · LW(p) · GW(p)

"Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen."

Have you talked to any religious people lately? "Oh, the tornado ripped my neighbors house off the foundations, but we were spared. I guess God was looking out for us!"

Could anyone say that without willfully blinding themselves? Do they really think they are better people than their neighbors, and that God moved the tornado away from their house? Yet you hear stuff like this all the time. And I think they really believe it.

The ability to delude ourselves seems to be one of our main survival traits. Rational people would never take the stupid chances that result in progress. Evolution has favored a species that buys lottery tickets.

Replies from: FiveMuru
comment by FiveMuru · 2018-09-14T19:04:05.499Z · LW(p) · GW(p)

One can have two pictures in a room: one of a man leaping into a sure-death trap, labeled 3/4/11, the other of the same man sitting with his family, labeled 3/5/11. The picture is not the thing itself, and so unless one attempts to force homogeneity on the depictions, there will be no laws of reality wrathfully reaching out and destroying one of the photos.

Have you ever used Prolog? It is easy to tell a program (or a mind) conflicting information. Once something is recorded, it is 'believed,' because no matter what, that data exists on its own as a subset of whatever it was recorded with, along with thoughts regarding that information, and so on.

I mean to say that I struggle with this article at an even more fundamental level than counter-evidence on this point. I tend towards disbelief that anyone can wholly believe anything. Isn't it just that given the associations in their brains they 'tend' to think of a particular set of data first, or that if they do recall the other set of data first there is a good chance they will also recall the association of that data with determinations about its falsification?

If I am correct, perhaps things which we determine are important to us should be carefully re-iterated to ourselves. Furthermore, there may be a great deal of potential for people described as 'irrational' to act rationally when it suits them, if a need inspires a different pattern of thought than their norm.
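A minimal sketch of that Prolog-style point, in Python rather than Prolog (the class and facts here are illustrative, not from the comment): a naive knowledge base records whatever it is told, with no consistency check, so a fact and its contradiction can coexist once asserted.

```python
# A naive belief store: "recording" and "believing" are the same operation,
# and nothing compares a new assertion against what is already stored.

class BeliefStore:
    def __init__(self):
        self.facts = set()

    def assert_fact(self, fact):
        self.facts.add(fact)  # no consistency check, ever

    def believes(self, fact):
        return fact in self.facts

kb = BeliefStore()
kb.assert_fact(("man", "leapt to his death", "3/4/11"))
kb.assert_fact(("man", "sat with his family", "3/5/11"))  # contradicts the above; stored anyway

print(kb.believes(("man", "leapt to his death", "3/4/11")))   # True
print(kb.believes(("man", "sat with his family", "3/5/11")))  # True
```

No law of reality reaches in and deletes one of the two entries; any tension between them exists only if some further process goes looking for it.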

comment by Robin_Hanson2 · 2007-09-15T02:18:50.000Z · LW(p) · GW(p)

Surely, true wisdom would be second-order rationality, choosing when to be rational. ... You can't know the consequences of being biased, until you have already debiased yourself. And then it is too late for self-deception. The other alternative is to choose blindly to remain biased, without any clear idea of the consequences. This is ... willful stupidity.

This isn't quite fair. While it is true that you couldn't know the detailed consequences of being biased, you could make a rational judgment under uncertainty, given what you do know. And it should be possible for your best judgment in this situation to be that you are better off biased. Of course this mere possibility does not mean that you are in fact better off being biased.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-09-15T03:53:25.000Z · LW(p) · GW(p)

While it is true that you couldn't know the detailed consequences of being biased, you could make a rational judgment under uncertainty, given what you do know.

Yes, but for it to be a rational judgment under uncertainty, you would have to take into account the unknown unknowns, some of which may be Black Swans (where rare events account for a significant fraction of the total weight), plus such well-known biases as overconfidence and optimism.  Think of all that worrying you'll have to do... maybe you should just relax...

My own life experience suggests that any black box should be assumed to contain a Black Swan. (Or to be precise, a substantial probability of such, rather than probability 1.0.)

comment by Rob_Spear · 2007-09-15T05:52:59.000Z · LW(p) · GW(p)

State legitimacy is similarly based on such self-deception, whether it uses the traditional "'cos God says so" approach or the more modern "'cos we won a popularity contest" idea: in neither circumstance is there any real reason why people in general should act as if the state has the right to make laws and manage people, and yet it does, apparently to the general good unless you happen to be a radical libertarian.

Surely this is the same as the happiness case: by having most people in a nation sharing the delusional belief in the legitimacy of the state, the nation as a whole benefits.

comment by Robin_Hanson2 · 2007-09-15T08:25:36.000Z · LW(p) · GW(p)

Eliezer, we are in essence talking about a value of info calculation. Yes, such a calculated info value rises with rare important things you might know if you had the info. But even so it is not guaranteed that info will be worth the cost. Similarly, it is not guaranteed that our choosing to avoid bias will be worth the costs.

It seems to me simpler to just say that, given our purposes, we judge overcoming our biases to in fact be cost-effective on the topics we emphasize here. The strongest argument for that seems to me that we emphasize topics where our evolved judgments about when we can safely be biased are the least likely to be reliable guides to social, as opposed to personal, value.
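A toy version of the value-of-info calculation being discussed (a sketch; every number below is assumed): information is worth buying only when the expected gain from acting per-state exceeds its cost, and a rare black-swan outcome can dominate the sum.

```python
# Expected value of (perfect) information about whether debiasing helps.
# Two world states; all probabilities and utilities are made up.

p_good = 0.9            # P(debiasing is net-positive)         (assumed)
u_debias_good = 10.0    # utility of debiasing when it helps   (assumed)
u_debias_bad = -100.0   # rare black swan: debiasing backfires (assumed)
u_biased = 0.0          # baseline utility of staying biased   (assumed)
cost_of_info = 2.0      # effort of finding out the true state (assumed)

# Best blind policy: commit to one action regardless of state.
eu_debias_blind = p_good * u_debias_good + (1 - p_good) * u_debias_bad  # -1.0
eu_blind = max(eu_debias_blind, u_biased)                               #  0.0

# With perfect info, pick the better action in each state.
eu_informed = p_good * max(u_debias_good, u_biased) \
    + (1 - p_good) * max(u_debias_bad, u_biased)                        #  9.0

value_of_info = eu_informed - eu_blind                                  #  9.0
print("worth it" if value_of_info > cost_of_info else "not worth it")
```

With these numbers the info is worth it; shrink the stakes or raise the cost and it isn't, which is the point that neither debiasing nor investigating whether to debias is guaranteed to be cost-effective.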

comment by J_Thomas · 2007-09-15T13:04:04.000Z · LW(p) · GW(p)

...you will think to yourself, "And now, I will irrationally believe that I will win the lottery, in order to make myself happy." But we do not have such direct control over our beliefs. You cannot make yourself believe the sky is green by an act of will.

In my experience, this is not true.

My father was a dentist, and when I was 7 he learned hypnosis to use to anesthetise his patients. Of course he practiced on me while he was learning. (As it turned out, he did successful anesthesia with it for a few years before people started spreading stories that hypnosis was dangerous mind-control and he quit.)

With posthypnotic suggestion people can easily believe things that they have no reason to believe, remember things they did not experience, and ignore their senses up to a point. I've done it. It all feels real.

I learned to hypnotise people a little, and I learned how to do it on myself. It certainly can be done. You do have that control over your beliefs, if you're willing to use it.

Which is not to say it's a good idea. IME the main time it's useful to make yourself believe something is when you have nothing to lose by burning your bridges, when you lose everything anyway if the belief is wrong. Then you might as well believe it wholeheartedly.

I've read that interest in hypnosis has something like an eleven-year cycle. People start to think there's something interesting there. They start studying it, and get some fascinating results that look in some ways powerful. Then as they keep studying they find that all the unexpected things people can do under hypnosis they can also do without hypnosis. And then they start to see that a lot of people are basically walking around hypnotised a lot of the time. They start to wonder what exactly they're studying, and they quit, and after the subject lies fallow awhile, more people get interested and it starts again.

Basically all it takes for hypnosis is that the person relax and listen uncritically. If they're willing to believe what they're told, they're hypnotised. All the peculiar abilities people sometimes display when told to under hypnosis are things they could do but normally don't believe they can do. When they give up their scepticism they go ahead and do their best instead of doubting themselves and hesitating. They're willing to believe delusions for somebody they trust, and when the limits of the trust show up or they get emphatic evidence against the delusion, then they rethink.

You really can deceive yourself. You can build false memories and believe them. You can make the sky look a little green, particularly on a cloudy day, and you can build on that until it looks pretty green -- provided the idea of a green sky doesn't offend you too much. If you believe it's impossible you can't see it. If it's "I didn't know that was even possible, I wonder why it's happening now?" then you can.

These are things that anybody can learn to do. But I mostly agree with your arguments that it is not generally a useful skill. If I get a toothache I don't anesthetise it until after I get my dentist appointment, and if I miss the appointment the pain comes back. Pain is your signal that something is going wrong with your body, and in general it's a bad idea to ignore that.

Replies from: David_Gerard, hannahelisabeth
comment by David_Gerard · 2011-01-19T09:01:12.895Z · LW(p) · GW(p)

False memories are horrifyingly easy to induce. Here is a Scientific American story on the subject from 1997, and here is a scary story from an ex-Scientologist about how to induce false memories using Scientology auditing. "Up to this day, I intellectually know that this story was a fiction written by a friend of mine, but still I have it in vivid memory, as if I was the very person that had experienced it. I actually can't differentiate this memory from any other of my real memories, it still is as valid in my mind as any other memory I have."

Human memories are untrustworthy. This leads to a philosophical dilemma about whether or not to trust your memory, and how much, and what you're supposed to use if you can't trust your memory.

comment by hannahelisabeth · 2012-11-11T23:18:59.673Z · LW(p) · GW(p)

Not everyone can be hypnotized. About a quarter of people can't be hypnotized, according to research at Stanford.

I've tried to be hypnotized before and it didn't work. I think I'm just not capable of making myself that open to suggestion, even though I would have liked to have been hypnotized.

I heard from one of my psychology professors that those on the extreme ends of the IQ spectrum (both high and low) have more trouble being hypnotized, but I'm not sure if this is actually true. The Stanford research showed that hypnotizability wasn't correlated with any personality traits, but I probably wouldn't consider IQ a personality trait.

comment by Hopefully_Anonymous · 2007-09-15T13:33:16.000Z · LW(p) · GW(p)

"Evolution has favored a species that buys lottery tickets."

It's (statistically) bad for the individual but good for the species. Although even buying lottery tickets, or its natural equivalents, is probably deoptimized behavior. I imagine there's some Bayesian-optimized approach for a species and the spectrum of risk-taking its members would engage in. In contrast, I suspect our species performs functionally rather than optimally.

Replies from: tlhonmey
comment by tlhonmey · 2021-01-09T02:02:09.842Z · LW(p) · GW(p)

From a purely probabilistic point of view, laying aside personal skill at anything in particular, the odds of a randomly selected lottery player winning the jackpot are probably better than those of a randomly selected garage or dorm-room tinkerer being the next Jobs or Bezos.

So yeah, you probably can't breed out lottery-playing without also seriously damaging the entrepreneurial spirit.

comment by Tiiba2 · 2007-09-15T15:58:11.000Z · LW(p) · GW(p)

Forgive me, Master Eliezer, for I have sinned.

I have come to realize that inside my mind is not merely self-delusion, but a full-blown case of doublethink. There are two mutually exclusive statements that I simultaneously hold to be unquestionably true. Here they are:

1) I should not cause suffering to others.
2) Only my own happiness really matters.

I can even explain this doublethink. I am naturally selfish, but society makes me be good. I could try to believe that only I matter, and do good things only for the show, but that strategy doesn't work for most people. Being good is too complex.

This doublethink creates interesting effects. When I read about context insensitivity, I wondered if that's really a bias, or just apathy masquerading as concern. I'd probably give the same amount to save five birds as I would to save Atlantis from sinking. Both are social acts.

I also wonder about coherent extrapolated volition. What will it find when it extrapolates us? That we all want the whole pie? That we would gladly exterminate everyone else if we could get away with it?

Replies from: tlhonmey
comment by tlhonmey · 2021-01-09T02:11:38.749Z · LW(p) · GW(p)

Those two aren't necessarily mutually exclusive though.  Only your own happiness really matters to you, but at the same time you are a finite being and so can't do everything for yourself.  So, on average, your best strategy is to recruit allies who are willing to help you attain happiness.

And while there may be short-term advantages to hurting others to benefit yourself, the best long-run strategy is to be the cause of as little suffering as possible because dishing out suffering makes other people less likely to help you with your own goals.

The fact that this strategy does sometimes spectacularly fail doesn't change the fact that it's your best bet.  At least until you get to be old enough that it's time to start cashing in favors because long-term investments are no longer likely to pay off.  And even then it still pays to not alienate your friends.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-09-15T17:04:06.000Z · LW(p) · GW(p)

"Evolution has favored a species that buys lottery tickets."

It's (statistically) bad for the individual but good for the species.

This is a group selection argument. (If you don't know what that means, it's something that biologists use to scare their children.) Evolution does not operate on species. It operates on individuals. Genes that are statistically bad for individuals drop out of the gene pool no matter what they do for the species.

This is an ancient and thoroughly discredited idea. See George Williams's "Adaptation and Natural Selection."

Replies from: kaimialana
comment by kaimialana · 2010-07-26T22:17:44.688Z · LW(p) · GW(p)

Actually, there can be multi-level selection (MLS theory; cf. http://en.wikipedia.org/wiki/Group_selection#Multilevel_selection_theory) when there is competition between groups. In the same sense there is selection between individuals when there is competition between individuals, or the competition between genes popularized by Richard Dawkins.

http://www.americanscientist.org/my_amsci/restricted.aspx?act=pdf&id=16386020847008 is a good primer.

This is the best solution for Darwin's problem of ant colonies, even better than haplodiploidy. I thought I would come out of lurking while reading through the sequences to mention this, since multi-level selection was demonized during the 70s under the name "group selection" due to some overzealous proponents. So, while we would not say "evolution has favored a species that buys lottery tickets", we might hypothesize evolution favors human societies that buy lottery tickets when under competition with other societies that do not (as an example).

Replies from: tlhonmey
comment by tlhonmey · 2021-01-09T02:20:09.444Z · LW(p) · GW(p)

The other possibility, of course, is that the predisposition that causes some people to want to buy lottery tickets also causes some other behaviour that is more beneficial than the ticket-buying is harmful.  Evolution may eventually sort these two out, but changes that subtle can take a long time.

For example, having two copies of the mutation that causes sickle-cell anemia will almost inevitably kill you.  But having just one copy of that mutation makes you practically immune to malaria.  So in an environment where malaria is sufficiently prevalent the immunity of the lucky is a sufficient advantage to offset their higher proportion of dead children.
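For what it's worth, the balance described above has a standard textbook formulation (the overdominance model; the symbols below are the usual conventions, not from the comment). With genotype fitnesses

$$w_{AA} = 1 - s, \qquad w_{AS} = 1, \qquad w_{SS} = 1 - t,$$

selection holds the sickle allele at the stable equilibrium frequency

$$\hat{q} = \frac{s}{s + t},$$

so even a nearly lethal homozygote ($t \approx 1$) persists whenever malaria makes $s$ appreciably positive.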

comment by Hopefully_Anonymous · 2007-09-15T17:36:43.000Z · LW(p) · GW(p)

Eliezer, I mentioned behaviors/biases that are statistically bad for the individual, not genes. Also, I'm interested in your take on the idea that the existence of humans with a range of different biases can be good for other humans, even if it's not optimal from the perspective of the person with the bias.

comment by Tiiba2 · 2007-09-15T17:38:19.000Z · LW(p) · GW(p)

"When I read about context insensitivity, I wondered if that's really a bias, or just apathy masquerading as concern. I'd probably give the same amount to save five birds as I would to save Atlantis from sinking. Both are social acts."

I want to clarify. I do believe in context insensitivity, but think indifference was also a factor in the donation case.

comment by J_Thomas · 2007-09-15T18:51:06.000Z · LW(p) · GW(p)

Genes that are bad for many of the individuals that carry them but that have large jackpots can be selected. As for how you tell whether the occasional large jackpot makes up for the common failure, it takes a long time to tell.

With lotteries you can judge by the house. They're in business to make money, they have wealth that they got from previous lotteries, it makes sense the odds are against you in the longterm. But that reasoning doesn't work in general.

Human beings who see jackpot events happen will sometimes gamble for long times without winning a jackpot. If they didn't, they couldn't win. They lose cumulatively while they wait. It takes a long time to find out by trial and error whether they win on average or not, and if they don't try long enough they don't find out what the odds really are.

comment by TGGP4 · 2007-09-15T19:34:57.000Z · LW(p) · GW(p)

Eliezer, do you concede that there is no difference between "believing you're happy" and "really being happy"?

HA, I was surprised you stumbled into that one. A good introductory example of how evolution does not optimize at the species level but at the gene level can be found here. It is by Richard Dawkins, who is also known for the term "meme", which is an idea that can be analyzed like a gene. Unless the meme that buying lottery tickets is a good idea is beneficial for those that hold it, we should not expect it to become prevalent even if it benefits the species. You can find other good posts from Razib on "group selection" if you look for them.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-09-15T19:42:13.000Z · LW(p) · GW(p)

Eliezer, do you concede that there is no difference between "believing you're happy" and "really being happy"?

No. There is a difference between believing you love your stepchildren and loving your stepchildren, between believing you're deeply upset about rainforests and being deeply upset about rainforests, and between believing you're happy and being happy.

As soon as you turn happiness into an obligatory sign of spiritual health, a sign of virtue, people will naturally tend to overestimate their happiness.

Falsifiable difference? Put 'em in an fMRI or use other physiological indicators.

Replies from: Peacewise
comment by Peacewise · 2011-10-29T10:55:42.958Z · LW(p) · GW(p)

The TED lecture by Dan Gilbert might cast some light on whether there is a difference between believing you're happy and really being happy.

http://www.ted.com/talks/lang/eng/dan_gilbert_asks_why_are_we_happy.html

Sounds to me like what's being discussed is whether synthetic happiness is the same as happiness. Dan Gilbert argues that they are the same.

Replies from: Peacewise
comment by Peacewise · 2011-10-30T00:14:15.867Z · LW(p) · GW(p)

What's the -1 for please?

Replies from: Alicorn, wedrifid, lessdazed, Unnamed
comment by Alicorn · 2011-10-30T00:52:16.451Z · LW(p) · GW(p)

Please don't ask this for every comment of yours that is downvoted, at least until you can reliably make comments that aren't downvoted. It clutters the recent comment threads.

(ETA: I posted this in response to two of the same query being issued in a row. I don't object to people asking why they were downvoted when it's occasional.)

Replies from: Peacewise, lessdazed
comment by Peacewise · 2011-10-30T01:31:49.222Z · LW(p) · GW(p)

Perhaps you should read the

http://lesswrong.com/lw/2ku/welcome_to_less_wrong_2010/

page, where it is stated

"However, it can feel really irritating to get downvoted, especially if one doesn't know why. It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation. (Sometimes it's the unwritten LW etiquette; we have different norms than other forums.)"

I'm following the etiquette that page suggests.

Replies from: Manfred
comment by Manfred · 2011-10-30T03:27:39.642Z · LW(p) · GW(p)

Goodness, I for one would dislike it if people started doing that all the time (sometimes, it says, which is an apparently informative way of saying "Between 0 and 100% of the time").

The downside of doing it often is that it makes people feel like you're asking for an explanation without putting in any noticeable effort to understand. Writing things that are nice to read generally does take effort. I would recommend only asking if you are genuinely confused after a good sixty seconds of uninterrupted thought on how other people could have perceived your post. And, of course, lurking moar is good advice.

Replies from: Peacewise
comment by Peacewise · 2011-10-30T04:01:41.296Z · LW(p) · GW(p)

Fair enough, Manfred. I respect your feeling of dislike on this position, but I don't think that feeling makes the position rational.

I did put in more than 60 seconds of effort trying to understand why it's a -1, and couldn't come up with something that didn't include my own bias. So I wanted to both understand what the -1 was for and test to see if my inclination is true or not. So far my bias is telling me it's an example of "have a go at the new guy on the block." - I hold only very lightly to this and will enjoy being proven incorrect by having the -1 explained.

It's commonly accepted that the most challenging time for a new group member is their beginning with the group and it's also known that constructive feedback helps with that challenge.

Does a member of a rational group want to provide rational feedback? Observationally, quite a few do not.

If I never (or rarely) question the -1, or never or rarely receive any more feedback than the -1, then I will struggle to understand, or may never understand, what the -1 is for. I consider myself to be intellectually honest in asking "what's wrong with this?", because during the process of writing the post I'm already asking "what's wrong with this?". So if someone with more knowledge gives me a -1, I'd appreciate being informed what's wrong with the post, for in being informed I can implement better self-editing procedures - that is, I can improve my rationality.

Now if they don't have the time to answer that question, ok - I'll consider on my own what the -1 is for (again!), and then it's more likely I'll come up with an answer that has some amount of "reasoning" based upon my own biases. The sequences I've read so far imply that biases are something people should attempt to perceive and challenge, so I believe I am being consistent with the site's inclination towards rationality by asking the question - both to challenge and improve my own understanding, and to do the same for the person who gave me the -1, and indeed for those who witness the exchange.

Replies from: lessdazed
comment by lessdazed · 2011-10-30T04:29:04.817Z · LW(p) · GW(p)

couldn't come up with something that didn't include my own bias.

What does this mean? That you couldn't come up with something that didn't include the other person's being stupid or innately evil?

Does a member of rational group want to provide rational feedback?

Oh, my word!

the -1

That does not represent a systematic negative reaction to your post or even consensus disagreement.

Replies from: Peacewise
comment by Peacewise · 2011-10-30T06:39:37.455Z · LW(p) · GW(p)

I'll work on how to get quotes up on this site, till then...

Lessdazed asks "What does this mean? That you couldn't come up with something that didn't include the other person's being stupid or innately evil?" I've already answered what it means; see the post you replied to.

Thanks for the link "oh, my word!"

"That does not represent a systematic negative reaction to your post or even consensus disagreement" I agree. Each -1 represents only a single persons negative reaction to a post.

comment by lessdazed · 2011-10-30T05:16:59.033Z · LW(p) · GW(p)

I think that asking why a comment was downvoted would be legitimate even more than two times in a row, were the downvoted comments downvoted more than once.

For comments that were only downvoted once, it is not usually a question worth asking. So I agree with the literal reading of the original "don't ask this for every comment of yours that is downvoted" more than the clarification.

comment by wedrifid · 2011-10-30T03:43:41.504Z · LW(p) · GW(p)

I really can't figure that out myself. The comment doesn't seem to be annoying, irrelevant, rude or stupid. (Dan Gilbert's argument is wrong all the same.)

comment by lessdazed · 2011-10-30T04:15:49.613Z · LW(p) · GW(p)

It's probably because Gilbert conflates happiness and utility.

comment by Unnamed · 2011-10-30T05:39:32.331Z · LW(p) · GW(p)

I didn't downvote, but since the post is mostly just a link to a video my guess is that it's somebody signaling that the video isn't worth watching.

When the main content of a comment is a link, votes get used to indicate whether the link is worth following. This is especially relevant when the link is to a video, which involves a large time commitment as it is not skimmable. If the content of the video doesn't justify the time commitment, then downvotes tell other readers not to waste their time on the link (and warn the poster not to waste people's time with such links).

Replies from: lessdazed, Peacewise
comment by lessdazed · 2011-10-30T06:10:22.200Z · LW(p) · GW(p)

In my opinion, it's worth watching as the best presentation of a wrong idea that doesn't attempt to engage the correct one. It's also worth watching because it compiles interesting true facts and merely draws wrong conclusions from them, though correct conclusions would also be interesting, and some of his intermediate conclusions are fine.

comment by Peacewise · 2011-10-30T06:31:07.750Z · LW(p) · GW(p)

Thanks Unnamed for giving some time to my request. Your guess is as good as mine.

The anonymity of the -1 function permits someone to spend about 2 seconds expressing their disapproval of a post; it facilitates the expression of disapproval without facilitating any explanation of how and why the disapproval exists.

It provides little to nothing towards the site's aim: "The Less Wrong community aims to gain expertise in how human brains think and decide, so that we can do so more successfully."

I'm being consistent with that aim by asking "what's the -1 for please?" - i.e. I ask them how they came to their conclusion; I seek to gain knowledge about how they think, so that I can do so more successfully.

The -1 function stands against some of the virtues of rationality as stated on the website. There is no humility in giving a -1; instead it's arrogant. There is no argument in a -1; instead it provides a way to avoid argument whilst still expressing disapproval.

Giving a -1 shows no perfectionism either, in that it doesn't encourage the person giving it to hold themselves to as high a standard as they can - instead it encourages them to go with their gut instinct and just hit that button. -1!

Replies from: KPier, Grognor
comment by KPier · 2011-10-30T06:45:12.701Z · LW(p) · GW(p)

I think most people on LessWrong are reluctant to downvote, and very few people do it thoughtlessly. That said, getting downvoted happens, and I think you are too concerned by it. Downvoting simply means "I would like fewer comments like this." When you get several downvotes (three or more, I'd say), you should take this signal seriously; when it's a single downvote, I'd really let it go.

You should also check out, if you haven't yet, Eliezer's explanation for encouraging people to downvote.

comment by Grognor · 2011-10-30T06:46:25.230Z · LW(p) · GW(p)

I downvoted your comment, and here is why.

Less Wrong is, compared to the rest of the internet, extremely troll-free. But trolls still show up, you see, and when they do, the only acceptable response is to vote them down. Not to reply, just to vote down.

Not every comment is worth seeing. Those that aren't worth seeing definitely need to be downvoted. But that's not the only reason to downvote a comment.

I downvote when:

  • a comment is just a joke and is not a gutbuster (inappropriate for Less Wrong)
  • someone on the losing side of an argument isn't even trying to update his beliefs
  • a comment is not an attempt at new insight (can be either useless praise or useless contradiction)
  • a user is posting for the sake of karma
  • a comment is employing false premises (as yours is just now)
  • a comment is annoying (you again)

I hope I helped, and I hope you, when downvoted in the future, think about why first before asking for an explanation. It happens to everyone, and there's a good reason it happens to everyone: nobody makes good comments every single time he says something. Nobody!

I think on Less Wrong there is a very strong tendency to give away karma much too freely, which encourages the act of posting with the partial motivation of gaining karma. Even if it's not often the entire motivation, for that reason I wish that personal karma totals were not readily available information.

One must also be wary of too much pacifism. In that post, Eliezer describes beliefs I had grokked long before ever discovering Less Wrong.

Edit: On second thought, I'm retracting that downvote, because you tried to provide new insight.

Replies from: Peacewise, NancyLebovitz, MarkusRamikin
comment by Peacewise · 2011-10-30T06:57:00.634Z · LW(p) · GW(p)

Thanks for giving time to somewhat reveal your thinking.

Would my thinking be correct if I decide you downvoted the Dan Gilbert TED talk post because the comment is annoying?

My thinking is...

It's not a joke, I wasn't losing an argument, I was attempting to provide new insight (that Dan Gilbert has something worthwhile to say on the topic), and I'm not posting for the sake of karma - but how can you guess my intentions on that anyway? The premise (that synthetic happiness is on topic and that Gilbert has something to say on it) is clearly open to interpretation, and I reveal my unsureness of that openly.

That leaves only "it's annoying".

Replies from: Grognor
comment by Grognor · 2011-10-30T07:00:40.039Z · LW(p) · GW(p)

Would my thinking be correct if I decide you downvoted the Dan Gilbert TED talk post because the comment is annoying?

No, that wasn't me.

Also, there was an error in my previous comment, caused by the lack of the "http://" prefix preceding a URL. Apparently this causes hyperlinks to break and delete everything in between them. You should reread the last two paragraphs of it.

comment by NancyLebovitz · 2011-10-30T07:42:00.990Z · LW(p) · GW(p)

I hope I helped, and I hope you, when downvoted in the future, think about why first before asking for an explanation. It happens to everyone, and there's a good reason it happens to everyone: nobody makes good comments every single time he says something. Nobody!

Do you think all downvotes are good downvotes?

Replies from: Grognor
comment by Grognor · 2012-04-07T19:49:54.405Z · LW(p) · GW(p)

No, but I suspect that a higher proportion of downvotes are good ones than upvotes.

Replies from: steven0461
comment by steven0461 · 2012-04-07T20:51:32.449Z · LW(p) · GW(p)

I downvote more than most commenters, but I'm worried about demotivating people and impeding the growth of the community. Do you think the good/bad feelings people get from an upvote/downvote are more relative to other people's karma (in which case there should be no net demotivating effect), or do you think they're more independent of other people's karma?

I agree with you that LW hands out too much free karma on average, but my main gripe is that it's bad at valuing different subjects. In particular, Harry Potter comments are overvalued relative to meaty comments about subjects like singularity strategy.

Another overvalued category of comments is those that many people agree with, but that aren't worth incentivizing because they would have been made anyway.

Two points where I disagree with you on what to downvote are very short comments (I would like to see more of them; they tend to be dense in content) and what you call "signals of pseudo-modesty" (which I think often are genuine and useful expressions of incomplete confidence that improve the tone of the discussion).

Replies from: Grognor
comment by Grognor · 2012-04-07T21:55:48.071Z · LW(p) · GW(p)

Do you think the good/bad feelings people get from an upvote/downvote are more relative to other people's karma [...], or do you think they're more independent of other people's karma?

I have no one's experience to go on but my own, but here was mine: when I first started commenting on LW, I was nervous and afraid of downvotes, and I didn't care at all about other people's karma; I just wanted to not look terribly stupid. Now that I have been here for a long time and made probably many more comments than I should have (for my health), the main things that bother me are when a downvote is unexpected, or when a reply contradicting one of my comments gets more karma than the comment itself, or when my comments seem to me to be just as good as other comments in the thread but get nowhere near as much karma. And also, I'm not entirely sure we want to encourage the type of person who is hypersensitive to downvotes to come here. My gut says we do, but it also says that doing so relaxes our desire to not have bad comments too much, relative to this type of person's actual value. Complex question.

In particular, Harry Potter comments

I know! If total karma were hidden, though, it wouldn't be as much of a problem. People could just weigh against the inflation in 'fun' threads like that. (Ever notice how fun-type threads inflate karma?) I also agree with you that posts in Main should have 3x karma instead of 10x. (I was pretty shocked when I found out it was 10x after writing this; if it had gotten the same score (though it wouldn't have)...)

Two points where I disagree with you on what to downvote are very short comments

This is one place where you and I just seem to have different preferences. Maybe it's because you have an easier time understanding dense material, but I seem to require more words. As an unfair analogy, compare reading the Twelve Virtues of Rationality to the entire Sequences. Many lessons from the latter are in the former, but it's really hard to get them out of the former without the latter. I made a modest effort at this and actually surprised myself at how much I got right, but most people do not do this.

and what you call "signals of pseudo-modesty"

I have since changed my opinion of this slightly; since underconfidence is a rare enough sin, I usually respond to it with a private message of inquiry rather than a downvote (note also that I mentioned this in the context of thinking a comment is looking for karma; I acknowledge how low the base rate is on this[!]).

Edit: Generally, karma has gotten massively inflated lately. Looking at both Top Comments sections, nearly all of the first 600 are from the last five months.

Replies from: steven0461, steven0461
comment by steven0461 · 2012-04-07T22:46:19.863Z · LW(p) · GW(p)

I wonder what would happen if we split the downvote button into "I'm voting you down but please don't feel bad" and "I'm voting you down and you should feel bad".

comment by steven0461 · 2012-04-14T03:14:24.593Z · LW(p) · GW(p)

As of right now, 9 out of the top 10 discussion comments for last week are in the Harry Potter threads.

Replies from: Desrtopa
comment by Desrtopa · 2012-04-14T03:54:53.210Z · LW(p) · GW(p)

And for all that I enjoy discussing HPMoR, I can't say I feel that this deserves to be my most upvoted comment ever.

comment by MarkusRamikin · 2011-10-30T08:40:19.443Z · LW(p) · GW(p)

a user is posting for the sake of karma

I'm curious, if you don't mind elaborating, what sort of posts do you have in mind? It's a little contradictory to me because to get karma, you need to post something people will find worth upvoting...

Replies from: Grognor
comment by Grognor · 2011-10-30T12:49:58.283Z · LW(p) · GW(p)

It goes along a little bit with joke comments, if you follow. I can't know for sure what the motivations behind any given post are (surely some motivation is to receive karma, even on good comments), but strong warning signs that a comment was posted mostly to receive karma include, roughly in order from strongest indicators to weakest:

  • comment is very short
  • comment includes an emoticon
  • comment is intended as humor
  • comment that expresses reasons for having a belief that everyone at Less Wrong already has
  • comment has little actual content compared to number of words
  • (related to previous point) comment has no content except agreeing/disagreeing
  • comment has signals of pseudo-modesty such as, "perhaps", "maybe", "I think", "it seems", "possibly", etc.
  • comment is pure speculation
  • comment is made after previous ones, where an edit to a previous comment would have been appropriate (users cannot upvote a single comment multiple times, but multiple comments by a single author are fair play)
  • comment mentions karma
  • comment speaks in passive voice

Multiple items on this list prime me for down-voting behavior.

Replies from: MarkusRamikin, army1987
comment by MarkusRamikin · 2011-11-01T12:43:46.050Z · LW(p) · GW(p)

Thanks and upvoted. Since reading this subthread (and that post you linked to) I've noted a significant increase in my willingness to downvote, and it's partly because I started noticing more of what you're talking about.

Although... I'm not in any important disagreement with you, I'd rather make it clear that I don't think there's anything shameful about wanting and enjoying karma. After all, the point of karma is that it's supposed to motivate people; else why have a karma system? It's more that, regardless of what motivated a poster to write it, a post with no content (or otherwise not worth seeing) is a bad thing. All the other symptoms you mention just make me pay closer attention to whether a post has meaningful content.

When I put myself in the shoes of someone criticised for making one of those posts, I think it'd feel more fair to be told what was objectively wrong with the post itself, than to just be accused of karma-whoring; what could you possibly say to that, even if the accusation were in error?

comment by A1987dM (army1987) · 2012-04-07T20:43:20.816Z · LW(p) · GW(p)

comment is made after previous ones, where an edit to a previous comment would have been appropriate (users cannot upvote a single comment multiple times, but multiple comments by a single author are fair play)

Editing an old comment won't show up in the Recent Comments either, so by posting a new comment more people will read it.

comment by J_Thomas · 2007-09-15T20:01:21.000Z · LW(p) · GW(p)

Unless the meme that {buying lottery tickets is a good idea} is beneficial for those that hold it, we should not expect it to become prevalent even if it benefits the species.

But it is prevalent. And on average people lose money at it, while the occasional winners tend not to do well.

So it's natural to suppose that the meme for buying lottery tickets is a perversion of some other functional meme.

Here's a way that lotteries could be functional after all, for people in extended families. If you sacrifice and save and start to build up a little capital, you may be accosted by distant relatives in need who have a right to your assistance, and it all drains away. But when you win the lottery, you can go live in some distant place and share only enough to stay in distant good standing. When building up savings is considered immoral, paying 40% on average might not seem so bad for a chance to get some capital anyway.

comment by James_Bach · 2007-09-16T01:36:47.000Z · LW(p) · GW(p)

"Evolution does not operate on species. It operates on individuals. Genes that are statistically bad for individuals drop out of the gene pool no matter what they do for the species."

Imagine a gene that caused 9/10 of the humans who have it to be twice as fertile and attractive as the population that did not have it, while 1/10 of the humans who have it can't reproduce at all. This would be a gene that would serve the species (i.e. the portion of the species that had it), even though it would harm some individuals. Notice that the inability of the 10% to procreate would not harm the prospects of such a gene for the species as a whole. Soon, the whole of the species would have this gene.

Isn't there some theorizing that suggests that homosexuality may be an example of something like this? Perhaps the phenomenon of homosexuality is linked to some wonderful benefit that increases the viability of heterosexuals. Otherwise, wouldn't homosexuals have been "selected out" long ago?

comment by razib · 2007-09-16T05:44:40.000Z · LW(p) · GW(p)

Imagine a gene that caused 9/10 of the humans who have it to be twice as fertile and attractive as the population that did not have it, while 1/10 of the humans who have it can't reproduce at all.

this means that the allele (genetic variant) increases fitness by a factor of 1.8. this is not a "species level" benefit in anything but a tautological way. higher levels of selection or dynamic processes are only interesting if they can not be reduced down to a lower level. e.g., you can increase the fitness of the group by simply increasing the fitness of the individuals which compose the group. this increases the fitness of the group, but it is easily reduced to increasing the fitness of individuals. in other cases you can not decompose the group fitness to individuals, and so there are grounds for saying that the excess fitness which is gained by having a group, or evaluating a group, is something that is "for the good of the group."
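
(Spelling out the arithmetic behind that 1.8 - an editorial check of the numbers, not razib's own words - treating the fitnesses as exactly 2 and 0:

$$\bar{w} = 0.9 \times 2 + 0.1 \times 0 = 1.8$$

so carriers leave 1.8 times as many offspring on average as non-carriers, a straightforwardly individual-level advantage.)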

to use a sports analogy, if you brought together an all-star team you'd get a better team, not because of the team dynamics but because the individual players are so much better. in contrast, there are teams which are very good because of group dynamics, where utility players can specialize in their roles and synergistically perform far better than they might as individuals.

comment by razib · 2007-09-16T05:49:56.000Z · LW(p) · GW(p)

This is an ancient and thoroughly discredited idea. See George Williams's "Adaptation and Natural Selection."

i am generally skeptical of group selectionist arguments, but we are probably on the cusp of a renaissance in this area. it will be spearheaded by e.o. wilson, who has always been a "believer," but who now believes that group selection (or at least multi-level selection) has the empirical and analytical firepower to make a comeback. i am cautiously skeptical, but in the interests of honesty i think that "ancient and thoroughly discredited" is probably a better description for group selection circa 1995 than 2007. most evolutionary biologists are probably pretty skeptical of group selectionist arguments, but in large part it is because the models presented (which tend to avoid the pitfalls of the earlier arguments) are hard to test and seem analytically intractable beyond the simplest formulations.

comment by razib2 · 2007-09-16T06:06:51.000Z · LW(p) · GW(p)

Imagine a gene that caused 9/10 of the humans who have it to be twice as fertile and attractive as the population that did not have it, while 1/10 of the humans who have it can't reproduce at all.

btw, you don't have to imagine. sickle cell is like this. a proportion of the population gets increased benefit from having the gene, and a proportion gets decreased benefit, in the ratio of heterozygotes (those who carry one sickle cell allele and one normal) to homozygotes (those who carry two sickle alleles), i.e., 2pq:q^2. that's not species selection, it's standard balancing selection upon one gene.
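
(For reference, the textbook form of that balancing selection, assuming random mating and Hardy-Weinberg genotype proportions $p^2 : 2pq : q^2$: normalize heterozygote fitness to 1, and let the homozygote fitnesses be $1-s$ for the normal homozygote and $1-t$ for the sickle homozygote. The stable equilibrium frequency of the sickle allele is then

$$q^* = \frac{s}{s+t}$$

so the allele is held at an intermediate frequency rather than fixing or vanishing.)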

comment by Doug_S. · 2007-09-16T06:12:25.000Z · LW(p) · GW(p)

What's wrong with group selection? All you need is for the benefit (to the individual) of being in a group in which trait X is sufficiently common to be sufficiently bigger than the individual benefit of not having trait X... or am I confused?

comment by Michael_Rooney · 2007-09-16T07:25:15.000Z · LW(p) · GW(p)

You know, self-deception has attracted some inquiry already.

comment by g · 2007-09-16T09:41:50.000Z · LW(p) · GW(p)

Doug, what's wrong with group selection is mostly that selection at the individual level works so much faster. If something's harmful to individuals, it's likely to have been wiped out by individual-level selection before it gets the chance to help the group.

It's possible to concoct scenarios where group-level effects win. For instance: some allele has no effect at all when heterozygous, but when homozygous it causes its bearer to become astonishingly altruistic. By the time there's much incidence of homozygosity in any given community, the chances are that the allele is (heterozygously) quite common, and then it's possible that the individual's altruism does more net good than harm to bearers of the allele. This is kin selection rather than group selection really, but on a different scale from the usual.

Or: some allele has a very slight deleterious effect on individual fitness in general-- slight enough that it typically takes, say, 100 generations before natural selection becomes visible over genetic drift. If it then has some group-level effect that prevents rare but group-destroying incidents (say, once every 100 generations someone without it will go nuts and kill everyone around them) then it could be selected for on balance simply because groups where it doesn't happen to get fixed in the population tend to die. Note that making this work is rather dependent on group size.

But it's pretty hard to concoct such scenarios that are actually plausible, and pretty hard to argue that anything in the real world looks much like them.
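
(g's "visible over genetic drift" condition has a standard quantitative form - a textbook rule of thumb, not something g states explicitly: selection with coefficient $s$ dominates drift in a population of effective size $N_e$ roughly when

$$N_e \, |s| \gg 1$$

so for the slight cost in the second scenario to stay effectively invisible, $N_e |s|$ must sit near or below 1, which is part of why the story is so sensitive to group size.)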

comment by J_Thomas · 2007-09-16T13:21:00.000Z · LW(p) · GW(p)

Doug S, G has given a good explanation (except possibly the last sentence which is debatable.) I'll explain again: Selection happens when genes increase in frequency compared to other genes. Since genes always happen inside individuals, a gene that causes its individuals to leave fewer offspring in the population will be selected against, regardless of what it does for the population as a whole.

A gene that results in good stuff for the population but that doesn't result in its own carriers increasing more than others won't increase in the population even though all the individuals in the population would be better off.

You can get by this a little by assuming a population split into breeding groups with limited outbreeding, where a gene that improves the group enough can take over a small group; then when the group gets bigger it splits, and both daughter groups increase compared to other groups, etc. But too much outbreeding would stop it. Something like this may happen in rats and fruit flies etc.; it's too soon to be sure.

There could be specific genetic mechanisms that provide a system to create group selection. Diploidy is a peculiar genetic mechanism, as are sexuality and dominance. There could be others that are less obvious, that benefit the populations that let them operate, and group selection is one of the things they might promote. But that's entirely speculative at this point.

comment by TGGP4 · 2007-09-16T21:19:59.000Z · LW(p) · GW(p)

James Bach, if something has a frequency above 1% and has high fitness costs to those that hold it, it is probably pathogenic rather than genetic. You can find more on that from Greg Cochran at the bottom of this page.

comment by Raw_Power · 2010-10-12T16:32:21.238Z · LW(p) · GW(p)

You know, back in the old days, before I jumped on the Lesswrong train, I would say I willed myself into believing in God. Because whether he existed or not didn't change empirical conclusions: the world could have been created five minutes ago, intelligent design could have happened etc. etc.

But doing that gave me angst and made me uneasy; something kept nagging at me. You know, those beliefs hurt, but it hurt even harder to get them out. I fought every inch for them. But when I lost, it was a relief; it felt like I had won.

comment by Carinthium · 2010-11-14T03:44:27.925Z · LW(p) · GW(p)

As far as I can tell, the weakness of the article is that it assumes one is deciding for oneself. One could decide to help others become irrational (on some issues) if one rationally decides it is best for them.

Replies from: ata
comment by ata · 2010-11-14T05:24:25.970Z · LW(p) · GW(p)

That's an even worse idea.

Replies from: Carinthium
comment by Carinthium · 2010-11-14T05:28:03.444Z · LW(p) · GW(p)

It is if we accept the premise that it is best to be rational- I was pointing out that the article doesn't refute the argument it claims to.

Replies from: ata
comment by ata · 2010-11-14T05:39:52.349Z · LW(p) · GW(p)

I was referring more to the problems with other-optimizing. I'd estimate that it would be pretty dangerous to grant yourself permission to decide what delusions to instill in other people for their own good.

Replies from: Carinthium, Peacewise
comment by Carinthium · 2010-11-14T05:47:15.758Z · LW(p) · GW(p)

Perhaps it's slightly overstating the case to claim that it is best to encourage delusions in others (I didn't know about the article, but having read it, it appears accurate), but it is at least true that if (hypothetically) rationality turns out to be a net loss in general, one could persuade other rationalists to prevent it spreading, and decide not to spread it oneself.

(Or alternately try and prevent it spreading to those likely to lose from it)

comment by Peacewise · 2011-10-29T11:02:28.360Z · LW(p) · GW(p)

Indeed, some believe that Hitler was hysterically blind during World War I, and that during his therapy he was told he couldn't be blind because he had a manifest destiny for greatness (an "auto-suggestion"); this cured his hysterical blindness and changed his life. Something like that, anyway... look where that went.

Sorry, I can't quote the source right now; I'll see if I can dig it up. http://www.scielo.br/pdf/anp/v68n5/v68n5a32.pdf

Replies from: Peacewise
comment by Peacewise · 2011-10-30T00:13:31.101Z · LW(p) · GW(p)

What's the -1 for please?

Replies from: wedrifid
comment by wedrifid · 2011-10-30T03:40:31.943Z · LW(p) · GW(p)

I suspect because the theory is basically loopy.

Replies from: Peacewise
comment by Peacewise · 2011-11-27T04:06:54.669Z · LW(p) · GW(p)

Which theory is "basically loopy"?

That Hitler was hysterically blind? See Eur Arch Psychiatry Clin Neurosci (2007) 257:245, DOI 10.1007/s00406-006-0648-4; supported by http://www.dredmundforster.info/1-edmund-forster-adolf-hitler; supported by "The letters made available to him were exchanged between two prominent American physicians and confirm that Hitler was treated for hysterical amblyopia, the psychiatric or conversion disorder commonly known as hysterical blindness" - http://www.abdn.ac.uk/news/archive-details-10772.php

Or that he was treated for the hysterical blindness with hypnosis? See (2004, October 13). Fuhrer doctor not a shrinking violet. MX (Melbourne, Australia) (1 - Melbourne ed.), 010. Retrieved November 26, 2011, from NewsBank on-line database (Australia's Newspapers); supported by http://books.google.com.au/books?id=TzG26VVP8BMC&pg=PA98&lpg=PA98&dq=Hitler+'found+blind+faith'&source=bl&ots=zsTz8pOFWH&sig=IqpXzj6KjXsP0J7ytQEfaoQ3n6o&hl=en&ei=drLRTqTYK-aoiAeOibXcDg&sa=X&oi=book_result&ct=result&resnum=2&ved=0CCQQ6AEwAQ#v=onepage&q=Hitler%20'found%20blind%20faith'&f=false , or have a look at the literature yourself, and when you struggle to find primary sources, please keep in mind that Hitler actively sought to destroy those sources and actively sought the death of the eyewitnesses as well.

Anyways, regardless of the truth of Hitler's hysterical blindness and his treatment by his psychiatrist instilling a delusion, the concept is supportive of ata's point that... "I'd estimate that it would be pretty dangerous to grant yourself permission to decide what delusions to instill in other people for their own good."

Sorry to bring up such an extreme example in support of ata's point, but I honestly believe that instilling a delusion in another is indeed a very dangerous thing to do, and hence granting oneself permission to do so is also dangerous.

One should also consider Hitler's instilling of a mass delusion in the German people: the propaganda-delusion that the Jews were "sub-human" made the Holocaust possible. In this light, an instilled delusion facilitated the deaths of six million Jews.

One might also consider other examples of delusions instilled by propaganda as dangerous: for example, fundamentalist Muslim propaganda that "all Americans are evil-doers", or Christian propaganda that "all Muslims are fundamentalists", and how these delusions create a world of distrust, hatred and war between two cultures that for the most part laud peace.

Replies from: Zetetic
comment by Zetetic · 2012-01-04T10:21:39.495Z · LW(p) · GW(p)

Anyways, regardless of the truth of Hitler's hysterical blindness and his treatment by his psychiatrist instilling a delusion, the concept is supportive of ata's point that... "I'd estimate that it would be pretty dangerous to grant yourself permission to decide what delusions to instill in other people for their own good."

You're aware that this basically endorses generalization from fictional evidence?

Replies from: Peacewise
comment by Peacewise · 2012-01-04T16:47:31.332Z · LW(p) · GW(p)

Thanks Zetetic for the link to generalization from fictional evidence.

I'm aware that after spending my hard-earned free time providing more evidence than I cared to in support of showing that Hitler was hysterically blind and did receive treatment by auto-suggestion (i.e. an instilled delusion), the "theory" remained basically "loopy" to someone(s) who didn't care to provide any refutation whatsoever.

Since the evidence was treated without respect, and I couldn't be arsed arguing with someone obviously lacking any inclination to discuss the facts' accuracy, I moved on to the more interesting conversational point of instilled delusion by propaganda.

If you would care to look more closely at the situation, you'll find that my post doesn't endorse "generalization from fictional evidence" because the evidence presented isn't fictional.

However if you still want to consider that my words you've quoted in bold as "basically endorses generalization from fictional evidence", that's your call.

I'm still struggling to understand when strict argument and/or more conversational discussion are appropriate on this website. Kind of amusing, in a frustrating way: to present historical evidence in support of a reasonable point about psychology, have it trolled (imo), decide to give the benefit of the doubt and present more detail, then move on - and two months later be hit with a counterargument for moving past the troll bait and staying on the original topic.

Replies from: Zetetic
comment by Zetetic · 2012-01-05T00:28:43.889Z · LW(p) · GW(p)

I'm still struggling to understand when strict argument and/or more conversational discussion are appropriate on this website. Kind of amusing, in a frustrating way: to present historical evidence in support of a reasonable point about psychology, have it trolled (imo), decide to give the benefit of the doubt and present more detail, then move on - and two months later be hit with a counterargument for moving past the troll bait and staying on the original topic.

I'll try to help you out.

I think the standards of evidence are the highest here of any place on the internet, save for maybe some professional groups, and I for one would like to keep it that way. As for the "generalization from fictional evidence" - that wasn't because the evidence you presented was fictional; it was because you said that it didn't matter whether the event actually happened, that the concept is sufficient to support the more general point. That is, by definition, an endorsement of generalization from fictional evidence.

As far as the evidence you did provide:

To be as specific as I can, there are a lot of issues with the sites you linked to - I tend to expect some sort of peer-reviewed meta-analysis of the existing evidence that is well organized and hopefully somewhat up to date, and none of the sources really meets muster. The first one has no bibliography at all. The second is largely a book review with a couple of claims about new evidence; a particularly relevant quote from the author of said book (in the link):

“Hitler himself claimed that the war ended for him when he had to spend weeks in an army hospital after having been blinded by mustard gas. Circumstantial evidence and hearsay, however, have led to the suggestion that Hitler was, in fact, suffering from and treated for psychosomatic blindness. This hypothesis could never be conclusively tested, as Hitler had his medical file destroyed and had his henchmen kill those people with knowledge of the file,” said Dr Weber

That isn't something to use in support of your argument. It's not very good supporting evidence. The last source might be a good first person account or it might be a terrible exaggeration. I would much prefer an analysis from an actual historian with less personal bias because my background knowledge is insufficient for assessing the credibility of the source and what is being said.

Now, if you want to support ata's point, that:

I'd estimate that it would be pretty dangerous to grant yourself permission to decide what delusions to instill in other people for their own good.

Here's some of what I would like to see:

First, find some articles establishing that you can create such an elaborate delusion in a clinical setting - preferably using techniques that would have been available during the time period. To that effect I found a few articles, unfortunately behind paywalls, but the abstracts look promising in that they indicate the feasibility of imparting specific delusions. Unfortunately they're all pretty modern results, and the "imparted delusions" (which as far as I can tell aren't totally established as true delusions, though they may be) all mirror actual delusions that might normally be encountered. The delusion supposedly imparted to Hitler was fairly detailed and unusual - I don't know if this is feasible to impart (if it is feasible at all) as compared to these more mundane cases.

The further claim that Hitler's doctor imparted a delusion lasting for decades that was fairly intricate and cured his Conversion disorder (hysterical blindness) requires an awful lot of evidence - mostly because, beyond the lack of historical evidence, the physical possibility of this is in question. It doesn't look to me like there is much (if any) evidence supporting the feasibility of such a feat of hypnotic suggestion and the links you provided do nothing in the way of establishing otherwise.

If you or anyone else has some strong evidence of the feasibility of imparting a robust, long-term delusion using hypnosis I'd be glad to consider it, but I don't see why I should accept the possibility given that I can't seem to find any evidence for it outside this (possibly false) Hitler anecdote.

I do hope this has been helpful for you, this site still has a very steep learning curve (although I think it has loosened up a bit lately) and community expectations aren't immediately obvious to newcomers. We aren't (as far as I can tell) trying to troll you - we just hold very high expectations for a post that makes a (highly controversial) factual claim. Of course, you might not view the claim as highly controversial, but unless you have some further evidence of the physical possibility of this sort of intricate, long-term hypnosis, it seems like this community might have a somewhat more stringent standard of evidence than you're used to.

Replies from: Peacewise
comment by Peacewise · 2012-01-11T14:05:58.437Z · LW(p) · GW(p)

Thanks Zetetic for giving your time for an in depth reply, much appreciated.

With regards to your request for a peer-reviewed meta-analysis of the existing evidence: well, I reckon you'll find that in Dr David Lewis's book, "The Man Who Invented Hitler", a synopsis of which is provided at the first link posted.

http://www.dredmundforster.info/1-edmund-forster-adolf-hitler

At that link you will find, in the "about" section, that the author Dr Lewis is reputable, with suitable qualifications to discuss the issue of Hitler and hysterical blindness.

"French born Dr David Lewis, a neuropsychologist, best selling author and historical researcher, obtained his doctorate in experimental psychology at the University of Sussex. He later lectured there before quitting to become a full time research and author. He has written widely on the psychology of totalitarianism especially in relation to the rise of Adolf Hitler and National Socialism with articles appearing in such publications as International History and The Criminologist." - the first paragraph at the "about"

http://www.dredmundforster.info/about-dr-david-lewis

This is supported on wikipedia. http://en.wikipedia.org/wiki/David_Lewis_(psychologist)

Now, fair enough, I personally haven't done the meta-analysis, and haven't presented one done by another - however, I have provided the conclusions of research done on the subject by a respectable source.

Since you've requested more information, of a better quality, please have a look through this.

"It is known that Forster treated Hitler with auto-suggestion which allowed Hitler, on November 19th, 1918, a week after the end of the War, to be fully recovered, discharged, and returned to his regiment in Munich2,4." http://www.scielo.br/scielo.php?pid=S0004-282X2010000500032&script=sci_arttext

which has a bibliography that uses the aforementioned Dr Lewis as a reference. I include references 2, 3, and 4 from the last link above, fyi.

"2. Gramary A. The internment of Adolf Hitler at the Hospital of Pasewalk, a case of hysterical blindness? Mental Health 2008;11:47-50. [ Links ]

  1. Dr. Edmund Forster the man who invented Hitler: the making of the Führer. Available at http://www.dredmundforster.info/1-edmund-forster-adolf-hitler (accessed 12/19/2009). [ Links ]
  2. Köpf G. The hysterical blindness of Adolf Hitler: history of a medical. Rev Psiq Clín 2006;33:218-224. [ Links ]"

Now this journal article is particularly interesting, for it provides evidence that supports my belief that Dr Lewis does consider the veracity of Hitler's hysterical blindness: Dr Lewis is used as a source both for Hitler being hysterically blind and for arguments against Hitler's hysterical blindness. I would presume that since Dr Lewis considers both sides, yet holds that Hitler was hysterically blind, Dr Lewis does indeed provide some form of meta-analysis of the situation in his book "The Man Who Invented Hitler" - a review of which was linked.

Now onto the second link... http://www.abdn.ac.uk/news/archive-details-10772.php

Quite right, that is a book review. It's a review of a book authored by Dr Thomas Weber MSt., DPhil (Oxon), FRHistS, Lecturer in Modern European, International, and Global Political History, and Reader in History and Director of the Centre for Global Security and Governance at the University of Aberdeen. Dr Weber also seems to me like another respectable source on the subject in question.

The book in question, I presume, will also provide you with a bibliography and likely more information than either you or I care to examine for ourselves. I put it to you that Dr Weber is a respectable source, and that his account supports Dr Lewis on the issue of Hitler's hysterical blindness and the use of auto-suggestion as a treatment.

Further, you have quoted the following as evidence that the 2nd link in question is inadequate:

“Hitler himself claimed that the war ended for him when he had to spend weeks in an army hospital after having been blinded by mustard gas. Circumstantial evidence and hearsay, however, have led to the suggestion that Hitler was, in fact, suffering from and treated for psychosomatic blindness. This hypothesis could never be conclusively tested, as Hitler had his medical file destroyed and had his henchmen kill those people with knowledge of the file,” said Dr Weber

However, perhaps in your scanning of the 2nd link you did not read the paragraph that follows the above quote. I include it fyi.

The letters made available to him (Dr Weber) were exchanged between two prominent American physicians and confirm that Hitler was treated for hysterical amblyopia, the psychiatric or conversion disorder commonly known as hysterical blindness. This previously unseen evidence is included in the paperback version of Hitler's First War, due out on October 13.

I put it to you that the link is indeed "very good supporting evidence"!

Now onto what Hitler himself said about the occasion...

In Mein Kampf (which most scholars agree cannot be taken as completely factual), Hitler (1925/1999) reports that on the evening of October 13, 1918, gas shells rained on them “all night more or less violently. As early as midnight, a number of us passed out, a few of our comrades forever. Toward morning I, too, was seized with pain which grew worse with every quarter hour, and at seven in the morning I stumbled and tottered back with burning eyes; taking with me my last report of the war. A few hours later, my eyes had turned into glowing coals; it had grown dark around me” (p. 202). During the next month, Hitler stated that the piercing pain in his eyes had diminished and that he could now perceive broad outlines of objects around him. He wrote that he began to believe that he would recover his eyesight well enough to work again but not well enough to be able to draw again. On November 10, Hitler reported that a pastor came to the hospital to announce that Germany would capitulate and that the German fatherland would thus be exposed to “dire oppression.” Hitler reported, “Again everything went black before my eyes; I tottered and groped my way back to the dormitory, threw myself on my bunk, and dug my burning head into my blanket and pillow” (p. 204). copy pasted from http://vanilla47.com/Adolf%20Hitler%20Mein%20Kampf/Understanding%20Madmen%20A%20DSM-IV%20Assessment%20of%20Adolf%20Hitler%20Individual%20Differences%20Research%202007,%20Vol.%205,%20No.%201%20pp.%2030-43.pdf

Mein Kampf, aka Hitler himself, supports that Hitler certainly did suffer blindness during the time period in question. Secondly, of note, Hitler wrote that "again everything went black before my eyes" upon receiving news of Germany's surrender, revealing that he was not blinded by mustard gas, but instead suffered mentally to such an extent that it affected his vision. Also, that Hitler was in hospital at the time the pastor gave the news confirms that he was indeed in hospital, and for blindness.

Are we there yet? Have I provided enough evidence for LW to remove those -1's and start placing them instead upon the "loopy" comment, which obviously involved far less research on the matter than mine? Probably not; newbies, especially outspoken newbs, are always treated more harshly than long-timers - that's just the way of things. Observationally, it seems quite a few members of LW, for all their support of rationality, are prone to the bias known as:

group-serving bias - explaining away outgroup members' positive behaviours; also attributing negative behaviours to their dispositions (while excusing such behaviour by one's own group). (Myers, D. Social Psychology, 10th ed., 2010)

comment by [deleted] · 2010-12-08T03:21:07.472Z · LW(p) · GW(p)

I'm through with truth.

I never had a scientific intuition. In college, I once saw a physics demonstration with a cathode ray tube -- moving a magnet bent the beam of light that showed the path of the electrons. I had never seen electrons before and it occurred to me that I had never really believed in the equations in my physics book; I knew they were the right answers to give on tests, but I wouldn't have expected to see them work.

I'm also missing the ability to estimate. Draw a line on a sheet of paper; put a dot where 75% is. Then check if you got it right. I always get that sort of thing wrong. Arithmetic estimation is even harder. Deciding how to bet in a betting game? Next to impossible.

Whatever mechanism it is that matches theory to reality, mine doesn't work very well. Whatever mechanism derives expectations about the world from probability numbers, mine hardly works at all. This is why I actually can double-think. I can see an idea as logical without believing in it.

A literate person cannot look at a sentence without reading it. But a small child, just learning to read, can look at letters on a page without reading, and has to make an extra effort to read them. In the same way, a bad rationalist can see that an idea is true, without believing it. I can read about electromagnetism and still not expect to see the beam in the cathode ray tube bend. I spent ten years or so thinking "Isn't it odd that the best arguments are on the atheist side?" without once wondering whether I should be an atheist.

Should I break down that barrier? I'm not sure. I'd do it if it would allow me to make money, I think. But not if it came at the cost of some kind of screaming Cthulhu horror.

You know what I really wish I had? Team spirit. Absolute group loyalty. Faith. Patriotism. The sense of being in the right. In Hoc Signo Vinces. I have fleeting glimpses of it but it doesn't last. I want it enough that I keep fantasizing about joining the Army because it might work. I always wanted to be a fanatic, and my brain would never do it. But I'm starting to wonder if that's hackable; I'm sure enough sleep deprivation and ritual would do it.

Replies from: wnoise, JoshuaZ, jimrandomh, wedrifid, jimrandomh, shokwave, TheOtherDave, David_Gerard
comment by wnoise · 2010-12-08T03:30:56.020Z · LW(p) · GW(p)

Why would you expect it to come at the cost of some kind of screaming Cthulhu horror?

Replies from: None
comment by [deleted] · 2010-12-08T03:34:48.679Z · LW(p) · GW(p)

I'm not sure. It's just that if it did I wouldn't go for it.

I know one person who's really well calibrated with probability, due to a lot of practice with poker and finance. When something actually is an x% probability, he actually internalizes it -- he really expects it to happen x% of the time. He's 80% likely to be right about something if he says he has an 80% confidence.

He doesn't seem too bad off. Busy and stressed, yes, but not particularly sad. Cheerful, even.
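
A minimal sketch of what "well calibrated" means operationally here - an editorial illustration with made-up data, not anything he actually computes:

```python
from collections import defaultdict

# (stated confidence, whether the claim turned out true) -- toy data
predictions = [(0.8, True), (0.8, True), (0.8, False), (0.8, True),
               (0.8, True), (0.6, True), (0.6, False), (0.9, True)]

by_confidence = defaultdict(list)
for confidence, outcome in predictions:
    by_confidence[confidence].append(outcome)

for confidence in sorted(by_confidence):
    outcomes = by_confidence[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%}: right {hit_rate:.0%} of the time "
          f"({len(outcomes)} claims)")
```

Being calibrated just means the two percentages in each row roughly match. Poker and finance give fast, repeated feedback of exactly this kind, which is plausibly why practice there helps.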

comment by JoshuaZ · 2010-12-08T03:42:37.532Z · LW(p) · GW(p)

I'm also missing the ability to estimate. Draw a line on a sheet of paper; put a dot where 75% is. Then check if you got it right. I always get that sort of thing wrong. Arithmetic estimation is even harder. Deciding how to bet in a betting game? Next to impossible.

Whatever mechanism it is that matches theory to reality, mine doesn't work very well. Whatever mechanism derives expectations about the world from probability numbers, mine hardly works at all. This is why I actually can double-think. I can see an idea as logical without believing in it.

Congratulations. You're just like most humans.

Replies from: None
comment by [deleted] · 2010-12-08T04:06:47.857Z · LW(p) · GW(p)

Well, then why does he say self-delusion is impossible? It's not only possible, it's usual.

Replies from: JoshuaZ, RobinZ
comment by JoshuaZ · 2010-12-08T04:21:08.571Z · LW(p) · GW(p)

I wasn't talking about that aspect (although I think he's wrong there also) but just about the aspect of not doing a good job at things like estimating or mapping probabilities to reality.

Replies from: None
comment by [deleted] · 2010-12-08T04:29:40.434Z · LW(p) · GW(p)

I think it's really the same thing. Mapping probabilities to reality is sort of the quantitative version of matching degree of belief to amount of evidence.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-12-08T05:01:55.003Z · LW(p) · GW(p)

Possibly taboo self-delusion? I'm not sure that's what he means. Self-delusion in this context seems to mean something closer to deliberately modifying your confidence in a way that isn't based on evidence.

comment by RobinZ · 2010-12-08T22:07:35.269Z · LW(p) · GW(p)

I am under the impression that many of Eliezer Yudkowsky's early sequence posts were written based on (a) theory and (b) experience with general-artificial-intelligence Internet posters. It's entirely possible that his is a correct deduction only on that weird WEIRD group.

comment by jimrandomh · 2010-12-08T04:19:28.410Z · LW(p) · GW(p)

I never had a scientific intuition. In college, I once saw a physics demonstration with a cathode ray tube -- moving a magnet bent the beam of light that showed the path of the electrons. I had never seen electrons before and it occurred to me that I had never really believed in the equations in my physics book; I knew they were the right answers to give on tests, but I wouldn't have expected to see them work.

Intuitively connecting mathy physics to reality isn't the default; you need to watch demonstrations and conduct thought experiments to make those connections. Your intuition got better that day.

comment by wedrifid · 2010-12-08T04:21:33.678Z · LW(p) · GW(p)

Draw a line on a sheet of paper; put a dot where 75% is. Then check if you got it right.

I tried that one and got it just about spot on. If you had asked me to estimate 67% now that may have been tricky. Estimating half twice in your head is kind of easy.

Replies from: RobinZ
comment by RobinZ · 2010-12-08T22:10:19.114Z · LW(p) · GW(p)

If you had asked me to estimate 67% now that may have been tricky.

Move your estimation point until half the big side is the same as the little side. (Although I've practiced enough to do halves, thirds, and fifths pretty well, so I might just be overgeneralizing my experience.)
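
RobinZ's procedure can be read as a fixed-point iteration (an editorial formalization; RobinZ describes it as an eyeballing heuristic). For 67%, "half the big side equals the little side" means x/2 = 1 - x, and repeatedly resetting the little side to half the current big side converges on the answer:

```python
x = 0.5  # any starting guess for where 67% falls on a unit line
for step in range(8):
    x = 1 - x / 2  # set the little side (1 - x) to half the big side (x)
    print(f"step {step + 1}: x = {x:.4f}")
# converges to 2/3, the unique solution of x = 1 - x/2
```

Each step halves the remaining error, which is why a couple of eyeball adjustments get close enough in practice.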

Replies from: wedrifid
comment by wedrifid · 2010-12-08T22:20:55.421Z · LW(p) · GW(p)

Move your estimation point until half the big side is the same as the little side. (Although I've practiced enough to do halves, thirds, and fifths pretty well, so I might just be overgeneralizing my experience.)

Damn. I chose two random numbers and made a probability out of them. It seems I picked one of the easy ones too! :)

And yes, that algorithm does seem to work well for thirds. I lose a fair bit of accuracy but it isn't down to 'default human estimation mode' level.
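
A quick sketch (my formalisation, not necessarily how RobinZ actually does it) of why the rule pins thirds exactly: repeatedly setting the point to half of the current big side is the map x -> (1 - x) / 2, whose fixed point is 1/3.

```python
# Sketch of "move the point until half the big side equals the little side",
# modelled as repeatedly setting the point x to half of the big side (1 - x).
# The fixed point of x -> (1 - x) / 2 is exactly 1/3, and the iteration
# converges there from any starting guess.
x = 0.5  # any starting guess in (0, 1)
for _ in range(20):
    x = (1 - x) / 2

print(x)  # ~0.333333: one third, i.e. the 67% point measured from the far end
```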

comment by jimrandomh · 2010-12-08T04:53:12.205Z · LW(p) · GW(p)

Should I break down that barrier? I'm not sure. I'd do it if it would allow me to make money, I think. But not if it came at the cost of some kind of screaming Cthulhu horror.

You know what I really wish I had? Team spirit. Absolute group loyalty. Faith. Patriotism. The sense of being in the right. In Hoc Signo Vinces. I have fleeting glimpses of it, but they don't last. I want it enough that I keep fantasizing about joining the Army because it might work. I always wanted to be a fanatic, and my brain would never do it. But I'm starting to wonder if that's hackable; I'm sure enough sleep deprivation and ritual would do it.

Absolute group loyalty is much more likely to lead you to a screaming Cthulhu horror than the pursuit of truth is. Especially if it comes from a combination of ritual and sleep deprivation.

Replies from: None
comment by [deleted] · 2010-12-08T05:21:16.509Z · LW(p) · GW(p)

Ok, worth thinking about.

I still want it. At times I really want victory, not just a normal life. Even though "normal" is all a person should really expect.

comment by shokwave · 2010-12-08T05:53:42.635Z · LW(p) · GW(p)

Should I break down that barrier? I'm not sure. I'd do it if it would allow me to make money, I think. But not if it came at the cost of some kind of screaming Cthulhu horror.

Not to other-optimise, but yes.

As far as I can tell, the chances of encountering a true idea that is also a Lovecraftian cosmic horror are below the vanishing point for human brains. (There aren't neurons small enough to accurately reflect such tiny chances, etc.)

It will also help you make money. Example: I received a promotion for demonstrating my ability to make more efficient rosters. This ability came from googling "scheduling problem" and looking at some common solutions, recognising that GRASP-type (page 7) solutions were effective and probably human-brain-computable - and then when I tried rostering, I intuitively implemented a pseudo-GRASP method.
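
For the curious, a minimal sketch of the GRASP shape I mean - greedy randomised construction followed by local search - with `candidates`, `cost`, and `local_search` left as placeholders, not any real rostering API:

```python
import random

# A minimal GRASP-shaped sketch: build a solution greedily but with some
# randomness, improve it with local search, repeat, and keep the best.
# `candidates`, `cost`, and `local_search` are placeholders, not a real
# rostering API.

def grasp(candidates, cost, local_search, iterations=100, rcl_size=3):
    best = None
    for _ in range(iterations):
        solution, remaining = [], list(candidates)
        while remaining:
            # Restricted candidate list: the rcl_size cheapest next choices.
            remaining.sort(key=lambda c: cost(solution + [c]))
            pick = random.choice(remaining[:rcl_size])
            solution.append(pick)
            remaining.remove(pick)
        solution = local_search(solution)  # try to improve the construction
        if best is None or cost(solution) < cost(best):
            best = solution
    return best
```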

That "intuitively implemented" bit is really important. You might not realise how much you rely on your intuition to decide for you, but it's a lot. It sounds like taking a lot of theory and jamming it into your intuition is the hard part for you.

Tangentially, how do you feel about the wisdom of age and the value of experience in making decisions?

Replies from: None
comment by [deleted] · 2010-12-08T12:58:48.756Z · LW(p) · GW(p)

I think wisdom and experience are pretty good things -- not sure how that relates though.

And "screaming Cthulhu horror" was just a cute phrase -- I don't literally believe in Lovecraft. I just mean "if rationality results in extreme misery, I'll take a pass."

Replies from: shokwave
comment by shokwave · 2010-12-08T15:10:45.370Z · LW(p) · GW(p)

I think wisdom and experience are pretty good things -- not sure how that relates though.

Some people I have encountered struggle with my rationality because I often privilege general laws derived from decision theory and statistics over my own personal experience - like playing tit-for-tat when my gut is screaming defection rock, or participating in mutual fantasising about lottery wins but refusing to buy 'even one' lottery ticket. I have found that certain attitudes towards experience and age-wisdom can limit a person's ability to tag ideas with 'true in the real world' - so that, for them, reason and logic can only achieve 'true but not actually applicable in the real world'. It was a possibility I thought I should check.

And "screaming Cthulhu horror" was just a cute phrase -- I don't literally believe in Lovecraft.

I assumed it was a reference to concepts like Roko's idea. As for regular extreme misery, yes, there is a case for rationality being negative. You would probably need some irrational beliefs (that you refuse to rationally examine) that prevent you from taking paths where rationality produces misery. You could probably get a half-decent picture of what paths these might be from questioning LessWrong about it, but that only reduces the chance - still a consideration.

comment by TheOtherDave · 2010-12-08T14:31:49.906Z · LW(p) · GW(p)

You talk about belief the way popular culture talks about love: as some kind of external influence that overcomes your resistance.

And belief can be like that, sure. But belief can also be the result of doing the necessary work.

I realize that's an uncomfortable idea. But it's also an important one.

Relatedly, my own thoughts on the value of truth: when the environment is very forgiving and even suboptimal choices mostly work out to my benefit, the cost of being incorrect a lot is mostly opportunity cost. That is, things go OK, and even get better sometimes. (Not as much better as they would have gotten had I optimized more, but still: better.)

I've spent most of my life in a forgiving environment, which makes it very easy to adopt the attitude that having accurate beliefs isn't particularly important. I can go through life giving up lots of opportunities, and if I just don't think too much about the improvements I'm giving up I'll still be relatively content. It's emotionally easy to discount possible future benefits.

Even if I do have transient moments of awareness of how much better it can be, I can suppress them by thinking about all the ways it can be worse and how much safer I am right where I am, as though refusing to climb somehow protected me from falling.

The thing is: when the environment is risky and most things cost me, the cost of being incorrect is loss. That is, things don't go OK, and they get worse. And I can't control the environment.

It's emotionally harder to discount possible future losses.

Replies from: None
comment by [deleted] · 2010-12-08T15:03:49.742Z · LW(p) · GW(p)

I was always under the impression that a sort of "work" can lead you to emotionally believe things that you already know to be true in principle. I suspect that a lot of practice in actually believing what you know will eventually cause the gap between knowing and believing to disappear. (Sort of the way that practice in reading eventually produces a person who can't look at a sentence without reading it.)

For example, I imagine that if you played some kind of betting game every day and made an effort to be realistic, you would stop expecting that wishing really hard for low-probability events could help you win. Your intuition/subconscious would eventually sync up with what you know to be true.
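
Concretely, something like this toy daily game (entirely made up, just to illustrate): a 1-in-100 event, tracked over time. However hard you wish, the observed frequency converges on the real rate:

```python
import random

# Toy daily betting game: a 1-in-100 win, played for ten thousand "days".
# The observed win frequency converges to 0.01 regardless of hope.
random.seed(0)
p_win, days = 0.01, 10_000
wins = sum(random.random() < p_win for _ in range(days))

print(wins / days)  # ~0.01: something concrete for intuition to sync up with
```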

Replies from: TheOtherDave, shokwave
comment by TheOtherDave · 2010-12-08T15:16:21.218Z · LW(p) · GW(p)

(nods) That's been my experience.

Similarly: acting on the basis of what I believe, even if my emotions aren't fully aligned with those beliefs (for example, doing things I believe are valuable even if they scare me, or avoiding things I believe are risky even if they feel really enticing), can often cause my emotions to change over time.

But even if my emotions don't change, my beliefs and my behavior still do, and that has effects.

This is particularly relevant for beliefs that are strongly associated with things like group memberships, such as in the atheism example you mention.

comment by shokwave · 2010-12-08T15:30:11.688Z · LW(p) · GW(p)

I was always under the impression that a sort of "work" can lead you to emotionally believe things that you already know to be true in principle.

I strongly associate this with Eliezer's description of the brain as a cognitive engine that needs to do a certain amount of thermodynamic work to arrive at a certainty level - and that reasoned and logical conclusions that you 'know' fail to produce belief (enough certainty to act on knowledge) because they don't make your brain do enough work.

I imagine that forcing someone to deduce bits of probability math from earlier principles and observations, then having them use it to analyze betting games until they can generalise to concepts like expected value, would be enough work to make them believe probability theory.
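
For instance (a hypothetical game of my own devising, not anything from the post): have them derive the expected value of a simple bet, then let simulated play confirm the derivation:

```python
import random

# Toy bet: pay 1 to roll a die, win 4 on a six.
# Derived expected value per play: (1/6) * 4 - 1 = -1/3.
ev = (1 / 6) * 4 - 1

random.seed(0)
plays = 100_000
net = sum((4 if random.randint(1, 6) == 6 else 0) - 1 for _ in range(plays))

print(ev)           # ~ -0.333
print(net / plays)  # ~ -0.33: play agrees with the derivation
```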

comment by David_Gerard · 2011-01-19T09:11:51.116Z · LW(p) · GW(p)

Should I break down that barrier? I'm not sure. I'd do it if it would allow me to make money, I think. But not if it came at the cost of some kind of screaming Cthulhu horror.

This sounds like worrying about tripping over a conceptual basilisk. They really are remarkably rare unless your brain is actually dysfunctional or you've induced a susceptibility in yourself. Despite the popularity of the motif of harmful sensation in fiction, I know of pretty much no examples.

comment by MoreOn · 2010-12-12T20:01:04.680Z · LW(p) · GW(p)

Overestimating my driving skills is obviously bad. But how about this scenario of the possibility of happiness destroyed by the truth?

Suppose, on the final day of exams, on the last exam, you think you've done poorly. In fact, you only got 1 in 10 questions completely right. On the other 9, you hope you'll get at least a bit of partial credit. Meanwhile, all 4 of your friends (in a class of 50) think they've done poorly too. Maybe there will be a curve? In fact, if the curve is good enough, you might even get an A for the course.

The grade goes online at 6 PM. It’s already there, and it won’t change.

So what do you do? This is the last grade of the semester, and no more exams to study for. A bad grade will make you unhappy for the rest of the evening (you wanted to go to that party, right? You won’t have much fun thinking about that grade). A good grade will make you happy, but so what? Happiness comes with diminishing marginal returns (and for me it’s more like a binary value, happy or not). You have a higher expected utility for tonight if you don’t check your grade. And you’re not any worse off checking the grade tomorrow.

Should you destroy all that expected utility by the truth? (For reference, the truth is that you got a C-, which is BAD.)


My “solution” to this problem (probably irrational?) is in the spirit of “The other way is closed.” I look.

To maximize utility, I shouldn't look at the grade until tomorrow morning. Some people don't. Once, I didn't, and it didn't bother me too much. And after bad grades, the outcome was usually pretty much as expected. So I know my utility function. That's not the reason.

This is like the two-box decision of Newcomb's problem. Rationally (according to Eliezer) you would pick one box. I'm not rational. I pick two. What's there is already there.

I. JUST. CAN’T. NOT. LOOK.

Replies from: Jordan, Desrtopa, wnoise
comment by Jordan · 2010-12-12T20:14:34.352Z · LW(p) · GW(p)

Sometimes I come up with an awesome idea for my research, something that seems like it will totally blow open the problem I've been working on for weeks/months/years. After having such amazing moments of insight I usually take a couple of days off because the potential that the idea is right just feels so good, and because, well, in research it usually turns out that most amazing insights don't solve that problem you've been working on for years.

Replies from: MoreOn
comment by MoreOn · 2010-12-12T21:41:35.688Z · LW(p) · GW(p)

I know what you mean. I get that all the time, with all of the unsolved math problems I occasionally look at. And since my name isn't on wikipedia yet, I haven't solved any of them.

Although, in this case I would argue that we're better off knowing we're wrong than being happy for the wrong reasons. The happiness at an end-of-semester party comes from different sources (socializing, having fun, etc.), which are, dare I say, the "right" reasons. Destroying this happiness by the truth will not lead to the discovery of more truth, as it were (the grade is already there). Destroying the happiness over a mistake at least lets you find truth in acknowledging that mistake.

But then again, if I have a "brilliant" idea, I start working on it immediately, without giving myself much of a chance to bask in its brilliance.

comment by Desrtopa · 2010-12-12T23:07:34.724Z · LW(p) · GW(p)

So what do you do? This is the last grade of the semester, and no more exams to study for. A bad grade will make you unhappy for the rest of the evening (you wanted to go to that party, right? You won’t have much fun thinking about that grade). A good grade will make you happy, but so what? Happiness comes with diminishing marginal returns (and for me it’s more like a binary value, happy or not). You have a higher expected utility for tonight if you don’t check your grade. And you’re not any worse off checking the grade tomorrow.

Should you destroy all that expected utility by the truth? (For reference, the truth is that you got a C-, which is BAD.)

I would think that an ideal rationalist's mental state would be dependent on their prior determination of their most likely grade, and on average actually looking at it should not tend to revise that assessment upwards or downwards.

In practice, I think that all but the most optimistic humans would tend to imagine a grade worse than they probably received until shown otherwise, so looking at the grade would tend to revise your happiness state upwards.

Replies from: JoshuaZ, MoreOn
comment by JoshuaZ · 2010-12-12T23:18:33.946Z · LW(p) · GW(p)

In practice, I think that all but the most optimistic humans would tend to imagine a grade worse than they probably received until shown otherwise, so looking at the grade would tend to revise your happiness state upwards.

The Dunning-Kruger effect suggests that people on average will be too optimistic about grades.

Replies from: Desrtopa
comment by Desrtopa · 2010-12-12T23:29:13.873Z · LW(p) · GW(p)

Depending on their degree of competence. People who are actually competent tend to underestimate themselves. Perhaps I've simply developed an unrepresentative impression by associating more with people who are generally competent.

comment by MoreOn · 2010-12-13T00:37:32.749Z · LW(p) · GW(p)

I would think that an ideal rationalist's mental state would be dependent on their prior determination of their most likely grade, and on average actually looking at it should not tend to revise that assessment upwards or downwards.

Suppose I estimate the probability of a good curve at roughly p = 5/50 = 10%. If there's a curve, I'll get an A (utility 4); if not, a C- (utility 1.7). Suppose I then need a minimum utility of 2 to enjoy the party, which is itself worth 0.2.

My expected utility from not checking the grade is 0.1 x 4 + 0.9 x 1.7 + 0.2 = 2.13. My actual utility once I'd checked the grade is 1.7 + 0.2 = 1.9.

If this expected utility estimate is good, then I should be happy in proportion to it (although I might as well acknowledge now that I failed to account for the difference between expected utility and the utility of the expected outcome, thus assuming that I'm risk-neutral).
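
Spelled out as code (following my numbers above, including applying the 0.2 party term in both cases, as the arithmetic does):

```python
# The model above, with its own numbers. Note the 0.2 party term is applied
# unconditionally here, mirroring the arithmetic in the comment; the stated
# threshold of 2 is not separately enforced.
p_curve = 0.1               # probability of a good curve
u_a, u_c_minus = 4.0, 1.7   # utilities of an A and of a C-
party = 0.2                 # utility of enjoying the party

eu_not_checking = p_curve * u_a + (1 - p_curve) * u_c_minus + party
u_checked = u_c_minus + party  # the grade turned out to be a C-

print(eu_not_checking)  # ~2.13
print(u_checked)        # ~1.9
```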

Replies from: Desrtopa
comment by Desrtopa · 2010-12-13T01:07:55.364Z · LW(p) · GW(p)

Rather than there being a discrete point above which you will be able to enjoy the party and below which you will not, I would expect the amount you enjoy the party to vary with the grade you got, unless the cutoff point is due to some additional consequence of scoring below that grade, one that carries an additional utility hit. Your prior expected utility would incorporate that additional hit, weighted by the likelihood of it occurring.

Anyway, in any specific case, your utility may go up or down by checking your grade, but if you have a perfectly accurate assessment of the probability distribution for your grade, then on average your expected utility should be the same whether you check or not.

In this case, the fact that we know the actual grade stands to be misleading, since it's liable to make any probability distribution that doesn't provide an average expected grade of 1.7 look wrong, even though that might not be the average predicted by the available data.

Replies from: MoreOn
comment by MoreOn · 2010-12-14T05:14:06.343Z · LW(p) · GW(p)

I considered your point at length. To address your comment, I could use the ignorance hypothesis on my old model, assigning equal probability values to everything between 1.7 and 4.0. Discrete if need be. I could use a binary output value for "enjoying the party," 1 or 0. I could do lots of other tweaks.

But the problem here is, everything comes down to whether this model (or any other 5-minute model) is good enough to explain my non-rationalist gut feeling, especially without an experiment. And, you know, I'm not about to fail an easy exam in a couple of days just to see what my utility function would do.

Replies from: Desrtopa
comment by Desrtopa · 2010-12-14T06:58:17.118Z · LW(p) · GW(p)

Conservation of expected evidence means that ideally, you can't expect the introduction of new evidence to affect your expected utility. In practice, that's probably not the case, but humans aren't even rough approximations of ideal rationalists.
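
A toy simulation of that point (my own illustration, reusing the grade numbers from upthread): if the 10% curve probability is calibrated, the average utility across many revealed grades equals the prior expected utility, so checking cannot shift it in expectation:

```python
import random

# Conservation of expected evidence, numerically: with a calibrated 10%
# chance of an A (utility 4.0) and 90% chance of a C- (utility 1.7), the
# average utility over many revealed grades equals the prior expectation.
p_curve, u_a, u_c_minus = 0.1, 4.0, 1.7
prior_eu = p_curve * u_a + (1 - p_curve) * u_c_minus

random.seed(0)
trials = 100_000
total = sum(u_a if random.random() < p_curve else u_c_minus
            for _ in range(trials))

print(prior_eu)        # ~1.93
print(total / trials)  # ~1.93: revealing the grade doesn't move the average
```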

comment by wnoise · 2010-12-13T06:46:04.894Z · LW(p) · GW(p)

I would be happier knowing the grade is bad, rather than not knowing at all. Knowing leaves me free to enjoy the party, rather than worry about it and be distracted at the party.

comment by buybuydandavis · 2011-10-29T08:05:24.338Z · LW(p) · GW(p)

This is the peculiar blindness of rationalists. Everywhere you look, you can see people denying reality, and yet rationalists talk like it can't be done.

Winston, after being tortured, eventually could see 5 fingers where there were only 4. Most people are much more malleable than that; they already have a preference for believing what they're told to believe. You can see it everywhere you look.

Even if you ignore the daily evidence of your senses, just as a matter of the evolutionary pressure of centuries of ideological terror and executions, shouldn't we expect independent minds to have gone the way of the dodo?

Is it impossible for us to change our spots? I don't know. Maybe. Maybe not. Maybe we just aren't rational enough yet, still fetishistically clinging to our mania for epistemic rationality, ignoring the tradeoffs we make against instrumental rationality, which is where the rubber meets the road.

Second order rationality implies...

No it doesn't. As long as one includes instrumental rationality in the mix, it implies nothing of the sort. Instrumental rationality is what achieves your values. If you're really committed to winning, not just toeing the line on epistemic rationality, you can and will probably change. Your mind can calculate a lot more than what you can bat around in your head self consciously.

Winston's heart sank. That was doublethink. He had a feeling of deadly helplessness. If he could have been certain that O'Brien was lying, it would not have seemed to matter. But it was perfectly possible that O'Brien had really forgotten the photograph. And if so, then already he would have forgotten his denial of remembering it, and forgotten the act of forgetting. How could one be sure that it was simple trickery? Perhaps that lunatic dislocation in the mind could really happen: that was the thought that defeated him.

That isn't the thought that defeats me, nor is it the thought that defeats Orwell. The horror is that doublethink may win, and may already be winning.

I find it grotesque, but the universe seems blithely unconcerned about my preferences.

comment by Acidmind · 2012-08-20T10:38:24.798Z · LW(p) · GW(p)

Quite the contrary: Alcohol.

Replies from: ancientcampus
comment by ancientcampus · 2012-08-21T23:08:34.614Z · LW(p) · GW(p)

I want to upvote this thing so hard.

comment by chaosmosis · 2012-08-24T22:51:10.670Z · LW(p) · GW(p)

Since this is today's featured article thingy, I'm commenting; maybe someone will see this and want to engage with this argument, or agree and make sure that the smart guys in lab coats see it.

By the time you realize you have a choice, there is no choice. You cannot unsee what you see. The other way is closed.

For now. But once we control neuroscience really well, this entire can of worms gets opened up again. Perhaps Brave New World would be a more appropriate dystopia to reference than 1984, because in that world they actually DO believe what the government wants them to, because they're so well controlled by the sleep hypnosis and conditioning they receive.

So we'll need a different and better solution eventually. And this solution will need to deal with the infinite regress, too.

comment by [deleted] · 2012-12-18T20:29:36.201Z · LW(p) · GW(p)

"You might even believe you were happy and self-deceived; but you would not in fact be happy and self-deceived."

As far as I've got this happiness thing figured out, you're happy when you believe you are, and you're not when you believe you're not. There is, in fact, not necessarily a correlation between how happy a person should be and how happy they feel. Feelings don't have to correspond to reality. One can consciously choose to no longer be bothered by something and just be happy instead. For me at least, with a little effort, it works. And it can be validated just as easily, just by stating that my sole utility function is to be happy. The human brain is a lotus-eater machine.

comment by James_Miller · 2012-12-20T18:38:31.673Z · LW(p) · GW(p)

The 30-year-old me who was terrified of death would have given belief in an afterlife as an exception to this. The 45-year-old me is a member of cryonics provider Alcor.

comment by Ritalin · 2013-05-31T01:00:20.287Z · LW(p) · GW(p)

Well, when you have "Homosexuals in the Basement" and the Nazi officer rings at your door, you had better make yourself believe you don't have them. What is, precisely, the difference between this deep-immersion roleplay and genuine self-delusion, for all practical purposes? This is not a rhetorical question.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-09-02T14:31:00.002Z · LW(p) · GW(p)

Make yourself believe strongly enough that you will invite the officer to check your basement?

Replies from: Ritalin
comment by Ritalin · 2013-09-06T07:24:13.745Z · LW(p) · GW(p)

Just to be practical, it is better to make yourself believe that you have something embarrassing in your basement, such as, say, a (straight) porn stash or undeclared valuables or any other embarrassing property that would make the officer sigh and roll his eyes on account of having bigger fish to fry.

comment by Kawoomba · 2013-08-06T18:37:40.127Z · LW(p) · GW(p)

Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen. Second-order rationality implies that at some point, you will think to yourself, "And now, I will irrationally believe that I will win the lottery, in order to make myself happy." But we do not have such direct control over our beliefs.

We routinely generate a swath of irrational beliefs, spawned e.g. by deep-seated biological biases such as "That girl I just met, she is so special, I will love and cherish her forever and ever." You notice a belief like that makes you happy, and then you only do a cursory examination of some worst-case boundaries. If you then judge the belief to be mostly harmless, you just do not look at it any closer.

Changing a belief consciously means reflecting on it. Stop the reflecting, and you stop the updating (and keep the happiness).

comment by SeanMCoincon · 2014-07-31T19:11:49.440Z · LW(p) · GW(p)

This immediately brings to mind the old adage about it being better to be Socrates dissatisfied than a pig satisfied. I'd imagine, from the pig's point of view, that the loftiest height of piggy happiness was not terribly dissimilar from the baseline level of piggy contentment, so equating "happiness" to "contentment" would not be an inexcusable breach of piggy logic. Indeed, we humans pretty much have to infer this state of affairs when considering animal wellbeing ("appearance of sociobiological contentment approximates happiness"), as we don't yet possess any means of engaging animals in philosophical conversation on the subject.

Yet it seems that those who would have us believe "blissful ignorance" is an absolute good are confusing contentment with happiness unnecessarily. Happiness registers more as a positive, aspirational value within the context of the human experience range; contentment seems more a negative, absence-of-dissatisfaction value that indicates only that things aren't going poorly. Doublethink and willful ignorance do not seem able to positively provide qualia that contribute to happiness; they can only obscure knowledge of things that are actually going poorly, thus creating a false sense of contentment.

That's my general counterpoint whenever people speak positively of the "happiness" created by things like religion and opiates. Nothing is being added; your knowledge of reality is being obscured. It's difficult to see how that approach could be considered a mature option.

comment by Jackercrack · 2014-11-17T13:57:27.230Z · LW(p) · GW(p)

I had the happiness of stupidity once. While younger I edged into the valley and recoiled. I believe I even made a conscious choice and enforced it through various means. It was a good time over about two years, and it was unsustainable. I made the mistake of continuing to gain knowledge about human nature, I kept my curiosity and my fascination with how things worked and thus was my ignorance doomed. I dipped deep into the valley and eventually found this place, where I (hopefully) hit critical mass of bootstrap.

If I had stayed in that bubble of wilful ignorance I would probably be happier, but long term I think my current path will overtake it. It's better this way, I couldn't have stayed in that state without maiming my own curiosity and growth.

comment by Kimber1234 · 2015-04-18T20:01:27.744Z · LW(p) · GW(p)

Isn't love a "happiness of stupidity" to some degree? It defies rationality, and the odds of it lasting aren't good statistically. Should you believe in it, then?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2015-04-18T23:04:39.308Z · LW(p) · GW(p)

Isn't love a "happiness of stupidity" to some degree? It defies rationality, and the odds of it lasting aren't good statistically. Should you believe in it, then?

Love is a feeling, an experience. It is not an experience of something; it is simply an experience, like pain is, or the perception of colours. There is no propositional content to it. It is not something you can believe or disbelieve; it is not that sort of thing. There is nothing to believe in.

As with pain, there tend to be propositions associated with it, but not implied by it. Perhaps this toothache is a sign of tooth decay reaching a nerve, for which the remedy is to have a dentist repair the tooth. Or perhaps it is an inflammation that a course of antibiotics will quell. There's no point in believing one or the other the moment you notice a toothache. Instead, you need to find out what has caused it, and then you can decide on the appropriate action.

"Love is not love / Which alters when it alteration finds", writes the poet. This is an empirical test to assess one's love, not an injunction to cling to a fantasy about the beloved and shun disenchantment. True love is something to be discovered, not assumed. It is true if you observe that it continues while you get to know the beloved. It is a matter of noticing the feelings as they evolve, not of blinding oneself to the possibility that they will change.

"If this be error and upon me proved,
I never writ, nor no man ever loved."

comment by contravariant · 2015-05-29T03:46:39.261Z · LW(p) · GW(p)

While you can't fool your logical brain, if you want a false belief to make you happy, you don't need to anyway. The brain is compartmentalized and often doesn't update what you feel intuitively to be true, or what you base your actions on, just because you learned a fact. The sentence "You can't know the consequences of being biased, until you have already debiased yourself" strikes me as the hardest to believe. Reading about a bias and considering its consequences, especially in an academic mindframe, does NOT debias you. That requires applying it to your life and reasoning, recognizing when you are biased, sometimes even training and conditioning to change how you think. If, after learning about a bias, I rationally decided that I wanted to keep it, I would just shelve it in my memory as academic trivia irrelevant to daily life, and I would stay just as biased as before in what I do and how I feel.

comment by PeterCoin · 2015-08-16T23:55:21.372Z · LW(p) · GW(p)

The happiness of stupidity is not closed to me. By the time I've made 1 rational decision (by whatever metric one wants to use) I'll have made 100 irrational ones. Stupidity and irrationality are built into the very way I operate.

I am primarily composed of stupid and irrational beliefs, and I am continually creating more.

You don't choose to be irrational; that's the default position.

Rationality is a limited, precious resource that you use to diagnose and fix problems within the irrational milieu of systems and subsystems that make up your mind.

Second-order rationality would then seem to be more about avoiding wasting that precious resource on things from which I will receive no gain, and instead focusing it on things that actually need fixing. If I spend 1 hour of rational thinking doing philosophy, I'll feel a lot better than if I spend 1 hour questioning the intentions of my SO.

Replies from: CrimeThinker
comment by CrimeThinker · 2017-04-25T05:46:50.863Z · LW(p) · GW(p)

Generally agreed: doublethink is very easy and natural, and probably the default state for human beings. In my experience, doublethink isn't believing in two sides of one scale, as others seem to think, but rather understanding a sort of multidimensional scale. Just as the teacher in Donnie Darko was made fun of for the idea of everything falling onto a love/fear scale, there's a good chance that scale is actually right, not wrong at all, but is only one axis of a plethora of emotional dimensions. This is part of why it is better to be MoreRight than LessWrong, though both are pretty neat. :)