Accuracy Versus Winning

post by John_Maxwell (John_Maxwell_IV) · 2009-04-02T04:47:37.156Z · LW · GW · Legacy · 77 comments

Consider the problem of an agent who is offered a chance to improve their epistemic rationality for a price.  What is such an agent's optimal strategy?

A complete answer to this problem would involve a mathematical model to estimate the expected increase in utility associated with having more correct beliefs.  I don't have a complete answer, but I'm pretty sure about one thing: From an instrumental rationalist's point of view, to always accept or always refuse such offers is downright irrational.

And now for the kicker: You might be such an agent.

One technique that humans can use to work towards epistemic rationality is to doubt themselves, since most people think they are above average in a wide variety of areas (and it's reasonable to assume that merit in at least some of these areas is normally distributed).  But having a negative explanatory style, which is one way to doubt yourself, has been linked with sickness and depression.

And the reverse also seems to hold.  Humans seem to be rewarded for a certain set of beliefs: those that help them maintain a somewhat positive assessment of themselves.  Having an optimistic explanatory style (in a nutshell, explaining good events in a way that makes you feel good, and explaining bad events in a way that doesn't make you feel bad) has been linked with success in sports, sales and school.

If you're unswayed by my empirical arguments, here's a theoretical one.  If you're a human and you want to have correct beliefs, you must make a special effort to seek evidence that your beliefs are wrong.  One of our known defects is our tendency to stick with our beliefs for too long.  But if you do this successfully, you will become less certain and therefore less determined.

In some circumstances, it's good to be less determined.  But in others, it's not.  And to say that one should always look for disconfirming evidence, or that one should always avoid looking for disconfirming evidence, is ideological according to the instrumental rationalist.

Who do you think is going to be more motivated to think about math: someone who feels it is their duty to become smarter, or a naive student who believes he or she has the answer to some mathematical problem and is only lacking a proof?

You rarely see a self-help book, entrepreneurship guide, or personal development blog telling people how to be less confident.  But that's what an advocate of rationalism does.  The question is, do the benefits outweigh the costs?

77 comments

Comments sorted by top scores.

comment by Nick_Tarleton · 2009-04-03T02:19:17.992Z · LW(p) · GW(p)

If you're a human and you want to have correct beliefs, you must make a special effort to seek evidence that your beliefs are wrong. One of our known defects is our tendency to stick with our beliefs for too long. But if you do this successfully, you will become less certain and therefore less determined.

Normatively, seeking disconfirmation and not finding it should make you more certain. And if you do become less certain, I'm not convinced this necessarily makes you less determined – why couldn't it heighten your curiosity, or (especially if you have something to protect) make you more determined to try harder and return, with justification, to the same certainty?

Who do you think is going to be more motivated to think about math: someone who feels it is their duty to become smarter, or a naive student who believes he or she has the answer to some mathematical problem and is only lacking a proof?

Or, how about the student who believes they may have the answer, and has a burning itch to know whether this is the case? Or the one with something to protect?

You rarely see a self-help book, entrepreneurship guide, or personal development blog telling people how to be less confident.

While I'm not very familiar with these literatures, I suspect encouraged overconfidence is often just a motivational hack, in which case you should again look for a Third Alternative: can you find the willpower to do this, while having properly calibrated belief in its success? Alternately, it might hack around people not realizing (even deliberatively) the potential payoff of success and/or the upside of failure; but then you should determine and appreciate these things.

Also, y'know, there could be some issues on which many people really are underconfident.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2009-04-03T05:12:59.805Z · LW(p) · GW(p)

Normatively, seeking disconfirmation and not finding it should make you more certain. And if you do become less certain, I'm not convinced this necessarily makes you less determined – why couldn't it heighten your curiosity, or (especially if you have something to protect) make you more determined to try harder and return, with justification, to the same certainty?

I'm beginning to suspect that providing theoretical justifications for inherently irrational humans is a waste of time. All I can say is that my empirical evidence still holds, and I observe this in myself. Believing that I'm going to fail demoralizes me. It doesn't energize me. I'd love to have my emotions wired the way you describe.

Or, how about the student who believes they may have the answer, and has a burning itch to know whether this is the case? Or the one with something to protect?

The point is that the student benefits from believing something regardless of its truth value. The greater the extent to which they believe their idea is valid, the more thinking about math they'll do.

While I'm not very familiar with these literatures, I suspect encouraged overconfidence is often just a motivational hack, in which case you should again look for a Third Alternative: can you find the willpower to do this, while having properly calibrated belief in its success? Alternately, it might hack around people not realizing (even deliberatively) the potential payoff of success and/or the upside of failure; but then you should determine and appreciate these things.

I'd love to hear your Third Alternative.

Also, y'know, there could be some issues on which many people really are underconfident.

It seems to happen occasionally.

comment by Richard_Kennaway · 2009-04-02T11:16:18.459Z · LW(p) · GW(p)

Where do you find in that link the suggestion that rationalists should be less confident?

One who sees that people generally overestimate themselves, and responds by downgrading their own self-confidence, imitates the outward form of the art without the substance.

One who seeks only to destroy their beliefs practices only half the art.

The rationalist is precisely as confident as the evidence warrants. But if he has too little evidence to vanquish his priors, he does not sit content with precisely calibrated ignorance. If the issue matters to him, he must seek more evidence and arrive at beliefs as accurate and as well-founded as he requires.

"Doubt is the beginning, not the end, of wisdom."

Who do you think is going to be more motivated to think about math: someone who feels it is their duty to become smarter, or a naive student who believes he or she has the answer to some mathematical problem and is only lacking a proof?

Someone who "feels" it is their "duty" to do something is someone who already does not want to do it, so by definition the second has motivation and the first does not. But these are imaginary people and this is fictional evidence. A real answer to the rhetorical question might be found by surveying mathematics students, comparing those who stay the course and those who drop out. I do not know what such a survey would find.

BTW, Googling "doubt quotes" turns up a lot of stuff encouraging people to doubt (as well as the quote above). "It ain't what you don't know that kills you. It's what you know that ain't so." “By doubting we come at truth.” “Doubt is not a pleasant condition, but certainty is absurd.” And so on.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2009-04-02T15:38:34.866Z · LW(p) · GW(p)

Where do you find in that link the suggestion that rationalists should be less confident?

"Beware lest you become attached to beliefs you may not want."

"Surrender to the truth as quickly as you can."

One who sees that people generally overestimate themselves, and responds by downgrading their own self-confidence, imitates the outward form of the art without the substance.

Not necessarily. If there is no known way to correct for a bias, it makes sense to do the sort of gross correction I described. For example, if I know that I and my coworker underestimate how long my projects take, but I'm not aware of any technique I can use to improve my estimates, I could start by asking my coworker to do all the estimates and then multiplying each estimate by two when telling my boss.

Is there a known way of correcting for human overconfidence? If not, I think the sort of gross correction I describe makes sense from an epistemically rational point of view.

Someone who "feels" it is their "duty" to do something is someone who already does not want to do it, so by definition the second has motivation and the first does not. But these are imaginary people and this is fictional evidence. A real answer to the rhetorical question might be found by surveying mathematics students, comparing those who stay the course and those who drop out. I do not know what such a survey would find.

Do you deny that believing you had the answer to a mathematical problem and only lacked a proof would be a powerful motivator to think about mathematics? I was once in this situation, and it certainly motivated me.

Someone who "feels" it is their "duty" to do something is someone who already does not want to do it, so by definition the second has motivation and the first does not.

The way I used "duty" has nothing to do with disliking a thing. To me, "duty" describes something that you feel you ought to do. It's just an inconvenient fact about human psychology that telling yourself that something is best makes it harder to do. Being epistemically rational (figuring out what the best thing to do is, then compelling yourself to do it) often seems not to work for humans.

Replies from: Richard_Kennaway, Nick_Tarleton
comment by Richard_Kennaway · 2009-04-02T21:26:01.059Z · LW(p) · GW(p)

If there is no known way to correct for a bias, the thing to do is to find one. Swerving an arbitrary amount in the right direction will not do -- reversed stupidity etc.

I once saw a poster in a chemist's shop bluntly asserting, "We all eat too much salt." What was I supposed to do about that? No matter how little salt I take in, or how far I reduce it, that poster would still be telling me the same thing. No, the thing to do, if I think it worth attending to, would be to find out my actual salt intake and what it should actually be. Then "surrender to the truth" and confidently do what the result of that enquiry tells me.

If someone finds it hard to do what they believe they should and can do, then their belief is mistaken, or at least incomplete. They have other reasons for not doing whatever it is, reasons that they are probably unaware of when they merely fret about what they ought to be doing. Compelling oneself is unnecessary when there is nothing to overcome. The root of indecision is conflict, not doubt; irrationality, not rationality.

Here's a quote about rationality in action from a short story recently mentioned on LW, a classic of SF that everyone with an interest in rationality should read. I find that a more convincing picture than one of supine doubt.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2009-04-02T23:07:27.740Z · LW(p) · GW(p)

Swerving an arbitrary amount in the right direction will not do -- reversed stupidity etc.

Reversing stupidity is not the same thing as swerving an arbitrary amount in the right direction. And the amount is not arbitrary: like most of my belief changes, it is based on my intuition. This post by Robin Hanson springs to mind; see the last sentence before the edit.

Anyway, some positive thoughts I have about myself are obviously unwarranted. I'm currently in the habit of immediately doubting spontaneous positive thoughts (because of what I've read about overconfidence), but I'm beginning to suspect that my habit is self-destructive.

If someone finds it hard to do what they believe they should and can do, then their belief is mistaken, or at least incomplete.

Well yes, of course, it's easier to do something if you believe you can. That's what I'm talking about--confidence (i.e. believing you can do something) is valuable. If there's no chance of the thing going wrong, then you're often best off being overconfident to attain this benefit. That's pretty much my point right there.

As for your Heinlein quote, I find it completely unrealistic. Either I am vastly overestimating myself as one of Heinlein's elite, or I am a terrible judge of people because I put so many of them into his elite, or Heinlein is wrong. I find it ironic, however, that someone who read the quote would probably be pushed towards the state of mind I am advocating: I'm pretty sure 95% of those who read it put themselves somewhere in the upper echelons, and once there, they are free to estimate their ability highly and succeed as a result.

Replies from: Nick_Tarleton, Nick_Tarleton
comment by Nick_Tarleton · 2009-04-03T02:25:52.526Z · LW(p) · GW(p)

I'm currently in the habit of immediately doubting spontaneous positive thoughts (because of what I've read about overconfidence), but I'm beginning to suspect that my habit is self-destructive.

Are you in the habit of immediately doubting negative thoughts as well? All emotionally-laden spontaneous cognitive content should be suspect.

Also, when you correct an overly positive self-assessment, do you try to describe it as a growth opportunity? This violates no principles of rationality, and seems like it could mitigate the self-destruction. (See fixed vs. growth theories of intelligence.)

comment by Nick_Tarleton · 2009-04-03T02:03:05.188Z · LW(p) · GW(p)

"Beware lest you become attached to beliefs you may not want."

"Surrender to the truth as quickly as you can."

AFAICT, this means to seek disconfirming evidence, and update if and when you find it. Nothing to do with confidence.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2009-04-03T04:58:41.260Z · LW(p) · GW(p)

Disconfirming evidence makes you less confident that your original beliefs were true.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2009-04-04T07:21:07.829Z · LW(p) · GW(p)

If you find it; though this is nitpicking, as the net effect usually will be as you say. Still, this is completely different from the unconditional injunction to be less confident that the post suggests.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-02T20:26:10.817Z · LW(p) · GW(p)

How many kittens would you eat to gain 1 point of IQ?

Replies from: PhilGoetz, SoullessAutomaton, ialdabaoth, John_Maxwell_IV, Carinthium, thomblake, army1987
comment by PhilGoetz · 2009-04-02T20:32:27.095Z · LW(p) · GW(p)

I should eat them for free, since I already pay money to eat pigs.

comment by SoullessAutomaton · 2009-04-02T21:31:27.678Z · LW(p) · GW(p)

Assuming I don't have to kill and clean them myself, and that I am not emotionally attached to any of the animals in question:

If the value is not cumulative, the answer is likely zero, because of the social penalties of being known to eat animals categorized as "pets", "cute", and "babies". More than that only if I could do so without public knowledge, and depending on age; likely at most 200 or so, which assumes young animals and that I eat only the muscles and little else until finished (id est, until the utility of a varied diet exceeds that of a point of IQ).

If the value is cumulative, with an expected gain of around one point a year: roughly an average of two pounds of food per day, however many individual animals that works out to be; id est, up to the point at which the utility of not gaining excess weight exceeds that of gaining IQ, a value which may vary with time.

I suspect this comment will go a long way toward convincing others of the accuracy of the first word of my user name...

Replies from: dclayh
comment by dclayh · 2009-04-02T21:43:00.945Z · LW(p) · GW(p)

I suspect this comment will go a long way toward convincing others of the accuracy of the first word of my user name...

In this crowd? I don't see why.

Voluptatis avidus, Magis quam salutis; Mortuus in anima, Curam gero cutis. ("Greedy for pleasure more than for salvation; dead in soul, I care for my skin.")

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-04-02T22:39:06.117Z · LW(p) · GW(p)

Voluptatis avidus, Magis quam salutis; Mortuus in anima, Curam gero cutis.

Oh, I do value virtue, to be sure; but I have gradually convinced myself to internalize the value of a moral calculus, and I accept that my judgments may not align with most people's instinctive emotional reactions.

comment by ialdabaoth · 2013-12-12T18:21:13.492Z · LW(p) · GW(p)

Given that 1 point of IQ is 1/15th of a standard deviation, a "point" of IQ isn't necessarily a consistent metric for cognitive function - depending on the shape of the actual population curve used to take the test, the actual performance delta between 125 and 130 may be VASTLY divergent from the performance delta between 145 and 150.
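For concreteness, here's a quick sketch (Python standard library, hypothetical interpretation) of how positional the unit is: the same 5-point gap corresponds to very different changes in population rarity depending on where it falls on the curve. Rarity isn't the same thing as raw performance, but it makes the point that an "IQ point" is defined by position on the bell curve rather than by a fixed quantity of cognitive ability.

```python
# Illustrative sketch: the same 5-point IQ gap implies very different changes
# in population rarity at different places on the curve, because IQ points are
# defined by position on a normal distribution (mean 100, SD 15), not by raw
# performance.
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)

def rarity(score):
    """Roughly '1 in N' people score at or above this level."""
    return 1 / (1 - iq.cdf(score))

for lo, hi in [(125, 130), (145, 150)]:
    print(f"{lo} -> {hi}: 1 in {rarity(lo):.0f} becomes 1 in {rarity(hi):.0f}")
# 125 -> 130: 1 in 21 becomes 1 in 44     (about 2x rarer)
# 145 -> 150: 1 in 741 becomes 1 in 2331  (about 3x rarer)
```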

I think we need a different shorthand word for "quantified boost in cognitive performance" than "points of IQ". Does anyone have any ideas?

Replies from: Lumifer
comment by Lumifer · 2013-12-12T19:18:13.915Z · LW(p) · GW(p)

a "point" of IQ isn't necessarily a consistent metric for cognitive function

This implies you have another metric for cognitive function which an IQ point does not match. What is that other metric?

Replies from: ialdabaoth
comment by ialdabaoth · 2013-12-12T19:22:29.479Z · LW(p) · GW(p)

It implies no such thing; hence my asking for ideas rather than presenting them. The only thing we know for certain is that, due to how IQ tests are measured and calibrated, there is no particular reason why they SHOULD represent an actual, consistent metric - they merely note where on the bell curve of values you are, not what actual value that point on the bell curve represents. (At core, of course, it simply represents "number of questions on a particular IQ test that you got right", and everyone agrees that that metric is measuring SOMETHING about intelligence, but it would be nice to have a more formal metric for "smartness" that actually has real-world consequences.)

ETA: I certainly have an intuitive idea for what "smartness" would mean as an actual quantifiable thing, which seems to have something to do with pattern-recognition / signal-extraction performance across a wide range of noisy media. This makes some sense to me, since IQ tests - especially the ones that attempt to avoid linguistic bias - typically involve pattern-matching and similar signal extraction/prediction tasks. So intuitively, I think intelligence will have units of Entropy per [Kolmogorov complexity x time], and any unit which measures "one average 100 IQ human" worth of Smartness will have some ungodly constant-of-conversion comparable to Avogadro's number.

NOTE 2: Like I said, this is an intuitive sense, which I have not done ANY formal processing on.

Replies from: Lumifer
comment by Lumifer · 2013-12-12T19:39:10.635Z · LW(p) · GW(p)

Well, you need some framework. You said that IQ points are not "necessarily a consistent metric for cognitive function". First, what is "cognitive function" and how do you want to measure it? If you have no alternate metrics then how do you know IQ points are inconsistent and what do you compare them to?

everyone agrees that that metric is measuring SOMETHING about intelligence

The usual answer is that it is measuring the g factor, the unobserved general-intelligence capability. It was originally formulated as the first principal component of the results of a variety of IQ tests. It is quantifiable (by IQ points) and it does have real-world consequences.

units of Entropy per Kolmogorov complexity

I don't understand what that means.

Replies from: Nornagest, ialdabaoth
comment by Nornagest · 2013-12-12T20:06:00.656Z · LW(p) · GW(p)

Saying that IQ measures g is like saying that flow through a mountain creek measures snowmelt. More of one generally means more of the other, but there's a bunch of fiddly little details (maybe someone's airlifting water onto a forest fire upstream, or filling their swimming pool) that add up to a substantial deviation -- and there are still a lot of unanswered questions about the way they relate to each other.

In any case, g is more a statement about the correlations between domain skills than the causes of intelligence or the shape of the ability curve. The existence of a g factor tells you that you can probably teach music more easily to someone who's good at math, but it doesn't tell you what to look for in a CT scan, or whether working memory, say, will scale linearly or geometrically or in some other way with IQ; those are separate questions.

Replies from: Lumifer
comment by Lumifer · 2013-12-12T20:21:05.641Z · LW(p) · GW(p)

In any case, g is more a statement about the correlations between domain skills than the causes of intelligence or the shape of the ability curve.

g is an unobserved value, a scalar. It cannot say anything about "causes of intelligence" or shapes of curves. It doesn't aim to.

Replies from: Nornagest
comment by Nornagest · 2013-12-12T20:35:58.006Z · LW(p) · GW(p)

g was observed as a correlation between test scores. That is by definition a scalar value, but we don't know exactly how the underlying mechanism works or how it can be modeled; we just know that it's not very domain-specific. It's the underlying mechanism, not the correlation value, that I was referring to in the grandparent, and I'm pretty sure it's what ialdabaoth is referring to as well.

Replies from: Lumifer
comment by Lumifer · 2013-12-12T20:51:23.204Z · LW(p) · GW(p)

g was observed as a correlation between test scores.

To be more precise, the existence of g was derived from observing the correlation of test scores.

Moreover, g itself is not the correlation, it is the unobservable underlying factor which we assume to cause the correlation.

It is still a scalar-valued characteristic of a person, not a mechanism.

comment by ialdabaoth · 2013-12-12T19:51:02.804Z · LW(p) · GW(p)

The usual answer is that it is measuring the g factor, the unobserved general-intelligence capability. It was originally formulated as the first principal component of the results of a variety of IQ tests. It is quantifiable (by IQ points) and it does have real-world consequences.

Absolutely, but +n g doesn't necessarily mean +m IQ for all (n,m).

I don't understand what that means.

Here's a place where my intuition's going to struggle to formulate good words for this.

An intelligent system receives information (which has fundamental units of Entropy) and outputs a behavior. A "proper" quantitative measure of intelligence should be a simple function of how much Utility it can expect from its chosen behavior, on average, given an input with n bits of Entropy, and t seconds to crunch on those bits. Whether "Utility" is measured in units similar to Kolmogorov complexity is questionable, but that's what my naive intuition yanked out when grasping for units.

But the point is, whatever we actually choose to measure g in, the term "+1 g" should make sense, and should mean the same thing regardless of what our current g is. IQ, being merely a statistical fit onto a gaussian distribution, does NOT do that.

Replies from: Lumifer
comment by Lumifer · 2013-12-12T20:17:31.078Z · LW(p) · GW(p)

but +n g doesn't necessarily mean +m IQ for all (n,m)

This phrase implies that you have a metric for g (different from IQ points) because without it the expression "+n g" has no meaning.

An intelligent system receives information (which has fundamental units of Entropy) and outputs a behavior.

Okay. To be precise we are talking about Shannon entropy and these units are bits.

A "proper" quantitative measure of intelligence should be a simple function of how much Utility it can expect from its chosen behavior

Hold on. What is this Utility thing? I don't see how it fits in the context in which we are talking. You are now introducing things like goals and values. Kolmogorov complexity is a measure of complexity, what does it have to do with utility?

the term "+1 g" should make sense, and should mean the same thing regardless of what our current g is

I don't see this as obvious. Why?

IQ, being merely a statistical fit onto a gaussian distribution

Not so. IQ is a metric, presumably of g, that is rescaled so that the average IQ is 100. Rescaling isn't a particularly distorting operation to do. It is not fit onto a gaussian distribution.

Replies from: Nornagest
comment by Nornagest · 2013-12-12T20:40:04.841Z · LW(p) · GW(p)

IQ is a metric, presumably of g, that is rescaled so that the average IQ is 100. Rescaling isn't a particularly distorting operation to do. It is not fit onto a gaussian distribution.

I'm afraid you're mistaken here. IQ scores are generally derived from a set of raw test scores by fitting them to a normal distribution with mean 100 and SD of 15 (sometimes 16): IQ 70 is thus defined as a score two standard deviations below the mean. It's not a linear rescaling, unless the question pool just happens to give you a normal distribution of raw scores.
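For concreteness, here's a minimal sketch (invented raw scores, Python standard library) contrasting the two procedures being discussed; it is not how any particular test publisher actually norms scores.

```python
# Minimal sketch with invented raw scores: rank-based normalization (fitting
# to a normal distribution) versus a plain linear rescaling to mean 100, SD 15.
from statistics import NormalDist, mean, stdev

raw = [11, 13, 15, 16, 18, 19, 21, 24, 29, 40]  # hypothetical, skewed raw scores
target = NormalDist(mu=100, sigma=15)
n = len(raw)

# (a) Fit to a normal distribution: map each score's percentile rank onto
#     the target curve.
iq_fitted = {s: target.inv_cdf((i + 0.5) / n) for i, s in enumerate(sorted(raw))}

# (b) Linear rescaling: shift and scale the raw scores to mean 100, SD 15.
m, sd = mean(raw), stdev(raw)
iq_linear = {s: 100 + 15 * (s - m) / sd for s in raw}

for s in sorted(raw):
    print(s, round(iq_fitted[s]), round(iq_linear[s]))
# Unless the raw scores already happen to be normally distributed, (a) and (b)
# disagree, most visibly in the tails: here the top raw score maps to roughly
# 125 under (a) but roughly 134 under (b).
```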

Replies from: Lumifer
comment by Lumifer · 2013-12-12T20:48:31.108Z · LW(p) · GW(p)

Hm. A quick look around finds this which says that raw scores are standardized by forcing them to the mean of 100 and the standard deviation of 15.

This is a linear transformation and it does not fit anything to a gaussian distribution.

Of course this is just stackexchange -- do you happen to have links to how "proper" IQ tests are supposed to convert raw scores into IQ points?

Replies from: hyporational, army1987
comment by hyporational · 2013-12-13T01:23:30.159Z · LW(p) · GW(p)

do you happen to have links to how "proper" IQ tests are supposed to convert raw scores into IQ points?

If the difficulty of the questions can't be properly quantified, what exactly do the raw scores tell you?

comment by A1987dM (army1987) · 2013-12-13T01:27:29.657Z · LW(p) · GW(p)

See the first sentence of the penultimate paragraph of this.

comment by John_Maxwell (John_Maxwell_IV) · 2009-04-02T20:57:37.393Z · LW(p) · GW(p)

Lots. This contradicts my revealed preference though I suppose, because I have a vague idea that fish oil increases intelligence but I haven't made a special effort to eat any.

I'm trying to anticipate how you'll follow up on this in a way that's relevant to my post and coming up blank.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2009-04-02T22:49:22.128Z · LW(p) · GW(p)

The fish oil thing is quackery, I'm afraid.

I find it hard to be properly scope sensitive about the kittens thing.

Replies from: PhilGoetz, magfrump
comment by PhilGoetz · 2009-04-04T05:27:36.931Z · LW(p) · GW(p)

"Scope sensitive"?

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2009-04-04T13:41:36.199Z · LW(p) · GW(p)

Referring to "scope insensitivity".

I care a lot more about the boundary between eating kittens and not eating kittens than the number of kittens I eat, so the gain I'd need to eat two kittens is less than twice the gain I'd need to eat one kitten. Which indicates that I'm more concerned for myself than for the kittens...

comment by magfrump · 2011-11-23T03:06:59.255Z · LW(p) · GW(p)

I've seen other discussions here regarding varieties of Omega-3s which strongly indicated that fatty acids from fish are used to build brain-related cells, and that these acids aren't really available in any other foods. Casual googling fails to turn anything up, but the link you provided seems like the sort of site that might, for instance, dismiss cryonics as quackery, so I would like to see further discussion from someone better at researching than I am.

comment by Carinthium · 2010-11-13T03:22:10.447Z · LW(p) · GW(p)

Assuming I somehow found a way to counteract taste-related problems, more than 10. Why value the life of a kitten?

EDIT: And given my social situation as autistic, I could get around the resulting problems without too much in the way of trouble.

comment by thomblake · 2009-04-03T17:31:31.852Z · LW(p) · GW(p)

How many kittens would you eat to gain 1 point of IQ?

Has this comment really gone entirely without explanation and still been upvoted multiple times? How is this remotely relevant to the post?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-04-03T18:14:19.989Z · LW(p) · GW(p)

The post is about a tradeoff between epistemic rationality and instrumental rationality: you shouldn't invest too much effort in precise knowledge, and in some circumstances, humans may find themselves at a disadvantage because of knowing more. The same clash appears in the metaphor where you trade the achievement of goals (not wanting to eat kittens) for precision of knowledge (gaining IQ points).

Replies from: thomblake
comment by thomblake · 2009-04-03T18:55:34.126Z · LW(p) · GW(p)

Ah... I think I see now. The comment assumed that one would not want to eat kittens, and that IQ is equivalent or isomorphic to epistemic rationality, and then mapped that to giving up instrumental rationality in favor of epistemic rationality. Definitely could've used some explanation.

comment by A1987dM (army1987) · 2013-12-12T18:16:00.318Z · LW(p) · GW(p)

I'd guess 1 point of IQ is somewhere around equivalent to the cognitive boost I'd get by sleeping half an hour longer every day, and it'd take quite a few hours for me to earn enough money to buy a single kitten, so... no more than a couple per month, I'd guess. (And that's not even counting slaughtering and cooking the kittens or paying someone to do that.)

;-)

comment by PhilGoetz · 2009-04-02T17:25:58.249Z · LW(p) · GW(p)

I'm not aware of studies showing that those in the upper 10% overestimate their abilities. Anyone trying to increase their rationality is probably in the upper 10% already.

My recollection is that at least one study showed some regression to the mean in confidence -- highly-skilled people tended to underestimate themselves.

Replies from: SoullessAutomaton, John_Maxwell_IV
comment by SoullessAutomaton · 2009-04-02T20:50:27.393Z · LW(p) · GW(p)

My recollection is that at least one study showed some regression to the mean in confidence -- highly-skilled people tended to underestimate themselves.

To the best of my knowledge, this sort of effect is mostly not an effect of underestimation, but rather of misestimating the skills of others and/or of restricting the group against which one evaluates oneself.

Generally, it seems, if one has no knowledge of others' skills in an area, an estimation of one's own skill level is likely wildly inaccurate; if one's knowledge of others' skills is biased in favor of a non-random group, one's own skill is likely closer to the mean of that group than expected.

That is to say, if you're better at X than your friends are, you're probably not as good as you think; if you're jealous of your friends' skill at X you're probably better than you think. On the other hand, if you measure your success against the best and brightest in a field, you're probably wildly underestimating yourself; but if you're proud of being good at something and compare yourself to random people you meet you're probably substantially overestimating yourself.

People posting on LW are strongly self-selected in favor of rationality, so anyone reading this is probably closer to average within this community than they think they are!

Replies from: ciphergoth, Nick_Tarleton, Nick_Tarleton
comment by Paul Crowley (ciphergoth) · 2009-04-04T12:42:57.325Z · LW(p) · GW(p)

Drivers think they are comparing themselves to the other drivers they see on the road. In fact, they are comparing themselves to the drivers whose skill they have most cause to think about, which is overwhelmingly often the worst drivers on the road.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-04-04T13:01:17.458Z · LW(p) · GW(p)

Yes, this is the same principle of a biased relative estimate due to comparison against a non-random subset of others.

comment by Nick_Tarleton · 2009-04-04T07:27:26.774Z · LW(p) · GW(p)

This is great information. Do you have a link?

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-04-04T11:48:08.356Z · LW(p) · GW(p)

Sadly, no; I have a poor memory for details and have not generally been in the habit of saving links to my sources, something I need to correct. Chances are it's actually a synthesis of multiple sources, all of which I read a year or more ago. Mea culpa.

comment by John_Maxwell (John_Maxwell_IV) · 2009-04-02T18:33:38.921Z · LW(p) · GW(p)

Anyone trying to increase their rationality is probably in the upper 10% already.

Upper 10% of what? They might be in the upper 10% for truth-seeking, but not in the upper 10% of any number of other fields in which confidence could prove useful.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-02T20:30:55.310Z · LW(p) · GW(p)

Good point, if we're talking about confidence in abilities in general, and not just in rationality.

comment by Nick_Tarleton · 2009-04-03T01:11:07.266Z · LW(p) · GW(p)

Who do you think is going to be more motivated to think about math: someone who feels it is their duty to become smarter, or a naive student who believes he or she has the answer to some mathematical problem and is only lacking a proof?

Or, how about the student who believes they may have the answer, and has a burning itch to know whether this is the case?

(Really, though, it's going to be the one with something to protect.)

comment by Furcas · 2009-04-02T05:50:46.175Z · LW(p) · GW(p)

I'd say the benefits have to outweigh the costs. If you succeed in achieving your goal despite holding a significant number of false beliefs relevant to this goal, it means you got lucky: Your success wasn't caused by your decisions, but by circumstances that just happened to be right.

That the human brain is wired in such a way that self-deception gives us an advantage in some situations may tip the balance a little bit, but it doesn't change the fact that luck only favors us a small fraction of the time, by definition.

Replies from: pjeby, cousin_it, John_Maxwell_IV
comment by pjeby · 2009-04-02T14:22:56.908Z · LW(p) · GW(p)

That the human brain is wired in such a way that self-deception gives us an advantage in some situations may tip the balance a little bit, but it doesn't change the fact that luck only favors us a small fraction of the time, by definition.

On the contrary: "luck" is a function of confidence in two ways. First, people volunteer more information and assistance to those who are confident about a goal. And second, the confident are more likely to notice useful events and information relative to their goals.

Those two things are why people think the "law of attraction" has some sort of mystical power. It just means they're confident and looking for their luck.

comment by cousin_it · 2009-04-02T12:10:42.043Z · LW(p) · GW(p)

As the post hinted, self-deception can give you confidence which is useful in almost all real life situations, from soldier to socialite. Far from "tipping the balance a little bit", a confidence upgrade is likely to improve your life much more than any amount of rationality training (in the current state of our Art).

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-04-02T15:49:18.230Z · LW(p) · GW(p)

Too vague. It's not clear what your argument's denotation is, but its connotation (that becoming overconfident is vastly better than trying to be rational) is a strong and dubious assertion that needs more support to move outside the realm of punditry.

Replies from: cousin_it, John_Maxwell_IV
comment by cousin_it · 2009-04-02T18:53:50.770Z · LW(p) · GW(p)

IMO John_Maxwell_IV described the benefits of confidence quite well. For the other side see my post where I explicitly asked people what benefit they derive from the OB/LW Art of Rationality in its current state. Sorry to say, there weren't many concrete answers. Comments went mostly along the lines of "well, no tangible benefits for me, but truth-seeking is so wonderful in itself". If you can provide a more convincing answer, please do.

comment by John_Maxwell (John_Maxwell_IV) · 2009-04-02T16:39:04.676Z · LW(p) · GW(p)

People who debate this often seem to argue for an all-or-nothing approach. I suspect the answer lies somewhere in the middle: be confident if you're a salesperson but not if you're a general, for instance. I might look like a member of the "always-be-confident" side to all you extreme epistemic rationalists, but I'm not.

Replies from: SoullessAutomaton, Annoyance
comment by SoullessAutomaton · 2009-04-02T21:05:28.213Z · LW(p) · GW(p)

People who debate this often seem to argue for an all-or-nothing approach. I suspect the answer lies somewhere in the middle: be confident if you're a salesperson but not if you're a general, for instance.

I think a better conclusion is: be confident if you're being evaluated by other people, but cautious if you're being evaluated by reality.

A lot of the confusion here seems to be people with more epistemic than instrumental rationality having difficulty with the idea of deliberately deceiving other people.

Replies from: John_Maxwell_IV, pjeby
comment by John_Maxwell (John_Maxwell_IV) · 2009-04-02T22:43:41.236Z · LW(p) · GW(p)

But there is another factor: humans are penalized by themselves for doubt. If they (correctly) estimate their ability as low, they may decide not to try at all, and therefore fail to improve. The doubt's what I'm interested in, not tricking others.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-04-02T23:23:17.950Z · LW(p) · GW(p)

A valid point! However, I think it is the decision to not try that should be counteracted, not the levels of doubt/confidence. That is, cultivate a healthy degree of hubris--figure out what you can probably do, then aim higher, preferably with a plan that allows a safe fallback if you don't quite make it.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2009-04-02T23:52:09.353Z · LW(p) · GW(p)

If I could just tell myself to do things and then do them exactly how I told myself, my life would be fucking awesome. Planning isn't hard. It's the doing that's hard.

Someone could (correctly) estimate their ability as low and rationally give it a try anyway, but I think their effort would be significantly lower than someone who knew they could do something.

Edit: I just realized that someone reading the first paragraph might get the idea that I'm morbidly obese or something like that. I don't have any major problems in my life--just big plans that are mostly unrealized.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-04-03T00:10:16.462Z · LW(p) · GW(p)

You may be correct, and as someone with a persistent procrastination problem I'm in no position to argue with your point.

But still, I am hesitant to accept a blatant hack (actual self-deception) over a more elegant solution (finding a way to expend optimal effort while still having a rational evaluation of the likelihood of success).

For instance, I believe that another LW commenter, pjeby, has written about the issues related to planning vs. doing on his blog.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2009-04-03T00:39:09.969Z · LW(p) · GW(p)

Yeah, I've read some of pjeby's stuff, and I remember being surprised by how non-epistemically rational his tips were, given that he posts here. (If I had remembered any of the specific tips, I probably would have included them.)

If you change your mind and decide to take the self-deception route, I recommend this essay and subsequent essays as steps to indoctrinate yourself.

Replies from: pjeby, SoullessAutomaton
comment by pjeby · 2009-04-03T00:53:21.394Z · LW(p) · GW(p)

I'm not an epistemic rationalist, I'm an instrumental one. (At least, if I understand those terms correctly.)

That is, I'm interested in maps that help me get places, whether they "accurately" reflect the territory or not. Sometimes, having a too-accurate map -- or spending time worrying about how accurate the map is -- is detrimental to actually accomplishing anything.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-04-03T01:16:06.427Z · LW(p) · GW(p)

As is probably clear, I am an epistemological rationalist in essence, attempting to understand and cultivate instrumental rationality, because epistemological rationality itself forces me to acknowledge that it alone is insufficient, or even detrimental, to accomplishing my goals.

Reading Less Wrong, and observing the conflicts between epistemological and instrumental rationality, has ironically driven home the point that one of the keys to success is carefully managing controlled self-deception.

I'm not sure yet what the consequences of this will be.

Replies from: pjeby
comment by pjeby · 2009-04-03T03:44:39.748Z · LW(p) · GW(p)

It's not really self-deception -- it's selective attention. If you're committed to a course of action, information about possible failure modes is only relevant to the extent that it helps you avoid them. And for the most useful results in life, most failures aren't so rapid that you get no warning, nor so catastrophic as to be uncorrectable afterwards.

Humans are also biased towards being socially underconfident, because in our historic environment, the consequences of a social gaffe could be significant. In the modern era, though, it's not that common for a minor error to produce severe consequences -- you can always start over someplace else with another group of people. So that's a very good example of an area where more factual information can lead to enhanced confidence.

A major difference between the confident and unconfident is that the unconfident focus on "hard evidence" in the past, while the confident focus on "possibility evidence" in the future. When an optimist says "I can", it means, "I am able to develop the capability and will eventually succeed if I persist". Whereas a pessimist may only feel comfortable saying "I can" if they mean, "I have done it before."

Neither one of them is being "self-deceptive" -- they are simply selecting different facts to attend to (or placing them in different contexts), resulting in different emotional and motivational responses. "I haven't done this before" may well mean excitement and challenge to the optimist, but self-doubt and fear for the pessimist. (See also fixed vs. growth mindsets.)

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2009-04-03T04:10:05.222Z · LW(p) · GW(p)

Humans are also biased towards being socially underconfident, because in our historic environment, the consequences of a social gaffe could be significant.

I wish I could upmod you twice for this.

comment by SoullessAutomaton · 2009-04-03T01:00:51.731Z · LW(p) · GW(p)

Yeah, I've read some of pjeby's stuff, and I remember being surprised by how non-epistemically rational his tips were, given that he posts here.

Nowhere is it guaranteed that, given the cognitive architecture humans have to work with, epistemic rationality is the easiest instrumentally rational manner to achieve a given goal.

But, personally, I'm still holding out for a way to get from the former to the latter without irrevocable compromises.

Replies from: pjeby
comment by pjeby · 2009-04-03T03:33:47.763Z · LW(p) · GW(p)

Nowhere is it guaranteed that, given the cognitive architecture humans have to work with, epistemic rationality is the easiest instrumentally rational manner to achieve a given goal.

But, personally, I'm still holding out for a way to get from the former to the latter without irrevocable compromises.

It's easier than you think, in one sense. The part of you that worries about that stuff is significantly separate from -- and to some extent independent of -- the part of you that actually makes you do things. It doesn't matter whether "you" are only 20% certain about the result as long as you convince the doing part that you're 100% certain you're going to be doing it.

Doing that merely requires that you 1) actually communicate with the doing part (often a non-trivial learning process for intellectuals such as ourselves), and 2) actually take the time to do the relevant process(es) each time it's relevant, rather than skipping it because "you already know".

Number 2, unfortunately, means that akrasia is quasi-recursive. It's not enough to have a procedure for overcoming it, you must also overcome your inertia against applying that procedure on a regular basis. (Or at least, I have not yet discovered any second-order techniques to get myself or anyone else to consistently apply the first-order techniques... but hmmm... what if I applied a first-order technique to the second-order domain? Hmm.... must conduct experiments...)

comment by pjeby · 2009-04-03T00:57:50.193Z · LW(p) · GW(p)

I think a better conclusion is: be confident if you're being evaluated by other people, but cautious if you're being evaluated by reality.

An excellent heuristic, indeed!

comment by Annoyance · 2009-04-02T18:22:06.562Z · LW(p) · GW(p)

It depends on the cost of overconfidence. Nothing ventured, nothing gained. But if the expected cost of venturing wrongly is greater than the expected return, it's better to be careful what you attempt. If the potential loss is great enough, cautiousness is a virtue. If there's little investment to lose, cautiousness is a vice.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2009-04-02T16:28:34.193Z · LW(p) · GW(p)

OK, I see you don't believe me that you should sometimes accept and sometimes reject epistemic rationality for a price. So here's a simple mathematical model:

Let's say agent A accepts the offer of increased epistemic rationality for a price, and agent N does not.  P is the probability that A will decide differently from N.  F(A or N) is the expected value of N's original course of action as a function of which agent takes it, while S(A) is the expected value of the course of action that A might switch to.  If there is a cost C associated with becoming agent A, then agent N should become agent A if and only if

(1 - P) F(A) + P S(A) - C >= F(N)

The left side of the inequality is not bigger than the right side "by definition"; it depends on the circumstances.  Eliezer's dessert-ordering example is a situation where the above inequality does not hold.

If you complain that agent N can't possibly know all the variables in the inequality, then I agree with you.  He will be estimating them somewhat poorly.  However, that complaint in no way supports the view that the left side is in fact bigger.  Someone once said that "Anything you need to quantify can be measured in some way that is superior to not measuring it at all."  Just as the difficulty of measuring utility is not a valid objection to utilitarianism, the difficulty of guessing what a better-informed self would do is not a valid objection to using this inequality.
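To make the inequality concrete, here's a toy calculation (Python, made-up numbers) using the same symbols as above; the point is only that the sign of the comparison depends on the estimates, not that these particular values mean anything.

```python
# Toy sketch with made-up numbers: plug hypothetical estimates into the
# inequality above to see whether becoming the better-informed agent A is
# worth it. The values are illustrative, not measurements.
def worth_becoming_A(F_A, F_N, S_A, P, C):
    """True iff (1 - P) * F(A) + P * S(A) - C >= F(N)."""
    return (1 - P) * F_A + P * S_A - C >= F_N

# Case 1: the information is fairly likely to change a high-stakes decision.
print(worth_becoming_A(F_A=10, F_N=10, S_A=50, P=0.3, C=5))   # True: 17 >= 10

# Case 2: the decision almost never changes, and knowing carries its own cost
# (say, lost confidence before a speech), so staying agent N wins.
print(worth_becoming_A(F_A=9, F_N=10, S_A=11, P=0.05, C=0))   # False: 9.1 < 10
```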

that luck only favors us a small fraction of the time, by definition.

That's a funny definition of "luck" you're using.

Replies from: Furcas
comment by Furcas · 2009-04-02T18:28:56.921Z · LW(p) · GW(p)

Yes, the right side can be bigger, and occasionally it will be. If you get lucky.

If the information that N chooses to remain ignorant of happens to be of little relevance to any decision N will take in the future, and if his self-deception allows him to be more confident than he would have been otherwise, and if this increased confidence grants him a significant advantage, then the right side of the equation will be bigger than the left side.

That's a funny definition of "luck" you're using.

It is? Why do you think people are pleasantly surprised when they get lucky, if not because it's a rare occurrence?

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2009-04-02T18:43:00.765Z · LW(p) · GW(p)

If the information that N chooses to remain ignorant of happens to be of little relevance to any decision N will take in the future, and if his self-deception allows him to be more confident than he would have been otherwise, and if this increased confidence grants him a significant advantage, then the right side of the equation will be bigger than the left side.

Not quite.

  • The information could be of high relevance, but it could so happen that it won't cause him to change his mind.

  • He could be choosing among close alternatives, so switching to a slightly better alternative could be of limited value.

  • Remember also that failure to search for disconfirming evidence doesn't necessarily constitute self-deception.

It is? Why do you think people are pleasantly surprised when they get lucky, if not because it's a rare occurrence?

Sorry, I guess your definition of luck was reasonable. But in this case, it's not necessarily true that the probability of the right side being greater is lower than 50%, in which case you wouldn't always have to "get lucky".

Replies from: Furcas
comment by Furcas · 2009-04-02T22:49:03.319Z · LW(p) · GW(p)

I've been thinking about this on and off for an hour, and I've come to the conclusion that you're right.

My mistake comes from the fact that the examples I was using to think about this were all examples where one has low certainty about whether the information is irrelevant to one's decision making. In this case, the odds are that being ignorant will yield a less than maximal chance of success. However, there are situations in which it's possible to know with great certainty that some piece of information is irrelevant to one's decision making, even if you don't know what the information is. These situations are mostly those that are limited in scope and involve a short-term goal, like giving a favorable first impression or making a good speech. For instance, you might suspect that your audience hates your guts, and knowing that this is in fact the case would make you less confident during your speech than merely suspecting it, so you'd be better off waiting until after the speech to find out about this particular fact.

Although, if I were in that situation, and they did hate my guts, I'd rather know about it and find a way to remain confident that doesn't involve willful ignorance. That said, I have no difficulty imagining a person who is simply incapable of finding such a way.

I wonder, do all situations where instrumental rationality conflicts with epistemic rationality have to do with mental states over which we have no conscious control?

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2009-04-02T23:58:04.633Z · LW(p) · GW(p)

I've been thinking about this on and off for an hour, and I've come to the conclusion that you're right.

Wow, this must be like the 3rd time that someone on the internet has said that to me! Thanks!

Although, if I were in that situation, and they did hate my guts, I'd rather know about it and find a way to remain confident that doesn't involve willful ignorance.

If you think of a way, please tell me about it.

I wonder, do all situations where instrumental rationality conflicts with epistemic rationality have to do with mental states over which we have no conscious control?

Information you have to pay money for doesn't fit into this category.