We Change Our Minds Less Often Than We Think

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-03T18:14:52.000Z · LW · GW · Legacy · 120 comments

Over the past few years, we have discreetly approached colleagues faced with a choice between job offers, and asked them to estimate the probability that they will choose one job over another. The average confidence in the predicted choice was a modest 66%, but only 1 of the 24 respondents chose the option to which he or she initially assigned a lower probability, yielding an overall accuracy rate of 96%.

—Dale Griffin and Amos Tversky1

When I first read the words above—on August 1st, 2003, at around 3 o’clock in the afternoon—it changed the way I thought. I realized that once I could guess what my answer would be—once I could assign a higher probability to deciding one way than the other—then I had, in all probability, already decided. We change our minds less often than we think. And most of the time we become able to guess what our answer will be within half a second of hearing the question.
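A minimal sketch of the underconfidence pattern in the quoted study, using invented forecast data; only the 66%/96% contrast comes from Griffin and Tversky:

    # Compare stated confidence with realized accuracy. The data are
    # hypothetical; in the study, mean confidence was ~66% while the
    # predicted choice matched the actual choice ~96% of the time.
    predictions = [
        # (stated probability of the predicted choice, prediction came true)
        (0.70, True), (0.60, True), (0.65, True), (0.75, True),
        (0.66, True), (0.55, False), (0.80, True), (0.62, True),
    ]

    mean_confidence = sum(p for p, _ in predictions) / len(predictions)
    accuracy = sum(correct for _, correct in predictions) / len(predictions)

    print(f"mean stated confidence: {mean_confidence:.0%}")
    print(f"realized accuracy:      {accuracy:.0%}")
    # accuracy far exceeds stated confidence: underconfidence, the
    # reverse of the usual overconfidence finding.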

How swiftly that unnoticed moment passes, when we can’t yet guess what our answer will be; the tiny window of opportunity for intelligence to act. In questions of choice, as in questions of fact.

The principle of the bottom line is that only the actual causes of your beliefs determine your effectiveness as a rationalist. Once your belief is fixed, no amount of argument will alter the truth-value; once your decision is fixed, no amount of argument will alter the consequences.

You might think that you could arrive at a belief, or a decision, by non-rational means, and then try to justify it, and if you found you couldn’t justify it, reject it.

But we change our minds less often—much less often—than we think.

I’m sure that you can think of at least one occasion in your life when you’ve changed your mind. We all can. How about all the occasions in your life when you didn’t change your mind? Are they as available, in your heuristic estimate of your competence?

Between hindsight bias, fake causality, positive bias, anchoring/priming, et cetera, et cetera, and above all the dreaded confirmation bias, once an idea gets into your head, it’s probably going to stay there.

1Dale Griffin and Amos Tversky, “The Weighing of Evidence and the Determinants of Confidence,” Cognitive Psychology 24, no. 3 (1992): 411–435.

120 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Doug_S. · 2007-10-03T19:57:32.000Z · LW(p) · GW(p)

I hate changing my mind based on my parents' advice because I want to demonstrate that I'm capable of making good decisions on my own, especially since we seem to disagree on some fundamental values. Specifically, they love their jobs and put a moral value on productivity, while my goal in life is to "work" as little as possible and have as much "fun" as possible.

Replies from: None
comment by [deleted] · 2013-09-04T14:59:19.460Z · LW(p) · GW(p)

I hate changing my mind based on my parents' advice because I want to demonstrate that I'm capable of making good decisions on my own...

Eliezer never said to change your mind based on wrong advice! However, if you feel as if you should be following your parents' advice, perhaps you should question exactly how capable you really are (at the moment).

comment by Felix2 · 2007-10-03T22:54:59.000Z · LW(p) · GW(p)

Does this mean that if we cannot remember ever changing our minds, our minds are very good at removing clutter?

Or, consider a question that you've not made up your mind on: Does this mean that you're most likely to never make up your mind?

And, anyway, in light of those earlier posts concerning how well people estimate numeric probabilities, should it be any wonder that 66% = 96%?

Replies from: DanielLC
comment by DanielLC · 2010-09-05T18:53:15.489Z · LW(p) · GW(p)

Don't they normally make them more certain? Like, if they're 96% sure, there's a 66% chance that they're right, rather than the other way around?

comment by Adirian · 2007-10-03T23:06:11.000Z · LW(p) · GW(p)

Not to argue, but to point out, that this is not necessarily a bad thing. It depends entirely on the basis of one's conclusion. Gut instincts are quite often correct about things we have no conscious evidence for - because our unconscious does have pretty good evidence filters. Which is one of the reasons I suggested rationalization is not necessarily a bad thing, as it can be used to construct a possible rational basis for conceptualizations developed without conscious thought, thus permitting us to judge the merit of those ideas.

comment by Constant2 · 2007-10-03T23:58:46.000Z · LW(p) · GW(p)

Here is one way to change your mind. Think through something carefully, relying on strong connections. You may at some point walk right into a conclusion that contradicts a previous opinion. At this point something will give. The strength of this method is that it is strengthened by the very attachment to your ideas that it undermines. The more stubborn you are, the harder you push against your own stubbornness.

comment by Senthil · 2007-10-04T04:09:45.000Z · LW(p) · GW(p)

I agree with Adirian that not changing our minds is not necessarily a bad thing.

The problem, I guess, as with most things, is that we can't be sure which way to go. Gut feelings are often quite correct. But how do we know when we are having a bias which is not good for us and when it's a gut feeling? Gut feelings inherently aren't questionable. Biases need to be kept in check.

If we run through the standard biases and logical fallacies like a checklist and what we think doesn't fall under any of them, we can go with our gut instinct. Else, give whatever we have in mind a second thought. What we do may not be foolproof, but it at least takes us in a direction which would make changing our minds, when required, a less painful process.

comment by michael_vassar3 · 2007-10-04T05:18:29.000Z · LW(p) · GW(p)

It probably doesn't help to live in a society where changing one's positions in response to evidence is considered "waffling", and is considered to show a lack of conviction.

Divorce is a lot more common than 4%, so people do admit mistakes when given enough evidence.

Replies from: Viliam_Bur, Peterdjones, tlhonmey
comment by Viliam_Bur · 2011-10-29T11:38:34.021Z · LW(p) · GW(p)

Changing your mind or "updating" is not necessarily a sign of rationality. You could also update for wrong reasons.

For example, a divorce can happen when a person has unrealistic expectations about marriage. Updating their beliefs about their partner would be just a side effect of refusing to update their beliefs about marriage.

Also, in some cases, the divorce could have been planned since the beginning (for example for financial gain), so it actually did not include a change of mind.

comment by Peterdjones · 2011-10-29T13:56:46.702Z · LW(p) · GW(p)

I think the embargo on mind-changing is a special case for politicians: after all, if they say one thing on the hustings, and then do another in office, that makes a mockery of democracy. However, if it is applied to non-politicians, that would be fallacious.

Replies from: sparkles, Izeinwinter
comment by sparkles · 2013-02-17T19:12:07.686Z · LW(p) · GW(p)

If they say one thing and intend to do another, sure - but if they actually update? That may be bad PR, but I don't think it's undemocratic.

Replies from: Peterdjones
comment by Peterdjones · 2013-02-27T23:57:04.346Z · LW(p) · GW(p)

If you can't rely on politicians to do something like what they said they were going to, what's the point in voting? Ideally, a politician who has a change of heart should stand for re-election.

Replies from: wedrifid
comment by wedrifid · 2013-02-28T19:33:39.540Z · LW(p) · GW(p)

If you can't rely on politicians to do something like what they said they were going to, what's the point in voting?

You could have a prediction about what they respectively will do and have a preference over those outcomes.
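As a minimal sketch of that decision rule (all candidates, probabilities, and utilities below are invented, not from this thread): vote for whichever candidate has the better expected outcome under your predictions of what they will actually do, promises aside.

    # Voting on predicted behavior rather than promises: for each candidate,
    # hold a probability distribution over what they will actually do in
    # office, attach utilities to those outcomes, and pick the higher
    # expectation. All names and numbers are hypothetical.
    candidates = {
        "A": [(0.6, 10), (0.4, -20)],  # (probability of outcome, utility)
        "B": [(0.9, 2), (0.1, -5)],
    }

    def expected_utility(outcomes):
        return sum(p * u for p, u in outcomes)

    best = max(candidates, key=lambda c: expected_utility(candidates[c]))
    print(best)  # "B": a reliable modest outcome beats a risky grand promise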

Replies from: Peterdjones
comment by Peterdjones · 2013-03-03T14:14:56.671Z · LW(p) · GW(p)

So if they ruin the economy, and I successfully predict that, I smile and collect my winnings?

Replies from: Kindly, wedrifid
comment by Kindly · 2013-03-03T18:26:49.499Z · LW(p) · GW(p)

Presumably if you can predict that Candidate A will ruin the economy, then you vote for Candidate B instead.

Unless you can think of a way of winning by having advance knowledge that the economy will be ruined, one which will net you greater gain than an un-ruined economy would. Then you may selfishly vote for Candidate A.

I'm ignoring here the question of how much your opinion influences the outcome of the election, of course. Also if you end up predicting that all the candidates will ruin the economy equally, you don't have much of a decision to make.

Replies from: Peterdjones
comment by Peterdjones · 2013-03-03T18:36:12.387Z · LW(p) · GW(p)

Presumably if you can predict that Candidate A will ruin the economy, then you vote for Candidate B instead.

I can only predict what will happen on the basis that a) their policies will have a certain effect and b) they will actually implement their policies. Which gets back to the original point: if they are not going to do what they say, what is the point of voting?

Replies from: Kindly, wedrifid
comment by Kindly · 2013-03-03T20:30:47.519Z · LW(p) · GW(p)

I think I agree. I also think wedrifid wanted to talk about predictions of what the candidates will do, even though they are not guaranteed not to change their minds.

This doesn't seem impossible, just harder. You'd have to make a guess as to how likely the candidates are to implement a different policy from the one they promised, as well as the effect the possible policies will have.

The candidates do have an incentive to signal that they are unlikely to "waffle". If you are relatively certain to implement your policies, then at least those who agree with you will predict that you'll have a good effect. If you look like you might change your mind, even your supporters might decide to take a different option, because who knows what you will do?

In theory, you might gain a bigger advantage by somehow signaling that you will change your mind for good reasons. Then if new information comes up in the future, you're a better choice than anyone who promises not to change their mind at all. But this is trickier and less convincing.

comment by wedrifid · 2013-03-04T04:33:38.672Z · LW(p) · GW(p)

I can only predict what will happen on the basis that a) their policies will have a certain effect and b) they will actually implement their policies.

That seems to be a significant limitation.

Which gets back to the original point: if they are not going to do what they say, what is the point of voting?

Fortunately, not everybody has said limitation.

comment by wedrifid · 2013-03-04T03:27:59.886Z · LW(p) · GW(p)

So if they ruin the economy, and I successfully predict that, I smile and collect my winnings?

Both candidates being likely to successfully manage to ruin the economy is a problem quite distinct from politicians lying.

comment by Izeinwinter · 2013-03-03T19:10:40.328Z · LW(p) · GW(p)

You misrepresent democracy very badly in the above post. Politicians are not agents of the voters; they are representatives of them, appointed by, and accountable to, the demos, but not a mirror of it: they are not supposed to enact the policies voters thought appropriate two years ago at the polls, or whatever polls well today. They are supposed to do what the voters would want done if they had time to research the issue and give it some thought, incorporating all data about the present situation. If policy was supposed to reflect the averaged will of the people, politicians would be entirely redundant and we could just do lawmaking by popular initiative.

Replies from: Peterdjones
comment by Peterdjones · 2013-03-03T19:23:11.710Z · LW(p) · GW(p)

Of course it is unworkable for politicians to stick rigidly to their manifestos. It is also unworkable for them to discard their manifestos on day one.

comment by tlhonmey · 2021-01-12T17:51:22.039Z · LW(p) · GW(p)

On the other hand, of the people I know who have gotten divorced, refusal to admit mistakes seems to be one of the leading causes...

comment by Tony · 2007-10-04T14:53:19.000Z · LW(p) · GW(p)

I wonder if the act of answering the question actually causes the decision to firm up. Kind of the OvercomingBias Uncertainty Principle.

comment by Robin_Hanson2 · 2007-10-04T15:03:24.000Z · LW(p) · GW(p)

It is nice to have a clear example of where people are consistently underconfident. Are there others? Michael, good point about divorce.

Replies from: Peacewise
comment by Peacewise · 2011-10-29T10:19:45.753Z · LW(p) · GW(p)

In my experience teenagers are often underconfident about their parents' decision-making ability... and indeed overconfident about their own decision-making ability.

Many women seem to be underconfident about their driving skills; the consequence of this is that they have fewer accidents! Though maybe, being male, I'm overconfident and hence judge their appropriate confidence as underconfidence.

Replies from: gwern
comment by gwern · 2012-01-04T00:50:43.103Z · LW(p) · GW(p)

Are these kids just being stupid? That's the conventional explanation: They're not thinking, or by the work-in-progress model, their puny developing brains fail them. Yet these explanations don't hold up. As Laurence Steinberg, a developmental psychologist specializing in adolescence at Temple University, points out, even 14- to 17-year-olds—the biggest risk takers—use the same basic cognitive strategies that adults do, and they usually reason their way through problems just as well as adults. Contrary to popular belief, they also fully recognize they're mortal. And, like adults, says Steinberg, "teens actually overestimate risk."

So if teens think as well as adults do and recognize risk just as well, why do they take more chances? Here, as elsewhere, the problem lies less in what teens lack compared with adults than in what they have more of. Teens take more risks not because they don't understand the dangers but because they weigh risk versus reward differently: In situations where risk can get them something they want, they value the reward more heavily than adults do.

A video game Steinberg uses draws this out nicely. In the game, you try to drive across town in as little time as possible. Along the way you encounter several traffic lights. As in real life, the traffic lights sometimes turn from green to yellow as you approach them, forcing a quick go-or-stop decision. You save time—and score more points—if you drive through before the light turns red. But if you try to drive through the red and don't beat it, you lose even more time than you would have if you had stopped for it. Thus the game rewards you for taking a certain amount of risk but punishes you for taking too much.

When teens drive the course alone, in what Steinberg calls the emotionally "cool" situation of an empty room, they take risks at about the same rates that adults do. Add stakes that the teen cares about, however, and the situation changes. In this case Steinberg added friends: When he brought a teen's friends into the room to watch, the teen would take twice as many risks, trying to gun it through lights he'd stopped for before. The adults, meanwhile, drove no differently with a friend watching.

http://ngm.nationalgeographic.com/print/2011/10/teenage-brains/dobbs-text (emphasis added)

EDIT: "What's Wrong With the Teenage Mind?", WSJ: http://online.wsj.com/article/SB10001424052970203806504577181351486558984.html

Recent studies in the neuroscientist B.J. Casey's lab at Cornell University suggest that adolescents aren't reckless because they underestimate risks, but because they overestimate rewards—or, rather, find rewards more rewarding than adults do. The reward centers of the adolescent brain are much more active than those of either children or adults. Think about the incomparable intensity of first love, the never-to-be-recaptured glory of the high-school basketball championship.

What teenagers want most of all are social rewards, especially the respect of their peers. In a recent study by the developmental psychologist Laurence Steinberg at Temple University, teenagers did a simulated high-risk driving task while they were lying in an fMRI brain-imaging machine. The reward system of their brains lighted up much more when they thought another teenager was watching what they did—and they took more risks.

...Simply increasing the driving age by a year or two doesn't have much influence on the accident rate, for example. What does make a difference is having a graduated system in which teenagers slowly acquire both more skill and more freedom—a driving apprenticeship. Instead of simply giving adolescents more and more school experiences—those extra hours of after-school classes and homework—we could try to arrange more opportunities for apprenticeship. AmeriCorps, the federal community-service program for youth, is an excellent example, since it provides both challenging real-life experiences and a degree of protection and supervision. "Take your child to work" could become a routine practice rather than a single-day annual event, and college students could spend more time watching and helping scientists and scholars at work rather than just listening to their lectures. Summer enrichment activities like camp and travel, now so common for children whose parents have means, might be usefully alternated with summer jobs, with real responsibilities.

EDITEDIT: http://pss.sagepub.com/content/19/7/650.short

Soman (2004) offered loss aversion as a potential explanation for the sunk-cost fallacy. Supporting evidence comes from research in which young adults have reported that their sunk-cost decisions are motivated by loss avoidance (Frisch, 1993). This focus on losses may reflect younger adults' negativity bias in information processing. Younger adults weigh negative information more heavily than positive information (Baumeister, Bratslavsky, Finkenauer, & Vohs, 2001). In contrast, older adults demonstrate a positivity effect (Carstensen & Mikels, 2005). Their decisions reflect a more balanced view of gains and losses (Wood, Busemeyer, Koling, Cox, & Davis, 2005). If older adults are less likely than younger adults to focus exclusively on losses, and loss aversion contributes to the sunk-cost fallacy, then older adults may be less likely than younger adults to commit the sunk-cost fallacy.
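A minimal prospect-theory-style sketch of the loss-aversion asymmetry the abstract invokes; lambda = 2.25 is Tversky and Kahneman's classic estimate and is used here purely for illustration:

    # Loss-averse value function: losses loom larger than gains by a
    # factor lambda > 1, so negative information is weighted more heavily.
    LAMBDA = 2.25

    def value(x: float) -> float:
        """Subjective value of a gain (x > 0) or loss (x < 0)."""
        return x if x >= 0 else LAMBDA * x

    # A 50/50 gamble of +100 / -100 has expected monetary value 0, but a
    # loss-averse agent values it negatively and walks away.
    gamble = 0.5 * value(100) + 0.5 * value(-100)
    print(gamble)  # -62.5: the loss outweighs the equal-sized gain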

Replies from: Peacewise
comment by Peacewise · 2012-01-04T02:44:30.913Z · LW(p) · GW(p)

An interesting article gwern thanks.

It provides a rationale in support of my statement that teenagers are often overconfident in their decision making ability. The article argues that teenagers' reward perception is higher and hence the risk seems reasonable compared to the "high" reward; that is one form of overconfidence.

Certainly is interesting to see that adults aren't affected by having a peer present during the experiment, whereas teenagers are more likely to engage in risk taking, perhaps better called - reward seeking - when their peer is watching during the experiment.

Replies from: gwern
comment by gwern · 2012-01-04T03:50:32.325Z · LW(p) · GW(p)

It provides a rationale in support of my statement that teenagers are often overconfident in their decision making ability.

But I posted that to say exactly the opposite.

When deciding whether to take a risk, the risk and reward are the two most important factors. It's easy to criticize someone for failing to accurately judge the risk: you simply collect highly accurate statistics on the risk, and compare with what the person says when you ask. But as the article says, when we do this, the teenagers are not discounting risks, but exaggerating them.

So that leaves only the reward. How do you prove someone is wrong about how much they will enjoy the reward for taking the risk? It seems rather hard to do, and you definitely haven't done it... The peer thing doesn't prove anything about them being wrong: gaining a reputation for bravery or risk-taking or gratifying your peers and other social factors are present simultaneously. (If you have ever shown off for friends, or watched someone show off, you will understand the rewards are substantial.)

Replies from: Peacewise
comment by Peacewise · 2012-01-04T08:35:10.245Z · LW(p) · GW(p)

The teenager in the article didn't exaggerate the risks when driving at 113 mph; he didn't even consider the risk of getting caught. Is a trip to court, lawyer's fees, several fines, and risking death worth the thrill of driving 113 mph? You tell me.

One might also consider the reality of self-serving bias. The teenager paints himself in the best kind of light with regards to safety. He gets caught doing 113 mph and is peeved he's been charged with "reckless driving". NO, he says, I wasn't reckless, I wasn't just gunning it, I was driving, the road is dry and straight, it was daytime - all these comments of his are designed to make him sound as if he isn't reckless. Yet the expert, the police officer, charges him with reckless driving. Does the police officer have it wrong? Is driving 113 mph on a public road reckless? The article does support my observation about teenagers. That particular teenager is overconfident in his ability to decide what is reckless.

Replies from: wedrifid, gwern
comment by wedrifid · 2012-01-04T09:18:20.503Z · LW(p) · GW(p)

One might also consider the reality of self-serving bias. The teenager paints himself in the best kind of light with regards to safety. He gets caught doing 113 mph and is peeved he's been charged with "reckless driving". NO, he says, I wasn't reckless, I wasn't just gunning it, I was driving, the road is dry and straight, it was daytime - all these comments of his are designed to make him sound as if he isn't reckless. Yet the expert, the police officer, charges him with reckless driving. Does the police officer have it wrong? Is driving 113 mph on a public road reckless? The article does support my observation about teenagers. That particular teenager is overconfident in his ability to decide what is reckless.

It's not especially reckless. It is 'Reckless Driving'. Disobeying the law to that degree for little payoff is a dumbass move, not so much a physically reckless one.

comment by gwern · 2012-01-04T16:41:28.065Z · LW(p) · GW(p)

The teenager in the article didn't exaggerate the risks when driving at 113 mph, he didn't even consider the risk of getting caught.

So you say.

Is a trip to court, lawyers fee, several fines and risking death worth the thrill of driving 113 mph? You tell me.

You exaggerate. That's only if you are caught and worst-case scenarios if you are caught to boot. Is it worth it? Ask any skydiver; I've gone skydiving, and it is amazing. And I'm not even a teenager any more.

(This sounds like the usual generalizing problem: "I don't think that sounds insanely fun and awesome, so obviously no teenager can find it that rewarding and by the previous logic, these teenagers must be making extremely biased assessments of risk; they should stop that. Also, these teens should just stop lying in bed all morning and staying up all night." You are a respectable sober adult, I should not be surprised to learn.)

One might also consider the reality of self serving bias. The teenager paints himself in the best kind of light with regards to safety. He gets caught doing 113 mph and is peeved he's been charged with "reckless driving", NO he says, I wasn't reckless I wasn't just gunning it, I was driving, the road is dry and straight, it was daytime - all these comments of his are designed to make him sound as if he isn't reckless.

Yes, I'm sure the scientists conducting these risk-assessment surveys are so moronic that they only asked teenagers immediately after taking a risk and getting burned, and never even once thought about cognitive dissonance or other such issues.

Replies from: Peacewise
comment by Peacewise · 2012-01-04T18:33:44.377Z · LW(p) · GW(p)

"So you say." Nope, so the author of the article reveals by relaying what the driver said (and implied) about risks. Further it's obvious the teenage driver didn't drive the route before his speed run, or he'd have likely seen the police officer who busted him and not have done the speed run at that time, that probable lack of a pre-drive increased his risk of accident and made certain he got busted that particular time.

"You exaggerate. That's only if you are caught and worst-case scenarios if you are caught to boot. Is it worth it? Ask any skydiver; I've gone skydiving, and it is amazing. And I'm not even a teenager any more. (This sounds like the usual generalizing problem; "I don't think that sounds insanely fun and awesome, so obviously no teenager can find it that rewarding. Also, these teens should just stop laying in bed all morning and staying up all night." You are a respectable sober adult, I should not be surprised to learn.)"

First, it wasn't an exaggeration; it was the facts as revealed by the article you posted! Second, the worst-case scenario I can think of isn't killing himself; it's driving at 113 mph into a school bus and killing and maiming 40 children, then surviving and being paralysed from the neck down for the remainder of his life, which is spent in a prison hospital. Third, one third of American teenage deaths are in motor vehicles. http://www.cdc.gov/nchs/data/databriefs/db37.htm

With regards to your skydiving example, how about you be a good chap and link the appropriate lesswrong description for ludicrously weak analogy.

As for your inclination to dismiss me due to your (somewhat inaccurate) stereotyping, like seriously mate, please put a sock in it; I came to lesswrong hoping to get away from that kind of immature nonsense.

Replies from: thomblake, thomblake, gwern
comment by thomblake · 2012-01-04T18:40:30.058Z · LW(p) · GW(p)

As for your inclination to dismiss me due to your (somewhat inaccurate) stereotyping, like seriously mate, please put a sock in it; I came to lesswrong hoping to get away from that kind of immature nonsense.

I hope this was deliberate irony to be funny.

That is, the 'stereotype' you are offended by seems to be 'respectable sober adult', and you respond with an accusation of "immature nonsense".

Replies from: Peacewise
comment by Peacewise · 2012-01-04T19:36:00.127Z · LW(p) · GW(p)

I'm disappointed that gwern, a presumably respected poster, going by his karma and post count, jumps so quickly to typical internet trash talk.

Replies from: Vaniver, thomblake
comment by Vaniver · 2012-01-04T19:49:32.571Z · LW(p) · GW(p)

jumps so quickly to typical internet trash talk.

Is it typical internet trash talk to suggest that you enjoy risks less than others might?

comment by thomblake · 2012-01-04T20:03:59.864Z · LW(p) · GW(p)

typical internet trash talk.

I'm interested to learn how to find the Internet you're familiar with - it sounds remarkably more civil than the one I use.

If anything, your "be a good chap" seemed condescending well beyond any 'trash talk' qualities that might be present in what gwern wrote.

comment by thomblake · 2012-01-04T18:55:52.956Z · LW(p) · GW(p)

With regards to your skydiving example, how about you be a good chap and link the appropriate lesswrong description for ludicrously weak analogy.

I don't believe there is one. Also, it's on the strong side as analogies go - it's a risky behavior that is a lot of fun, specifically from going very fast. What would be a better analogy, going fast in speedboats?

Third, one third of American teenage deaths are in motor vehicles.

Maybe I'm misreading, but it looks to me like a little over 35%. That said, I don't see how it's relevant. If one teenager died every 20 years and 1/3 of them were in motor vehicles, would that imply anything? How does the 1/3 of deaths relate to anything about proper analysis of risks, and is anything similar implied by the 13% that die from homicide? Should teens stop going to places where there are other humans, even though it's enjoyable, because someone there might kill them?

comment by gwern · 2012-01-06T00:05:51.187Z · LW(p) · GW(p)

Nope, so the author of the article reveals by relaying what the driver said (and implied) about risks.

So your interpretation of the anecdote as presented by the author overrides the stated summary of the surveys by an involved academic?

Further it's obvious the teenage driver didn't drive the route before his speed run, or he'd have likely seen the police officer who busted him and not have done the speed run at that time, that probable lack of a pre-drive increased his risk of accident and made certain he got busted that particular time.

Your precautions do not eliminate the risk (do police officers not move?), and further, they are non sequiturs: listing possible precautions does not prove or disprove anything about teenagers' risk perceptions and perceived rewards, neither their elevation nor their reduction.

Third, one third of American teenage deaths are in motor vehicles. http://www.cdc.gov/nchs/data/databriefs/db37.htm

What Thom said. Sumner has a good maxim, 'never reason from a price change', that applies here as well. Prices have at least two factors, demand and supply, which interact to give the price - but two factors means you can't reason backwards from the price (or its change) to infer how or whether either the demand or supply changed. We are dealing with an equation with even more variables than a simple supply-demand graph, of which the death-rate is only one and already addressed by the risk underestimation. It is not very useful to learn of a rate with no context or information on what the best rate is. (Another economist said something to the effect that, I don't know what the best number of falling buildings in an earthquake is but it probably is non-zero. Similar observations are true of risk-taking in general.)

As for your inclination to dismiss me due to your (somewhat inaccurate) stereotyping, like seriously mate, please put a sock in it; I came to lesswrong hoping to get away from that kind of immature nonsense.

Your failure to deal at all seriously with the idea (that teens do derive large amounts of utility from the risky activities and this justifies them) isn't very appropriate for LW. I did not stereotype, I drew the logical conclusion from an age-related neurobiological change combined with a lack of empathy that the community has frequently noticed, and I did so in a deliberately non-insulting way.

(Had I intended to be immature, I would have gone with something like 'coward' or 'age-dulled senses' as descriptions of older non-teens' reduced enjoyment of the risky behavior under discussion.)

Replies from: Peacewise
comment by Peacewise · 2012-01-06T15:03:50.104Z · LW(p) · GW(p)

Gwern wrote "(This sounds like the usual generalizing problem: "I don't think that sounds insanely fun and awesome, so obviously no teenager can find it that rewarding and by the previous logic, these teenagers must be making extremely biased assessments of risk; they should stop that. Also, these teens should just stop laying in bed all morning and staying up all night." You are a respectable sober adult, I should not be surprised to learn.)"

Gwern, you stated the above ad hominem, which I find insulting, regardless of whether you meant it as such. You implied that I was thinking such things - both the words AND the method aren't conducive to civil discussion, hence I responded with less civility than I would have preferred. You attacked my character as a means of dismissing my discussion, it's a low tactic and one I didn't expect from LW, it is indeed the typical internet trash talk I mentioned.

My interpretation of the anecdote reveals that the anecdote doesn't necessarily support your argument, whilst the anecdote also doesn't necessarily support the remainder of the article. It's quite clear that the driver didn't overestimate the risks, for he was busted for reckless driving. The charges of reckless driving were made by an expert, on the scene - not by either you, me or anyone else "viewing" the incident in text some time later. I maintain that the police officer is a better judge of the event than either the driver or you or I, hence it is more likely that it was in fact reckless driving than it wasn't reckless driving. Further, since there is no mention of a passenger in the car, and the tone of the article leads me to believe that the author would have mentioned a passenger since the driver would have been risking someone else's life - something of note - we can minimize the notion of peer-induced reward as no one was watching. I say minimize, not discount, for no doubt the driver will tell his peers and perhaps gain some status in the storytelling.

With regards to my "failure to deal at all seriously...", I feel I deal with the issue of teenage overconfidence quite seriously, for I acknowledge that teenagers do undertake risky behaviour, and that studies mentioned in the article show that they perceive the rewards to outweigh the risk. It is quite clear that in perceiving their rewards with such a high (subjective/personal) value, many of them have indeed made an error, for one third of teenage deaths are in car accidents. One should consider if death, both the risk of it and the actual occurrence of it, is truly a fair price to pay for driving fast (or under the influence of alcohol).

With regards to the intention to be immature, I have no knowledge of your intentions - only the observation that attacking my character without addressing the substance of my argument is an immature act.

Gwern wrote "Your precautions do not eliminate the risk (do police officers not move?), and further, they are non sequiturs: listing possible precautions do not prove or disprove anything about teenegers' risk perceptions and receiver rewards, neither their elevation or reduction." The precautions aren't necessarily designed to eliminate the risk, though they may do so, they will however mitigate various risks, including chance of getting caught. I assume the teenager wanted to both drive the speed run and not receive a fine for doing so. That the precaution of a pre-drive of the route didn't occur supports my contention that the teenager did not overestimate the risks, in fact underestimated that risk for he was caught. With regards to rebuttal that a police officer could move, I think it's reasonable to conclude that if during a pre-drive the police officer is observed in the location it's too risky a time for the speed run, whilst if the police officer isn't observed then one has done some work in minimizing the risk of being caught.

We might consider a pre-drive of the route as an expense which made the reward vs investment unfavourable, this would reveal that the perceived reward is not so high as to overcome some (amount) of minutes of the teenagers time. Something to consider, I'd appreciate your input on that line of reasoning.

David Dobbs presents an argument that teenagers' perception of reward enables their risky behaviour. I believe that argument is congruent with my original statement "In my experience teenagers [are] indeed overconfident about their own decision making ability." Perhaps I should have said "In my experience some teenagers are..." to be more appropriately pedantic.

Replies from: gwern, Multiheaded, thomblake
comment by gwern · 2012-01-06T21:11:51.645Z · LW(p) · GW(p)

You attacked my character as a means of dismissing my discussion, it's a low tactic and one I didn't expect from LW, it is indeed the typical internet trash talk I mentioned....With regards to the intention to be immature, I have no knowledge of your intentions - only the observation that attacking my character without addressing the substance of my argument is an immature act.

If you still think that...

I maintain that the police officer is a better judge of the event than either the driver or you or I, hence it is more likely that it was in fact reckless driving than it wasn't reckless driving.

You wish to defer to the cop's expertise on whether it breaks the law? Excellent! I wish to defer to teens' expertise on what they enjoy. I'm glad we could come to agreement that teens overestimate risk but enjoy risky behavior much more than older people.

It is quite clear that in perceiving their rewards with such a high (subjective/personal) value, many of them have indeed made an error for one third teenager deaths are in car accidents. One should consider if death, both the risk of it and the actual occurrence of it is truly a fair price to pay for driving fast (or under the influence of alcohol).

'fairness' does not enter into it. As a transhumanist, I do not think death is a fair price for much of anything.

That aside, you repeat your 1/3 number as if it means anything in the absence of other information, as explained already. It does not.

We might consider a pre-drive of the route as an expense which made the reward vs investment unfavourable, this would reveal that the perceived reward is not so high as to overcome some (amount) of minutes of the teenagers time. Something to consider, I'd appreciate your input on that line of reasoning.

This is so far your only point worth a damn. I suggest you continue this line of reasoning, sans the fucking anecdotes.

Replies from: Peacewise, wedrifid
comment by Peacewise · 2012-01-07T03:06:46.708Z · LW(p) · GW(p)

Gwern wrote "You wish to defer to the cop's expertise on whether it breaks the law? Excellent! I wish to defer to teens' expertise on what they enjoy. I'm glad we could come to agreement that teens overestimate risk but enjoy risky behavior much more than older people."

I've been comfortable all along with accepting what teens find enjoyable, I do not agree that teens overestimate risk and frankly I'm surprised you could glean that from what I've written. Let me be clear, the article reveals that teenagers underestimate risk.

Dobbs wrote - "It was the brain scans she took while people took the test. Compared with adults, teens tended to make less use of brain regions that monitor performance, spot errors, plan, and stay focused—areas the adults seemed to bring online automatically. This let the adults use a variety of brain resources and better resist temptation, while the teens used those areas less often and more readily gave in to the impulse to look at the flickering light—just as they're more likely to look away from the road to read a text message."

Estimating risk is about planning, monitoring performance, spotting errors and staying focused, whilst the final comment quoted above provides a suitable anecdote highlighting another situation where teens are more likely to underestimate risk. Further teens more readily give in to impulse - that reveals that teens readily don't estimate risk (at all), do you see that? Estimating risk is in opposition to giving into impulse, an impulse is a sudden urge - estimation isn't something that's done suddenly, estimation is calculated by examining the context.

Gwern continues with "'fairness' does not enter into it. As a transhumanist, I do not think death is a fair price for much of anything."

I used the term "fair" in the context of the gain outweighing the cost; it's a colloquialism you obviously understand, since you use it yourself, hence your comment "'fairness' does not enter into it" is false. However, you're quite right that death isn't a fair price for much of anything, providing support for the risk of death not being much of a fair price for anything either... again you just keep destroying your own argument.

So on one hand we've got teens who are shown in your quoted article to anecdotally underestimate risk and also shown in research to utilise less brain processes that estimate risk and on the other hand its been shown in other research presented in the same article that teens place a higher value on rewards - all of that leads to the inescapable conclusion that some teens are overconfident in their decision making ability.

It's become apparent to me that this discussion is an example of the disconfirmation bias. Perhaps you'd care to follow the procedure for minimizing/removing disconfirmation bias before you make another post; I have already done so several times.

Replies from: gwern, Grognor
comment by gwern · 2012-01-09T21:16:17.200Z · LW(p) · GW(p)

Estimating risk is about planning, monitoring performance, spotting errors and staying focused, whilst the final comment quoted above provides a suitable anecdote highlighting another situation where teens are more likely to underestimate risk. Further teens more readily give in to impulse - that reveals that teens readily don't estimate risk (at all), do you see that? Estimating risk is in opposition to giving into impulse, an impulse is a sudden urge - estimation isn't something that's done suddenly, estimation is calculated by examining the context.

This is just missing the point. The mind is what the brain does. If a teenager chooses the higher reward and glances away, then by definition the areas involved in inhibition etc aren't going to be as busy! If they were equally busy in both the glancers and non-glancers, no one would be discussing them in the first place!

However, you're quite right that death isn't a fair price for much of anything, providing support for the risk of death not being much of a fair price for anything either... again you just keep destroying your own argument.

Unfortunately, we pay with death for both action and inaction. Destroying my own argument indeed. If you seriously mean that, then you must mean that no risk should ever be taken, which is not a position many will sympathize with.

So on one hand we've got teens who are shown in your quoted article to anecdotally underestimate risk and also shown in research to utilise less brain processes that estimate risk and on the other hand its been shown in other research presented in the same article that teens place a higher value on rewards - all of that leads to the inescapable conclusion that some teens are overconfident in their decision making ability.

Nothing inescapable about it. What we have is a worthless anecdote you insist supports your position, extremely strong evidence against underestimation of risk, brain-imaging results you do not understand, none of which forces the conclusion of overconfidence as opposed to teens intrinsically having higher rewards, just as they intrinsically oversleep and all the other changes that go with puberty and being young adults.

It's become apparent to me that this discussion is an example of the disconfirmation bias.

Or, as it is more commonly known, the confirmation bias.

Replies from: Peacewise
comment by Peacewise · 2012-01-10T08:53:28.218Z · LW(p) · GW(p)

Thanks gwern for returning to the discussion, cheers.

This is just missing the point. The mind is what the brain does. If a teenager chooses the higher reward and glances away, then by definition the areas involved in inhibition etc aren't going to be as busy! If they were equally busy in both the glancers and non-glancers, no one would be discussing them in the first place!

Perhaps I am missing the point. I reason that one point is that if the areas involved in inhibition aren't busy, then the person isn't estimating risk, in context, they are instead being overconfident in their decision making, they've not judged the situation, instead acting in impulse.

Unfortunately, we pay with death for both action and inaction. Destroying my own argument indeed. If you seriously mean that, then you must mean that no risk should ever be taken, which is not a position many will sympathize with.

Well no actually, I do seriously believe that you've destroyed your own argument and I don't mean that no risk should be ever taken, instead I mean that when the risk is death of oneself or others then the risk is so high as to outweigh most, if not all rewards.

Nothing inescapable about it. What we have is a worthless anecdote you insist on supporting your position, extremely strong evidence against underestimation of risk, brain-imaging results you do not understand, none of which forces the conclusion of overconfidence as opposed to teens intrinsically having higher rewards just as they intrinsically oversleep and all the over changes that go with puberty and being young adults.

The evidence isn't extremely strong against underestimation of risk, Dobbs wrote about growing from teenager to adult... "When this development proceeds normally, we get better at balancing impulse, desire, goals, self-interest, rules, ethics, and even altruism, generating behavior that is more complex and, sometimes at least, more sensible. But at times, and especially at first, the brain does this work clumsily. It's hard to get all those new cogs to mesh."

If one gets better at balancing those things during development, i.e. growing up, that reveals one has a lack in balancing those things, which are to do with judgement - and poor judgement is one form of overconfidence. http://en.wikipedia.org/wiki/Overconfidence_effect

Dobbs goes on to write, "This let the adults use a variety of brain resources and better resist temptation, while the teens used those areas less often and more readily gave in to the impulse to look at the flickering light..." this reveals that teens give in to impulse, which is about lacking judgement which goes to them being overconfident.

Dobbs continues "If offered an extra reward, however, teens showed they could push those executive regions to work harder, improving their scores."

This draws out a reason why they underestimate risk - because if the reward isn't "extra" or perceived as higher to the teen, then they likely won't push those executive regions to work harder, as is revealed in the article's previous paragraph, which I haven't quoted.

Dobbs continues "Add stakes that the teen cares about, however, and the situation changes. In this case Steinberg added friends: When he brought a teen's friends into the room to watch, the teen would take twice as many risks, trying to gun it through lights he'd stopped for before. The adults, meanwhile, drove no differently with a friend watching."

This can actually be interpreted either way. Steinberg chooses a higher reward, whilst one can just as reasonably choose an underestimation of risk – a rationale for which is alluded to in the article: social feelings/thoughts are more sensitive for the teenagers, hence the brain is more focussed on social cognition than risk estimation (as the risk estimation processes require more effort), hence fewer brain cycles on risk estimation – hence risk underestimation.

If I may use a real-world scenario based upon the research and one that does happen, i.e. not fictional: when a teenager's friends are brought into their car, the teen would take twice (or some other >1 multiplier) as many risks, trying to gun it through lights he'd stopped at before (rephrasing Dobbs, reasonably I believe). It's known that some teenage passengers and drivers encourage or engage in risky driving for the thrill, which includes the social aspect of a shared thrill - in this situation the risk has also multiplied, for now the death of passengers should be taken into account, whilst the reward has also increased. Might be zero sum, don't know, more research needed, but it’s clear that both reward and risk have increased due to the presence of others.

Now I hope that all you guys can see, this isn't a case of me being some tired old man who doesn't understand the joys of risk taking and thrill seeking. One should not forget that whilst I may be a somewhat respectable sober adult, I have already been through the teenage years under discussion and hence am able to recall those feelings that I have been presented as incapable of relating to. On the other hand, a thrill-seeking teenager is likely to be less aware of what it is to be a respectable sober adult, having never been one. In short, I've been a thrill-seeking teenager, a thrill-seeking adult and also a respectable sober adult. I do see both sides of this discussion.

Replies from: thomblake, gwern
comment by thomblake · 2012-01-10T15:22:40.146Z · LW(p) · GW(p)

On the other hand a thrill seeking teenager is likely to be less aware of what it is to be a respectable sober adult, having never been one. In short I've been both a thrill seeking teenager, a thrill seeking adult and also a respectable sober adult. I do see both sides of this discussion.

That's exactly what I'd expect a respectable sober adult to say.

Replies from: Peacewise
comment by Peacewise · 2012-01-10T16:14:47.713Z · LW(p) · GW(p)

That's exactly what I'd expect a respectable sober adult to say.

Then you have the fortunate ability to accurately predict accurate statements.

comment by gwern · 2012-01-10T19:46:30.665Z · LW(p) · GW(p)

instead I mean that when the risk is death of oneself or others then the risk is so high as to outweigh most, if not all rewards.

And how does one judge when the reward is outweighed? Hm, that wouldn't be subjective would it...?

If one gets better at balancing those things during development, ie. growing up, that reveals one has a lack in balancing those things, which are to do with judgement - and poor judgement is one form of overconfidence.

Never reason from a price change. Are teenagers overconfident about how much daytime sleep they need, and their circadian rhythm shifts due to them getting "better at balancing impulse, desire, goals, self-interest, rules, ethics, and even altruism, generating behavior that is more complex and, sometimes at least, more sensible"?

Dobbs continues "If offered an extra reward, however, teens showed they could push those executive regions to work harder, improving their scores."

This draws out a reason why they underestimate risk - because if the reward isn't "extra" or perceived as higher to the teen, then they likely won't push those executive regions to work harder, as is revealed in the articles previous paragraph, which I haven't quoted.

This practically demonstrates the opposite: that the reward is the important part! What rewards do experimenters usually offer? Pretty lousy ones. Why is it surprising that teens might not work as hard as conscientious saps - I mean, mature adults. (I am reminded of how money rewards can improve IQ test performance by half a standard deviation or so.) This is like the usual criticism of the PISA test scores: of course Americans will underperform, since not a single one of the test-takers cares about what score they get. Incentives matter, which is what I've been saying all along.

This can actually be interpreted either way, Steinberg chooses a higher reward, whilst one can just as reasonably choose an underestimation of risk – a rationale for which is alluded to in article is that social feelings/thoughts are more sensitive for the teenagers, hence the brain is more focussed on social cognition than risk estimation (as the risk estimation processes require more effort), hence less brain cycles on risk estimation – hence risk underestimation.

Wow. So your explanation for a clear-cut reward link is... they get distracted and can't estimate risk as accurately.

Replies from: Peacewise
comment by Peacewise · 2012-01-11T02:10:37.868Z · LW(p) · GW(p)

gwern, looks like you haven't been understanding a particular point.

The article reveals that reward is why the teenagers underestimate risk. The article reveals that teens' perception of reward motivates their impulsiveness.

Incentives matter, which is what I've been saying all along.

Indeed, that's a point I agree with and have right from my very first rebut in this discussion. The incentives provide motivation for underestimating risk.

Wow. So your explanation for a clear-cut reward link is... they get distracted and can't estimate risk as accurately.

That's one way of summarising what the article is proposing.

Replies from: Vaniver
comment by Vaniver · 2012-01-11T02:18:24.867Z · LW(p) · GW(p)

The article reveals that reward is why the teenagers underestimate risk. The article reveals that teens' perception of reward motivates their impulsiveness.

No. The value of a decision is gain minus cost; if the cost remains the same but the gain increases, then that can swing the value of a decision from negative to positive. Thus, they can be more impulsive while maintaining the same beliefs about risk.
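As a minimal numeric sketch of this point (all numbers invented): hold the risk cost fixed and raise only the perceived gain, and the sign of the decision flips with no change in beliefs about risk.

    # Decision value = gain - cost. The risk estimate (cost) is identical
    # in both cases; only the perceived reward differs.
    risk_cost = 5.0                       # same belief about risk throughout
    reward_alone = 3.0                    # driving fast with no one watching
    reward_with_peers = 8.0               # same act, plus the social reward

    print(reward_alone - risk_cost)       # -2.0: not worth it, don't act
    print(reward_with_peers - risk_cost)  # +3.0: worth it, risk belief unchanged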

Replies from: Peacewise
comment by Peacewise · 2012-01-11T02:44:15.722Z · LW(p) · GW(p)

Thus, they can be more impulsive while maintaining the same beliefs about risk.

I'll unpack that... Thus, they can be overconfident while maintaining the same beliefs about risk. Being impulsive is being overconfident, impulsive is a lack of estimating risk, which is underestimating risk.

Replies from: Vaniver
comment by Vaniver · 2012-01-11T03:06:31.014Z · LW(p) · GW(p)

Being impulsive is being overconfident, impulsive is a lack of estimating risk

I think we're looking at different dictionaries, so I'll abandon the word impulsive and try with a more object-level phrase. They can drive less carefully while maintaining the same beliefs about risk.

Replies from: Peacewise
comment by Peacewise · 2012-01-11T15:20:03.773Z · LW(p) · GW(p)

I think we're looking at different dictionaries, so I'll abandon the word impulsive and try with a more object-level phrase.

Hilarious, the point you have abandoned has +2, whilst my point that forced the abandoning still has -1. anyways...

They can drive less carefully while maintaining the same beliefs about risk.

and if those same beliefs are already an underestimation of risk? strike 1, just clipped the outside of the plate.

Let's unpack that last quote in the context of driving... a-yawn-gain. They can drive less carefully. Less carefully is about less care - what is "care"? That's about:

Care = Feel concern or interest; attach importance to something: "they don't care about human life". (dictionary.com)

So they feel less concern, they attach less importance to driving. What's the key word there, hmmm? "Less" well that's a term, in context that goes with "under"-estimate. Do you think? I do. Strike 2 - straight up the middle of the plate. Batter says, I didn't see that. Too bad says the ref.

Let's examine the opposite side, to include a process for minimising disconfirmation bias. They drive less carefully. Ok, I'm flipping my brain. The less carefully has nothing to do with underestimating risk, actually in this flip it's about overestimating risk... why do I say overestimate - well apparently that's part of the argument opposing my viewpoint, check above.

Well what's the dictionary say about what "Over estimate" means

o·ver·es·ti·mate/ˌōvərˈestəˌmāt/ Verb:
Estimate (something) to be better, larger, or more important than it really is. (dictionary.com)

hang on, hang on - overestimate = estimate something to be more important than it really is. Does overestimate sound at all like "less care"? No it doesn't, contradiction found, conclusion is Driving less carefully is about underestimating risk. Strike 3. Yer outta here!

Now, here's a thing. When the teenagers judge the reward highly, sufficiently highly to outweigh the risk of death - they have underestimated the risk. Perception of reward and risk are not in opposition, they go hand in hand.

Now let's look at the rest of the sentence.

They can drive less carefully while maintaining the same beliefs about risk.

The implication in context, is that it's reward driving the behaviour, supposedly being the entire reason for the behaviour, one significant context of the reward perception was peer involvement (see article). Let's try that one.

They can drive less carefully with more people in the car, while maintaining the same beliefs about risk, because they perceive the rewards are higher.

That fits the counter argument to my viewpoint... but hang on, now with more people in the car the risk of death is multiplied. So factually the risk has increased - yet the behaviour is supposedly all due to the reward, now if the behaviour is truly all to do with the reward, then yep the teen has discounted the risk - for the risk increased and it's not changing the behaviour.

So in that situation we've got another example where a teen has underestimated the risk due to a perception of a higher reward.

Am I being too anecdotal for you guys? Of course, discount outgroup behaviour whilst permitting the same ingroup. The article is itself filled with anecdotes... maybe we should just dismiss the entire article... stop press no no, don't do that there's no counter to my op then, lets just pick and choose the parts of it that support the counter, dismiss those that don't - both in the research and the anecdotes.

Please by all means, chuck up the -1, I'm considering them badges of honour now.

Replies from: None, thomblake
comment by [deleted] · 2012-01-11T15:58:54.276Z · LW(p) · GW(p)

I think we're all well past the point of expecting you to actually read and/or seriously consider anything. However, in case other people are still reading this thread:

Hilarious, the point you have abandoned has +2, whilst my point that forced the abandoning still has -1. anyways...

Ignore the fact that the parent abandoned a word, not a point. Karma never has been, and never ought to be, about deciding the correctness of arguments. Also, the usual litany of objections to people mindlessly invoking karma. Downvoted in accordance with my policy.

Replies from: Peacewise
comment by Peacewise · 2012-01-11T16:45:28.614Z · LW(p) · GW(p)

Thanks for the link, paper-machine; that's quite a reasonable policy.

If I weren't downvoted to such a degree that I have no opportunity to downvote, I might consider implementing it. I'll certainly use the concept to more thoroughly mitigate my annoyance at those unable to follow an argument.

I'll upvote you in accordance with my policy, which is that if a person says a single useful thing, regardless of the rest of their post, I'll give it a +1.

My reasoning for this policy is twofold. I reject the negativity that is encouraged by criticism and its aim of proving or showing that someone is wrong, rather than proving oneself right. I accept that when one focuses upon the positive, or worthwhile, components of someone's beliefs/actions/arguments, one creates a valuable synergy that encourages a pathway towards truth and understanding.

Sometimes I don't implement my own policy, but hey, it's all a work in progress.

On reflection, the site's name "lesswrong" really should have set off an alarm bell. I'm not particularly interested in being lesswrong. I am interested in being moreright.

Positive psychology and educational psychology have shown that positivity contributes more readily to learning than negativity.

Replies from: Vaniver
comment by Vaniver · 2012-01-11T17:15:19.962Z · LW(p) · GW(p)

On reflection, the site's name "lesswrong" really should have set off an alarm bell. I'm not particularly interested in being lesswrong. I am interested in being moreright.

The name is a deliberate choice, and it's rooted in a belief in the difficulty of being completely right. It seeks to minimize arrogance and maximize doubt. At the start of every post, I try to imagine the ways that I am currently being wrong, and reduce those.

For example, my first reaction to this comment was to pull out my dictionary and argue that my use of "impulsive" was right, because I knew what I meant when I wrote it and could find that meaning in a dictionary. Instead, I decided that it takes two to communicate, and that if you disagreed with the implications of the word, it was the wrong word to choose. So I abandoned the word in an attempt to become less wrong.

Positive psychology and educational psychology have shown that positivity contributes more readily to learning than negativity.

I agree with you that positivity is generally more powerful than negativity; that's why I try to be positive. Even so, negativity has its uses.

Replies from: Peacewise, fubarobfusco
comment by Peacewise · 2012-01-11T17:38:27.137Z · LW(p) · GW(p)

Vaniver. Mate. I accept that you believe

It seeks to minimize arrogance and maximize doubt.

but I dispute that it achieves those. I believe instead that it maximises arrogance and maximises doubt in the other's point of view, and in maximising doubt in the other person's view we minimise our doubt in our own view.

The belief that it's difficult to be completely right encourages people to look for that gap that is "wrong" and then drive a wedge into it and expand it until it's all that's being talked about.

If 95% is correct and 5% is wrong, criticising the 5% is a means to hurting the person - they have after all gotten 95% correct. It's not rational to discount people's feelings by focusing upon their error and ignoring their correctness. It's destructive, it breaks people. Sure, some few thrive on that kind of struggle - most don't; again, this is proven stuff. And I'm not going to post 10 freaking sources on that - all that's doing for me is wasting my time and providing more opportunity for others to confirm their bias by fighting against it. If someone wants to find that information it's out there.

When you (or anyone else) got a high distinction for a unit or assignment or exam, was that a moment to go, fuck - didn't remember that a pre ganglionic fibre doesn't look anything like a post gangleoic nerve (aka ds9), or was it a moment to leap for joy and go, you little ripper I got 95%!

I agree negativity has its uses; often it's about "piss off" and go away, leave me alone. Sometimes that's useful, but you'll note that those fall on the arrogant side of emotions - that of self. (This will get a wedge driven in it too; heck, I could drive one in, but it remains somewhat true.)

Vaniver, I'd consider it a positive discussion to talk about negativity. Would you mind explaining to me where "negativity has its uses"?

And to show that I consider the

It seeks to minimize arrogance and maximize doubt.

viewpoint.

Yeh, ok, I get that: when we apply the concept to ourselves, then we are minimizing our arrogance and maximizing our doubt. And that'll work. We'll second-guess ourselves, we'll edit our posts, and re-edit, and check our dictionaries and quote our sources, and these are all useful things. They keep us honest. But what about when we apply those concepts to others - as is our tendency due to the self-serving bias and the group-serving bias?

Replies from: thomblake, thomblake, Vaniver, nshepperd
comment by thomblake · 2012-01-11T18:33:47.046Z · LW(p) · GW(p)

The belief that it's difficult to be completely right encourages people to look for that gap that is "wrong" and then drive a wedge into it and expand it until it's all that's being talked about.

Sure, if you're running in debate mode and thinking in terms of 'sides' or 'us versus them' and trying to 'win', then that might be something to do. Solution: don't do that in the first place.

If 95% is correct and 5% is wrong

Don't worry, everything you believe is almost certainly wrong - don't expect to find yourself in the 95% correct state any time soon. We're running on corrupted hardware in the first place, and nowhere near the end of science. We can reduce hardly any of our high-level concepts to their physical working parts.

But what about when we apply those concepts to others - as is our tendency due to the self-serving bias and the group-serving bias?

First, fix those too.

Replies from: Peacewise
comment by Peacewise · 2012-01-11T18:49:46.474Z · LW(p) · GW(p)

Sure, if you're running in debate mode and thinking in terms of 'sides' or 'us versus them' and trying to 'win', then that might be something to do. Solution: don't do that in the first place.

Indeed, a valuable point. So what's up with the score-keeping system of LW, then? It encourages thinking in terms of sides and competition. -1, not my side, +1 my side. -1 lost, +1 won.

Don't worry, everything you believe is almost certainly wrong - don't expect to find yourself in the 95% correct state any time soon. We're running on corrupted hardware in the first place, and nowhere near the end of science. We can reduce hardly any of our high-level concepts to their physical working parts.

lol. Fair enough. I would place the 95% not on some unknown scale of what is absolutely true - which science doesn't yet know - but instead on the relative scale of what science currently knows. Does that make a difference to your point?

First, fix those too.

Yep, tough to become selfless, yet still place enough value upon oneself to not be a doormat. Rudyard Kipling's "If" shows a pathway.

If neither foes nor loving friends can hurt you. If all men count with you, but none too much. http://www.kipling.org.uk/poems_if.htm

Eastern philosophy also has approaches - that are a thousand years ahead of western science.

Replies from: thomblake, Vaniver
comment by thomblake · 2012-01-11T19:07:04.268Z · LW(p) · GW(p)

Does that make a difference to your point?

Yes. The difference in perspective probably explains why Eliezer thought Less Wrong was a good name, whereas you do not. Do not compare yourself to others; "The best physicist in ancient Greece could not calculate the path of a falling apple."

Indeed, a valuable point. So what's up with the score-keeping system of LW, then? It encourages thinking in terms of sides and competition. -1, not my side, +1 my side. -1 lost, +1 won.

It's a hurdle to get past thinking of it in that way for some people, to be sure. It seems a worthwhile cost though, for an easy way to efficiently express approval/disapproval of a comment, combined with automatic hiding of really bad comments from casual readers.

While some people use them that way, voting should not generally be used to mean "I agree" or "I disagree". The preferred interpretation is "I would like to see [more/fewer] comments like this one" (which may yet include agreement/disagreement, but they should be minor factors as compared to quality).

comment by Vaniver · 2012-01-11T19:07:11.043Z · LW(p) · GW(p)

So what's up with the score-keeping system of LW, then? It encourages thinking in terms of sides and competition. -1, not my side, +1 my side. -1 lost, +1 won.

Karma allows users to easily aggregate the community opinion of their comments, and allows busy users to prioritize which comments to read. I try to make more posts like my highly upvoted posts, and fewer posts like my highly downvoted posts. It is common to see discussions where both users are upvoted, or discussions where both users are downvoted. When there's a large karma split between users, that's a message from the community that the users are using different modes of discussion, and one is strongly preferred to the other.

Both positive and negative options are necessary so that posts which are loved by half of the users and hated by the other half of the users have a neutral score, rather than a high score. Similarly, posts which are disliked by many users should be different from posts that everyone is indifferent to.

that are a thousand years ahead of western science.

What was the motivation behind this addition? Was it positive?

Replies from: Peacewise
comment by Peacewise · 2012-01-14T03:56:02.090Z · LW(p) · GW(p)

that are a thousand years ahead of western science.

What was the motivation behind this addition? Was it positive?

The motivation was to plant a seed... motivated by the +2 on my comment.

In my experience, debiasing others who have strongly held opinions is far more effort than it's worth; a better road seems to be to facilitate them debiasing themselves. Plant the seed and move on, coming back to assess and perhaps water it later on. I don't try to cut down their tree... as it were. http://lesswrong.com/lw/7ep/practical_debiasing/5ah1?context=1#5ah1

Replies from: Vaniver
comment by Vaniver · 2012-01-14T05:55:57.910Z · LW(p) · GW(p)

But why that seed in this conversation?

It is not uncommon to see scientists who have studied Eastern philosophy. Thus, how could Eastern philosophy be a thousand years ahead of science, when it is part of science?

Replies from: Peacewise
comment by Peacewise · 2012-01-14T19:50:43.492Z · LW(p) · GW(p)

But why that seed in this conversation?

To assist in debiasing the ageism that was being expressed in the conversation.

comment by thomblake · 2012-01-11T18:44:28.350Z · LW(p) · GW(p)

When you (or anyone else) got a high distinction for a unit or assignment or exam...

I could not parse this paragraph. It might be just that it was written in the Australian idiom or something; maybe quotation marks would help.

Replies from: pedanterrific
comment by pedanterrific · 2012-01-11T20:11:12.383Z · LW(p) · GW(p)

When you (or anyone else) got a high distinction for a unit or assignment or exam, was that a moment to go, fuck - didn't remember that a pre ganglionic fibre doesn't look anything like a post gangleoic nerve (aka ds9), or was it a moment to leap for joy and go, you little ripper I got 95%!

When you (or anyone else) get a high grade on a paper or assignment or exam, is that a moment to think "Darn- I didn't remember (single obscure thing you got wrong)," or is it a moment to leap for joy and say "I got a 95! Ahaha!"?

Replies from: thomblake
comment by thomblake · 2012-01-11T20:28:35.271Z · LW(p) · GW(p)

Thanks!

Replies from: Peacewise
comment by Peacewise · 2012-01-12T01:15:17.657Z · LW(p) · GW(p)

thomblake, consider a high distinction as an A+ grade. Perhaps along the lines of Newtonian mechanics: it's mostly right.

comment by Vaniver · 2012-01-11T19:10:33.256Z · LW(p) · GW(p)

If 95% is correct and 5% is wrong, criticising the 5% is a means to hurting the person - they have after all gotten 95% correct.

There are many fields in which it is better to not try than to get 5% wrong. Would you go bungee jumping if it had a 5% failure rate?

Vaniver, I'd consider it a positive discussion to talk about negativity. Would you mind explaining to me where "negativity has its uses"?

Mostly in discouraging behavior. As well, an important rationality skill is updating on valuable information from sources you dislike; dealing with negativity in safer circumstances may help people learn to better deal with negativity in less safe circumstances.

Replies from: Peacewise
comment by Peacewise · 2012-01-12T02:06:55.964Z · LW(p) · GW(p)

Thanks for the post on negativity, Vaniver. I wouldn't go bungee jumping if it had a 5% failure rate.

Mostly in discouraging behavior...

That viewpoint can be considered as based upon Skinner's model of Behaviourism; it's been shown to be less effective for learning than being positive.

Makes sense - we tend to remember what we are emotionally engaged in and what is reinforced. When the negativity is associated with the 5%, what is reinforced is that a person is "wrong"; that's associated with feelings of low self-efficacy and tends to discourage (most) people from the topic. When that happens they regress - not progress; they tend to get even more wrong next time, as they've not stayed engaged in the topic.

...As well, an important rationality skill is updating on valuable information from sources you dislike; dealing with negativity in safer circumstances may help people learn to better deal with negativity in less safe circumstances.

I agree that an important skill is to update one's information; however, the discouragement that is provoked by negativity isn't efficient in evoking updating. Confident people update their information; people who aren't attacked have no need to defend, and so they remain open. Openness is the key attitude for updating information. Negativity destroys and/or minimizes confidence, which contributes to closing a mind.

What negativity does, in the context of learning, is to encourage secrecy, resentment, avoidance and close-mindedness. Again, this stuff is all known as a consequence of punishment, which is what negativity - as discouraging behaviour - is associated with.

Apparently a more effective way forward is to model the behaviour that one wants to encourage and ignore the behaviour one wants to discourage - extinction.

Replies from: TimS
comment by TimS · 2012-01-12T04:28:16.852Z · LW(p) · GW(p)

That viewpoint can be considered as based upon Skinner's model of Behaviourism; it's been shown to be less effective for learning than being positive.

I agree that saying "Good job putting down that toy" to my 22-month-old is more effective at reducing throwing of his toys than saying "Don't throw toys." And extinction works great on tantrums.

But you seem to be overgeneralizing the point a bit. When dealing with competent adults, saying "X is wrong" is an effective way of improving the listener's beliefs. If the speaker doesn't justify the assertion, that will and should affect whether the listener changes beliefs.

Of course, this is probably bad management style. We might explain that fact about people-management by invoking psychological bias, power imbalance, or something else. But here, we're just having a discussion. No one is asserting a right to authority over anyone else.


Without necessarily asserting its truth, this just-so story/parable might help:

For various social reasons, popular kids and nerds have developed very different politeness rules. Popular kids are used to respect, so they accept everything that they hear. As a consequence, they think relatively carefully before saying something, because their experience is that what is said will be taken seriously. By contrast, nerds seldom receive social respect from their peers. Therefore, they seldom take what is said to them to heart. As a consequence, nerds don't tend to think before they speak, because their experience is that the listener will filter out a fair amount of what is said. In brief, the popular filter at the mouth, the nerds filter at the ear.

This all works fine (more or less) when communicating within type. But you can imagine the problems when a nerd says something mean to a popular, expecting that it will be filtered out. Or a popular says something only vaguely nice, but the nerd filters out negativity that isn't there and hears sincere and deep interest.

Replies from: Peacewise
comment by Peacewise · 2012-01-14T03:42:31.187Z · LW(p) · GW(p)

TimS, I'm glad we agree on several points: extinction and positive reinforcement of children. I wonder why these methods are espoused for children, yet tend to be used less for "competent adults". Thanks for planting the seed that I might be overgeneralizing the point a bit; I'll keep an eye on that.

I am reminded that saying "X is wrong" to an adult with a belief is ineffective in many circumstances, most notably the circumstance where the belief is a preconception, based in emotion or, more specifically, an irrational belief. Is this not one consequence of bias? That a person, in some cases/topics, won't update their beliefs, and will indeed strengthen their belief in the counterargument against the updating. Presumably you've read http://lesswrong.com/lw/he/knowing_about_biases_can_hurt_people/, which alludes to how knowledge of bias can be used dismissively, i.e. an irrational use of a rationale.

"Why logical argument has never been successful at changing prejudices, beliefs, emotions or perceptions. Why these things can be changed only through perception." De Bono, "I am right, you are wrong". De Bono discusses this extensively.

If the belief is rational - and perhaps that's one component of what you consider a "competent adult" - the adult could be more open to updating the fact/knowledge. Yet even this situation has a wealth of counterexamples, such that there is a term for it: belief perseverance.

In my experience unsolicited advice is rarely accepted regardless of its utility and veracity. Perhaps I communicate with many closed minds, or perhaps I am merely experiencing the availability heuristic in context of our discussion.

comment by nshepperd · 2012-01-11T21:59:31.121Z · LW(p) · GW(p)

Checking dictionaries doesn't really help eliminate bias. Just saying.

comment by fubarobfusco · 2012-01-11T20:28:20.706Z · LW(p) · GW(p)

For example, my first reaction to this comment was to pull out my dictionary and argue that my use of "impulsive" was right, because I knew what I meant when I wrote it and could find that meaning in a dictionary. Instead, I decided that it takes two to communicate, and that if you disagreed with the implications of the word, it was the wrong word to choose. So I abandoned the word in an attempt to become less wrong.

Kahneman wrote, in Thinking, Fast and Slow, that he wrote a book for gossips and critics — rather than a book for movers and shakers — because people are better at identifying other people's biases than their own. I took this as meaning that his intention was to make his readers better equipped to criticize others' biases correctly; and thus, to make people who wish to avoid being criticized need to debias themselves to accomplish this.

Presumably, part of the reason that a commenter would avoid making a dictionary argument on LW is if that commenter knows that LWers are unlikely to tolerate dictionary arguments. Teaching people about biases may lead them to be less tolerant of biases in others; and if we seek to avoid doing things that are odious to our fellows, we will be forced to check our own biases before someone else checks them for us.

Knowing about biases can hurt you chiefly if you're the only one who's sophisticated about biases and can argue fluently about them. But we should expect that in an environment with a raised sanity waterline, where everyone knows about biases and is prepared to point them out, people will perpetrate less egregious bias than in an environment where they can get away with it socially.

(OTOH, I don't take this to excuse people saying "Nah nah nah, I caught you in a conjunction fallacy, you're a poopy stupid head." We should be intolerant of biased arguments, not of people who make them — so long as they're learning.)

Replies from: thomblake
comment by thomblake · 2012-01-11T20:35:59.724Z · LW(p) · GW(p)

Good point. I normally don't like accusing others of bias, and I will continue to try to refrain from doing so when I'm involved in something that looks like a debate, but I agree that it is useful information that should not be discouraged.

comment by thomblake · 2012-01-11T18:20:05.223Z · LW(p) · GW(p)

Parts of the parent comment that are particularly wrong:

Hilarious, the point you have abandoned has +2, whilst my point that forced the abandoning still has -1. anyways...

paper-machine fairly well handled that one in terms of "Rule 1 of karma is you do not talk about karma". Also, it was not a point that was abandoned, but a word. It is a common technique here to taboo a word whose definition is under dispute, since arguing about definitions is a waste of time.

Since you do not seem to understand, what happened there is that your 'unpacking' did not convey what Vaniver's statement actually was intended to convey, so Vaniver replaced the word 'impulsive' with a more object-level description less amenable to misunderstanding.

a-yawn-gain

What's the key word there, hmmm?

Strike 2 - straight up the middle of the plate. Batter says, I didn't see that. Too bad says the ref.

hang on, hang on

Strike 3. Yer outta here!

This is not a good way to communicate. If you really don't see what in this language would make someone take your arguments less seriously, someone could explain.

o·ver·es·ti·mate/ˌōvərˈestəˌmāt/ Verb: Estimate (something) to be better, larger, or more important than it really is. (dictionary.com)

hang on, hang on - overestimate = estimate something to be more important than it really is.

Perhaps you are not familiar with the study of risk, but the phrase "overestimate risk" means "estimate risk to be larger than it really is", not "more important". Either you are too ill-informed about risk analysis to be involved in this conversation, or you are trolling.

Also, appeals to the dictionary are just about the worst thing you can do in a substantive argument. If there is a misunderstanding, then definitions (whether from a dictionary or not) are useful for resolving the misunderstanding. They are really not useful to prove a point about what's actually occurring.

Am I being too anecdotal for you guys? Of course, discount outgroup behaviour whilst permitting the same ingroup. The article is itself filled with anecdotes... maybe we should just dismiss the entire article... stop press no no, don't do that there's no counter to my op then, lets just pick and choose the parts of it that support the counter, dismiss those that don't - both in the research and the anecdotes.

This is a clear violation of the principle of charity.

And as a general rule, fixing your own bias is good, but accusing others of bias is bad. We must be particularly careful to remember that knowing about biases can hurt people. EDIT: Updating on this comment: It is useful to point out examples of bias in others; but do so in a way that does not score points in a debate, to be sure you're not fooling yourself.

Please by all means, chuck up the -1, I'm considering them badges of honour now.

You should not. Votes are an indication of whether the readers of this site would like to see more comments like yours. If you're getting feedback that you're making comments we wouldn't like on our site, and you consider that a 'badge of honor', then you're a troll and should actually be banned entirely.

comment by Grognor · 2012-01-09T21:55:26.914Z · LW(p) · GW(p)

Since gwern is, well beyond what I thought was typical of him, refusing to call a horse a horse, I'm going to say it: man, you're so lame.

First understand that I noticed a disagreement going on between someone I've never seen before, a Mr. Peacewise, and a Mr. Gwern, whom I despise ever so lightly. He's a jerk on IRC, you see. It would have made me feel better for him to be wrong and you to be right, you see. I wanted that, in my gut (though not by Tarski).

But man. Gwern posts an article with a perfectly reasonable conclusion attached, and you take a slice of anecdotal evidence to say just the opposite, which just happens to precisely match your preconceptions, and then in the ensuing discussion, instead of recognizing this, you accuse gwern first of being insulting and then of laboring under cognitive biases, meanwhile with no evidence, literally none, that you are right and he is wrong.

LAME.

Replies from: Peacewise, Grognor
comment by Peacewise · 2012-01-10T07:51:25.671Z · LW(p) · GW(p)

Hey thanks Grognor, I'll take your ad hominems about both Gwern and myself with the lack of respect they deserve.

With regard to the article, neither the research in it nor the anecdotal evidence in it supports the counterclaim that teenagers are not overconfident.

The article does provide a rationale for why teenagers are overconfident; all I've done is unpack that information, first using the article's anecdote, then using the article's described research. Meh. One can lead a horse to water but one can't make it drink.

Replies from: shokwave, shokwave, wedrifid
comment by shokwave · 2012-01-10T08:38:37.649Z · LW(p) · GW(p)

One can lead a horse to water but one can't make it drink.

This is patently false.

Replies from: nshepperd, Peacewise, Peacewise
comment by nshepperd · 2012-01-10T13:38:50.890Z · LW(p) · GW(p)

I feel like "technically false" would be more accurate. If it's just you, the horse, and a puddle, it's surely going to be at least difficult to convince it to start slurping it up if it doesn't want to.

Replies from: wedrifid
comment by wedrifid · 2012-01-10T13:41:31.720Z · LW(p) · GW(p)

I feel like "technically false" would be more accurate. If it's just you, the horse, and a puddle, it's surely going to be at least difficult to convince it to start slurping it up if it doesn't want to.

"You can lead a horse to water but you can't make him drink if you aren't very imaginative and your resources are artificially limited".

Replies from: TheOtherDave, nshepperd
comment by TheOtherDave · 2012-01-10T14:56:04.122Z · LW(p) · GW(p)

If they're sufficiently limited, you can't even lead a horse to water.

comment by nshepperd · 2012-01-13T23:33:14.376Z · LW(p) · GW(p)

There is a sense of "drink" which encompasses raising a glass to your mouth and ingesting the liquid in it, or in the case of horses, lowering their head to the water, taking some into the mouth and swallowing it.

Sticking a tube down a horse's throat certainly achieves something, but not precisely this.

Replies from: dlthomas
comment by dlthomas · 2012-01-13T23:54:52.077Z · LW(p) · GW(p)

Presumably your goal is a hydrated horse, however.

Replies from: wedrifid, CuSithBell
comment by wedrifid · 2012-01-14T00:20:07.592Z · LW(p) · GW(p)

You could also be trying to drug the horse or provide him with nutritional supplementation in liquid form.

comment by CuSithBell · 2012-01-14T00:22:02.529Z · LW(p) · GW(p)

Another example of the danger of explicit goal maximizers.

comment by Peacewise · 2012-01-11T15:32:33.589Z · LW(p) · GW(p)

drink/driNGk/ Verb:
Take (a liquid) into the mouth and swallow.

I know you're having a bit of a laugh; however, force-feeding is not drinking.

As the wiki you link quite clearly shows, force-feeding is having the tube passed through the nose or mouth into the stomach, whilst drinking, in the context of a horse, doesn't include a tube, and does include the liquid going into the mouth and being swallowed.

comment by Peacewise · 2012-01-23T13:38:29.937Z · LW(p) · GW(p)

Do you recall this line in The Matrix?

MORPHEUS: I told you that I can only show you the door. You have to step through it.

That's what I hoped would be understood by the previous: one can lead a horse to water but one can't make it drink.

comment by shokwave · 2012-01-10T08:49:50.293Z · LW(p) · GW(p)

Just politely, I don't think this style is going to have good results for you. A more robust approach would be: when something does not deserve your respect, ignore it.

Replies from: Peacewise
comment by Peacewise · 2012-01-10T09:29:18.422Z · LW(p) · GW(p)

You are indeed correct shokwave, thanks.

comment by wedrifid · 2012-01-10T09:42:55.220Z · LW(p) · GW(p)

Hey thanks Grognor, I'll take your ad hominems about both Gwern and myself with the lack of respect they deserve.

Those are insults, not ad hominem fallacies. It is a social violation and not a logical one.

comment by Grognor · 2012-01-11T08:42:42.318Z · LW(p) · GW(p)

I'm just going to point out that I'm surprised that this comment of mine has not only been voted up, but voted up quite strongly, considering it rests in a nest of 'hidden' comments. I fully expected this comment to rest somewhere in the neighborhood of -2 karma.

I'm not sure whether to call this a pleasant surprise. I'm really just confused.

Replies from: wedrifid
comment by wedrifid · 2012-01-11T09:18:46.943Z · LW(p) · GW(p)

I'm just going to point out that I'm surprised that this comment of mine has not only been voted up, but voted up quite strongly, considering it rests in a nest of 'hidden' comments. I fully expected this comment to rest somewhere in the neighborhood of -2 karma.

I recall being surprised too, back when I saw your comment (then at +6). Usually I expect that sort of comment to be negative.

Mind you, I upvoted it myself because you nailed it. You were a little bit of a jerk in the comment, but I thought it was entirely appropriate to the circumstance - and balanced by at least agreeing with Gwern in the current battle.

comment by wedrifid · 2012-01-10T09:47:32.072Z · LW(p) · GW(p)

This is so far your only point worth a damn. I suggest you continue this line of reasoning, sans the fucking anecdotes.

Anecdotes deserve expletives these days? Those must be some dastardly anecdotes.

Replies from: None
comment by [deleted] · 2012-01-10T10:11:01.607Z · LW(p) · GW(p)

They derailed a whole thread into a giant clusterfuck of general nonsense. Is that sufficiently dastardly?

Replies from: wedrifid
comment by wedrifid · 2012-01-10T10:12:38.607Z · LW(p) · GW(p)

They derailed a whole thread into a giant clusterfuck of general nonsense. Is that sufficiently dastardly?

Don't know. The wall-of-text nonsense had already turned me off! I didn't get as far as reading anecdotes.

Replies from: None
comment by [deleted] · 2012-01-10T10:13:38.143Z · LW(p) · GW(p)

The wall-of-text nonsense had already turned me off!

You and I, we finally agree on something. :)

Replies from: wedrifid
comment by wedrifid · 2012-01-10T10:40:47.826Z · LW(p) · GW(p)

You and I, we finally agree on something. :)

I honestly didn't know we usually disagreed. Probably wouldn't make it on a top ten list of "Most Likely To Disagree With Wedrifid".

Replies from: thomblake
comment by thomblake · 2012-01-10T15:25:22.220Z · LW(p) · GW(p)

Probably wouldn't make it on a top ten list of "Most Likely To Disagree With Wedrifid".

And I say that paper-machine would make it on a top ten list of "Most Likely To Disagree With Wedrifid" - so there!

Replies from: TheOtherDave
comment by TheOtherDave · 2012-01-10T15:35:30.945Z · LW(p) · GW(p)

I suppose you could run a poll.

comment by Multiheaded · 2012-01-10T10:21:16.810Z · LW(p) · GW(p)

Gee, and I thought I've been acting like a cunt and damaging LW's standards (for a few days lately). Nice to see that better people can behave themselves worse.

comment by thomblake · 2012-01-10T15:17:16.813Z · LW(p) · GW(p)

Gwern wrote "

To quote, use a greater-than sign at the beginning of the line. For more formatting help, click "show help" below the comment box.

Replies from: Peacewise
comment by Peacewise · 2012-01-10T16:01:33.741Z · LW(p) · GW(p)

To quote, use a greater-than sign at the beginning of the line. For more formatting help, click "show help" below the comment box.

Thanks thomblake, I'll test that just now.

comment by Richard_Hollerith · 2007-10-04T15:11:59.000Z · LW(p) · GW(p)

I second Robin's question.

comment by Richard_Hollerith · 2007-10-04T16:03:58.000Z · LW(p) · GW(p)

I'd also like to learn whether the experimental finding holds for a wide variety of decisions. (Eliezer mentioned only picking a job offer.)

comment by Senthil · 2007-10-04T17:10:18.000Z · LW(p) · GW(p)

Aren't people consistently underconfident when it comes to their money? Everybody does something, invests in something, but isn't really sure about it even after they've done it. It's at its most extreme when it comes to the stock market.

Another instance is when people approach members of the opposite sex who they think are attractive. They consistently underestimate themselves.

Otherwise it depends on what they're used to; for example, people in technology are underconfident when it comes to negotiation and so forth.

comment by Rick_Smith · 2007-10-04T17:11:39.000Z · LW(p) · GW(p)

In the case of divorce, the reasons cannot always be taken as evidence for the marriage having been a mistake to begin with.

Things happen and people change.

comment by The_Decision_Strategist · 2007-10-04T17:14:42.000Z · LW(p) · GW(p)

This is an interesting idea and doesn't surprise me given thin-slicing behavior and the like. But the research itself seems a little thin. Where is the actual testing versus a control group? What about other decisions that don't involve jobs?

Also, I think probably we know what we will choose 99% of the time because we make the decision instantaneously. The real question is whether we do this even on decisions where we don't consciously know what we are going to choose. Are we as accurate in those decisions?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-05T16:06:50.000Z · LW(p) · GW(p)

It is nice to have a clear example of where people are consistently underconfident. Are there others?

People tend to take into account the magnitude of evidence (how extreme is the value?) while ignoring its reliability, and they also tend to be bad at combining multiple pieces of evidence. So another good way to generate underconfidence is to give people lots of small pieces of reliable evidence. (I believe it's in the same paper, "The Weighing of Evidence and the Determinants of Confidence".)
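To make the strength-versus-weight idea concrete, here is a toy illustration - the 60/40 coin and the sample sizes are invented for the example, not taken from the paper:

```python
# Toy model: a coin is either 60% or 40% heads. After h heads in n
# flips, the Bayesian posterior for "60%" depends on both the sample's
# strength (h/n) and its weight (n); intuitive confidence is said to
# track mostly the strength.

def posterior_60(h: int, n: int) -> float:
    """P(coin is the 60% coin | h heads in n flips), with a 50/50 prior."""
    ratio = 1.5 ** (2 * h - n)  # the likelihood ratio simplifies to 1.5^(2h-n)
    return ratio / (1 + ratio)

# Same strength (55% heads), very different weight:
print(f"{posterior_60(11, 20):.2f}")    # ~0.69
print(f"{posterior_60(110, 200):.4f}")  # ~0.9997
```

Someone whose confidence tracks the 55% figure alone feels about equally unsure in both cases - and so ends up underconfident in the second, where many small reliable pieces of evidence have added up.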

comment by bloix · 2007-10-07T02:28:21.000Z · LW(p) · GW(p)

I recall having an argument over dinner with a friendly acquaintance about an unimportant but interesting problem. I thought about it for a few days and decided he was right. I've hated him ever since.

Replies from: DanielLC
comment by DanielLC · 2010-09-05T18:55:05.879Z · LW(p) · GW(p)

And now we're curious. What was the problem?

comment by Marius_Gedminas · 2008-06-08T01:16:16.000Z · LW(p) · GW(p)

Are you they as available, in your heuristic estimate of your competence?

I'm unable to parse this sentence.

Replies from: bigjeff5
comment by bigjeff5 · 2011-02-15T06:04:21.305Z · LW(p) · GW(p)

Drop the "you" and see the linked "Availability Heuristic".

comment by suecochran · 2011-04-10T22:41:08.967Z · LW(p) · GW(p)

I used to have a button that said "If you haven't changed your mind lately, how do you know you've still got one?" I really liked that sentiment.

It's very easy to get comfortable with our opinions and beliefs, and uncomfortable about any challenge to them. As I've posted elsewhere, we often identify our "selves" with our "beliefs", as if they "were" us. Once we can separate our idea of "self" as different from "that which our self currently believes", it becomes easier to entertain other thoughts, and challenges from others, to our beliefs and opinions. If we are comfortable and secure in our own selves, then we can discuss dispassionately the ideas that contradict what we have previously held to be true. It is the only way that we can learn, that we can take in new and different ideas without that being a blow to our ego. Identifying our selves with our thoughts, opinions, beliefs, blocks us, threatens us, so that we get stuck with our old ways of doing things and framing things, and we don't grow and change with ease.

comment by Martok · 2012-04-08T22:42:08.573Z · LW(p) · GW(p)

A lot of people probably already know this - it's a familiar "deep wisdom" - but anyway: you can use this not-changing of your mind to help you with seemingly complicated decisions that you ponder over for days. Simply assign the possible answers and flip a coin (or roll a die, if you need more than two). It doesn't matter what the result is, but depending on whether it matches your already-made decision you will either immediately reject the coin's "answer" or not. That tells you what your first decision was, unclouded by any attempts to justify the other option(s).

Now, if you've trained your intuition (aka have the right set of Cached Thoughts), that answer will be the correct or better one. Or, as has happened to me more than once, you realize that both alternatives are actually wrong and your mind already came up with a better solution.
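A toy sketch of the trick, with placeholder options - the printed verdict is beside the point; your gut reaction to it is the data:

```python
import random

# Stand-in options for whatever decision you're stuck on.
options = ["accept the offer", "decline the offer"]
print("The coin says:", random.choice(options))
print("Notice your immediate reaction: relief, or the urge to reroll?")
```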

Replies from: DaFranker
comment by DaFranker · 2012-08-01T16:08:59.523Z · LW(p) · GW(p)

Without knowing the terms or technical explanation for it, this is what I have always been doing automatically for as long as I can remember making decisions consciously (generously applying a confidence margin and overconfidence moderation proportional to the applicable biases). However, upon reading the sequences here, I realize that several problems I have identified in my thought strategies actually stem from my reliance on training my intuition and subconscious for what I now know to be simply better cached thoughts.

It turns out that no matter how well you organize and train your Caches and other automatic thinking, belief-forming and decision-making processes, some structural human biases are virtually impossible to eliminate by strictly relying on this method. What's more, by having relied on this for so long, I find myself having even more difficulty training my mind to think better.

comment by PerennialChild · 2012-06-15T16:46:20.811Z · LW(p) · GW(p)

That's true. Matters are not helped by the value society places on commitment and consistency. When we do, in fact, change our minds, we are more often than not labeled as "wishy-washy," or some similarly derogatory term.

comment by ictoan · 2012-11-05T18:54:13.712Z · LW(p) · GW(p)

This article reminds me of the movie "Inception"... once an idea is planted, it is hard to get it out.

comment by WedgeOfCheese (DiamondSoul) · 2014-07-09T16:04:09.541Z · LW(p) · GW(p)

As Eliezer says, on short time scales (days, weeks, months) we change our minds less often than we expect to. However, it's worth noting that, on larger time scales (years, decades), the opposite seems to be true. Also, our emotional state changes more frequently than we expect it to, even on short time scales. I can't seem to recall my exact source on this second point at the moment (I think it was some video we watched in my high school psychology class), though, anecdotally, I've observed it to be true in my own life. Like, when I'm feeling good, I may think thoughts like "I'm a generally happy person", or "my current lifestyle is working very well, and I should not change it", which are falsifiable claims/predictions that are based on the highly questionable assumption that my current emotional state will persist into both the near and distant future. Similarly, I may think the negations of such thoughts when I'm feeling bad. As a result, I have to remind myself to be extra skeptical/critical of falsifiable claims/predictions that agree too strongly with my current emotional state.

comment by Mirza Herdic (mirza-herdic) · 2023-02-01T06:48:59.650Z · LW(p) · GW(p)

I would say that the study by Griffin and Tversky is incomplete. The way I see it, we have an inner "scale" of the validity of evidence and decide based on that. As was pointed out in one of the previous posts, we should bet on an event 100% of the time if the event is more likely than the alternatives. Something similar is happening here: if we are more than 50% sure that job A is better than job B, we should pick job A. Given that the participants were 66% sure, this would mean that there is a low a priori probability of them changing their minds. If we assume a normal distribution for the "scale" of evidence in our brains, we find that there is indeed a very small chance of the participants changing their minds - roughly a 2-sigma event.
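To make this concrete, here is a toy calculation - the drift parameter is an invented number for illustration, nothing from the study:

```python
from statistics import NormalDist

# A stated confidence of 66% places the felt evidence for job A at
# z = inv_cdf(0.66), about 0.41 sigma above the choice threshold.
z = NormalDist().inv_cdf(0.66)

# Assume (invented figure) that evidence arriving between prediction
# and final choice shifts z by N(0, 0.25); a flip to job B means
# drifting back below the threshold.
flip = NormalDist(mu=z, sigma=0.25).cdf(0.0)
print(f"{flip:.1%}")  # ~4.9%, close to the 4% flip rate in the study
```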


If my hypothesis is correct, in a new study in which the participants are a priori 50% sure about the job they want to choose, they should change their minds more than the 4% in this example - much more, actually. Given that we do have a mechanism in our minds that makes us stick to our decisions, especially in cases when we are around 50% sure - which stops us from changing our minds constantly and behaving erratically - I would hypothesize that the participants wouldn't change their minds 50% of the time, but probably somewhere in the region of 40-50%. I would also expect that when the participants are 80% or 90% sure a priori, they would still change their minds in maybe 1% or 2% of the cases, because we are usually more sure of our answers than we should be.


All in all, I think that it is perfectly rational that if you are 66% sure about something you make that decision in 90% to 99% of the cases. Being 80% sure about something should not mean that you should choose the alternative in 20% of the cases.

comment by Flow · 2024-02-20T11:13:33.032Z · LW(p) · GW(p)

The principle of the bottom line


I think "The Bottom Line" here is meant to link to the essay [LW · GW].

comment by Martin Randall (martin-randall) · 2024-11-12T04:45:19.279Z · LW(p) · GW(p)

BLUF: The cited paper doesn't support the claim that we change our minds less often than we think, and overall it and a paper it cites point the other way. A better claim is that we change our minds less often than we should.

The cited paper is freely downloadable: The weighing of evidence and the determinants of confidence. Here is the sentence immediately following the quote:

It is noteworthy that there are situations in which people exhibit overconfidence even in predicting their own behavior (Vallone, Griffin, Lin, & Ross, 1990). The key variable, therefore, is not the target of prediction (self versus other) but rather the relation between the strength and the weight of the available evidence.

The citation is to Vallone, R. P., Griffin, D. W., Lin, S., & Ross, L. (1990). Overconfident Prediction of Future Actions and Outcomes by Self and Others. Journal of Personality and Social Psychology, 58, 582-592.

Self-predictions are predictions

Occam's Razor [LW · GW] says that our mainline prior should be that self-predictions behave like other predictions. These are old papers and include a small number of small studies, so probably they don't shift beliefs all that much. However much you weigh them, I think they weigh in favor of Occam's Razor.

In Vallone 1990, 92 students were asked to predict their future actions later in the academic year, and those of their roommate. An example prediction: will you go to the beach? The greater time between prediction and result makes this a more challenging self-prediction. Students were 78.7% confident and 69.1% accurate for self-prediction, compared to 77.4% confident and 66.3% accurate for other-prediction. Perhaps evidence for "we change our minds more often than we think".

I think more striking is that both self and other predictions had a similar 10% overconfidence. They also had similar patterns of overconfidence - the overconfidence was clearest when it went against the base rate, and students underweighted the base rate when making both self-predictions and other-predictions.
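In code form, the arithmetic behind that "similar 10% overconfidence":

```python
# Overconfidence = stated confidence minus accuracy, from the
# Vallone 1990 numbers quoted above.
self_conf, self_acc = 0.787, 0.691
other_conf, other_acc = 0.774, 0.663
print(f"self:  {self_conf - self_acc:+.1%}")    # +9.6 points
print(f"other: {other_conf - other_acc:+.1%}")  # +11.1 points
```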

As well as Occam's Razor, self-predictions are inescapably also predicting other future events. Consider the job offer case study. Will one of the employers increase the compensation during negotiation? What will they find out when they research the job locations? What advice will they receive from their friends and family? Conversely, many other-predictions are entangled with self-predictions. It's hard to conceive how we could be underconfident in self-prediction, overconfident in other-prediction, and not notice when the two biases clash.

Short-term self-predictions are easier

In Griffin 1992, the first test of "self vs other" calibration is study 4. This is a set of cooperate/defect tasks where the 24 players predict their future actions and their partner's future actions. They were 84% confident and 81% accurate in self-prediction but 83% confident and 68% accurate in other-prediction. So they were well-calibrated for self-prediction, and over-confident for other-prediction. Perhaps evidence for "we change our minds as often as we think".

But self-prediction in this game is much, much easier than other-prediction. 81% accuracy is surprisingly low - I guess that players were choosing a non-deterministic strategy (e.g., defect 20% of the time) or were choosing to defect based in part on seeing their partner. But I have a much better idea of whether I am going to cooperate or defect in a game like that, because I know myself a little, and I know other people less.

The next study in Griffin 1992 is a deliberate test of the impacts of difficulty on calibration, where they find:

A comparison of Figs. 6 and 7 reveals that our simple chance model reproduces the pattern of results observed by Lichtenstein & Fischhoff (1977): slight underconfidence for very easy items, consistent overconfidence for difficult items, and dramatic overconfidence for “impossible” items.

Self-predictions are not self-recall

If someone says "we change our minds less often than we think", they could mean one or more of:

  • We change our minds less often than we predict that we will
  • We change our minds less often than we model that we do
  • We change our minds less often than we recall that we did

If an agent has a bad self-model, it will make bad self-predictions (unless its mistakes cancel out). If an agent has bad self-recall it will build a bad self-model (unless it builds its self-model iteratively). But if an agent makes bad self-predictions, we can't say anything about its self-model or self-recall, because all the bugs can be in its prediction engine.

Instead, Trapped Priors

This post precedes the excellent advice to Hold Off on Proposing Solutions [? · GW]. But the correct basis for that advice is not that "we change our minds less often than we think". Rather, what we need to solve is that we change our minds less often than we should.

In Trapped Priors as a basic problem of rationality [LW · GW], Scott Alexander explains one model for how we can become stuck with inaccurate beliefs and find it difficult to change our beliefs. In these examples, the person with the trapped prior also believes that they are unlikely to change their beliefs.

  • The person who has a phobia of dogs believes that they will continue to be scared of dogs.
  • The Republican who thinks Democrats can't be trusted believes that they will continue to distrust Democrats.
  • The opponent of capital punishment believes that they will continue to oppose capital punishment.

Reflections

I took this post on faith when I first read it, and found it useful. Then I realized that, just from the quote, the claimed study doesn't support the post: people considering two job offers are not "within half a second of hearing the question". It was that confusion that pushed me to download the paper. I was surprised to find the Vallone citation that led me to draw the opposite conclusion. I'm not quite sure what happened in October 2007 (and "on August 1st, 2003, at around 3 o’clock in the afternoon"). Still, the sequence continues to stand with one word changed from "think" to "should".