Your Strength as a Rationalist

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-08-11T00:21:20.000Z · LW · GW · Legacy · 123 comments

The following happened to me in an IRC chatroom, long enough ago that I was still hanging around in IRC chatrooms. Time has fuzzed the memory and my report may be imprecise.

So there I was, in an IRC chatroom, when someone reports that a friend of his needs medical advice. His friend says that he’s been having sudden chest pains, so he called an ambulance, and the ambulance showed up, but the paramedics told him it was nothing, and left, and now the chest pains are getting worse. What should his friend do?

I was confused by this story. I remembered reading about homeless people in New York who would call ambulances just to be taken someplace warm, and how the paramedics always had to take them to the emergency room, even on the 27th iteration. Because if they didn’t, the ambulance company could be sued for lots and lots of money. Likewise, emergency rooms are legally obligated to treat anyone, regardless of ability to pay.1 So I didn’t quite understand how the described events could have happened. Anyone reporting sudden chest pains should have been hauled off by an ambulance instantly.

And this is where I fell down as a rationalist. I remembered several occasions where my doctor would completely fail to panic at the report of symptoms that seemed, to me, very alarming. And the Medical Establishment was always right. Every single time. I had chest pains myself, at one point, and the doctor patiently explained to me that I was describing chest muscle pain, not a heart attack. So I said into the IRC channel, “Well, if the paramedics told your friend it was nothing, it must really be nothing—they’d have hauled him off if there was the tiniest chance of serious trouble.”

Thus I managed to explain the story within my existing model, though the fit still felt a little forced . . .

Later on, the fellow comes back into the IRC chatroom and says his friend made the whole thing up. Evidently this was not one of his more reliable friends.

I should have realized, perhaps, that an unknown acquaintance of an acquaintance in an IRC channel might be less reliable than a published journal article. Alas, belief is easier than disbelief; we believe instinctively, but disbelief requires a conscious effort.2

So instead, by dint of mighty straining, I forced my model of reality to explain an anomaly that never actually happened. And I knew how embarrassing this was. I knew that the usefulness of a model is not what it can explain, but what it can't. A hypothesis that forbids nothing permits everything, and thereby fails to constrain anticipation.

Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.
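
The "zero knowledge" claim can be made precise with Bayes' rule: a hypothesis that assigns the same likelihood to every outcome leaves your posterior exactly where your prior was, so observing the outcome teaches you nothing. A minimal sketch (the prior and likelihood numbers here are illustrative, not from the post):

```python
from math import log2

def posterior(prior, likelihood_h, likelihood_not_h):
    """Posterior P(H | evidence) via Bayes' rule."""
    joint_h = prior * likelihood_h
    joint_not_h = (1 - prior) * likelihood_not_h
    return joint_h / (joint_h + joint_not_h)

prior = 0.3

# "Explains everything": the outcome is equally likely either way,
# so the posterior equals the prior -- zero knowledge gained.
print(posterior(prior, 0.5, 0.5))    # 0.3, unchanged

# A constraining model: the outcome is 10x more likely if H is true,
# so observing it actually shifts belief.
print(posterior(prior, 0.5, 0.05))   # ~0.81
print(log2(0.5 / 0.05))              # ~3.32 bits of evidence
```

On these made-up numbers, the "forbids nothing" model carries a likelihood ratio of 1 (zero bits), which is exactly the sense in which being equally good at explaining any outcome means knowing nothing.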

We are all weak, from time to time; the sad part is that I could have been stronger. I had all the information I needed to arrive at the correct answer, I even noticed the problem, and then I ignored it. My feeling of confusion was a Clue, and I threw my Clue away.

I should have paid more attention to that sensation of still feels a little forced. It’s one of the most important feelings a truthseeker can have, a part of your strength as a rationalist. It is a design flaw in human cognition that this sensation manifests as a quiet strain in the back of your mind, instead of a wailing alarm siren and a glowing neon sign reading:

Either Your Model Is False Or This Story Is Wrong.

1 And the hospital absorbs the costs, which are enormous, so hospitals are closing their emergency rooms . . . It makes you wonder what’s the point of having economists if we’re just going to ignore them.

2 From McCluskey (2007), “Truth Bias”: “[P]eople are more likely to correctly judge that a truthful statement is true than that a lie is false. This appears to be a fairly robust result that is not just a function of truth being the correct guess where the evidence is weak—it shows up in controlled experiments where subjects have good reason not to assume truth[.]” http://www.overcomingbias.com/2007/08/truth-bias.html .

And from Gilbert et al. (1993), “You Can’t Not Believe Everything You Read”: “Can people comprehend assertions without believing them? [...] Three experiments support the hypothesis that comprehension includes an initial belief in the information comprehended.”

123 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by anon2 · 2007-08-11T01:50:21.000Z · LW(p) · GW(p)

It's strange that it sounds like a rationalist is saying that he should have listened to his instincts. A true rationalist should be able to examine all the evidence without having to rely on feelings to make a judgment, or would be able to truly understand the source of his feelings, in which case it's more than just a feeling. The unfortunate thing is that people are more likely to remember the cases when they didn't listen to their feelings which ended up being correct in the end, than all the times when they were wrong.

The "quiet strain in the back of your mind" is what drives some people to always expect the worst to happen, and every so often they are right which reinforces their confidence in their intuitions more than their confidence diminishes each time they are wrong.

In some cases, it might be possible for someone to have a rational response to a stimulus only to think that it is intuition because they don't quite understand or aren't able to fully rationalize the source of the feeling. From my own experiences, it seems that some people don't make a hard enough effort to search for the source... they either don't seem to think that there is a rational source, or don't care to take the effort.... as long as they are able to ascertain what their feelings suggest they do, they really don't seem to care whether or not the source is rational or irrational.

A true rationalist would be able to determine the source and rationality of the feeling. The interesting question is: if he fails to rationally explain the feeling, should he ignore it, chalking it up to his own weakness as a rationalist?

Since we are all human and cannot be perfectly rational, shouldn't a rationalist decide that a seemingly irrational feeling is just that, irrational? Is it not more rational to believe that a seemingly irrational feeling is the result of our own imperfection as humans?

Replies from: MrPineapple, None, wizzwizz4
comment by MrPineapple · 2011-01-24T00:59:00.108Z · LW(p) · GW(p)

A rationalist should acknowledge their irrationality; to do otherwise would be irrational.

comment by [deleted] · 2011-09-03T22:23:36.184Z · LW(p) · GW(p)

How do you face this situation as a rationalist?
comment by wizzwizz4 · 2020-04-26T17:04:29.816Z · LW(p) · GW(p)

Sorry, since when does "quiet strain in the back of your mind" automatically translate to "irrational"? This particular quiet voice is usually _right_; surely that makes it rational?

Replies from: zenfarer
comment by zenfarer · 2023-08-30T07:44:39.722Z · LW(p) · GW(p)

To my mind, this question relates to the accuracy of intuitions and the problems that arise when relying on them.

In the original post, my take is that the "quiet strain in the back of your mind" refers to the observation that people whose opinion you value in a chatroom are discarding your opinion, which is based on a single piece of "anec-data"; a rationalist in his right mind, taking a step back on a good day, would automatically discard such an anecdote as the sole model through which reality ought to be interpreted.

While this answers your question, my broader take is that untrained intuition is just a mashup feeling of what feels right or wrong in a situation, and feelings are not to be confused with reality. Unless, in the rare occurrence, those feelings have been thoroughly trained to be right, and by that I mean conditioning of the mind through repetition to the point that, for example, a veteran mathematician would "feel" or "intuit" that something is wrong with a mathematical proof at a glance, without going through the details.

Yet there ought to be human limits on relying on such trained intuition and feeling; by default, relying on them must be a last resort or a matter of physical survival (which is what intuition is best used for, to my mind), rather than something extrapolated into a proxy for rationality.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-08-11T01:53:38.000Z · LW(p) · GW(p)

Anon, see Why Truth?:

When people think of "emotion" and "rationality" as opposed, I suspect that they are really thinking of System 1 and System 2 - fast perceptual judgments versus slow deliberative judgments. Deliberative judgments aren't always true, and perceptual judgments aren't always false; so it is very important to distinguish that dichotomy from "rationality". Both systems can serve the goal of truth, or defeat it, according to how they are used.

comment by Chris_Hibbert · 2007-08-11T02:34:34.000Z · LW(p) · GW(p)

"I should have paid more attention to that sensation of still feels a little forced."

The force that you would have had to counter was the impetus to be polite. In order to boldly follow your models, you would have had to tell the person on the other end of the chat that you didn't believe his friend. You could have less boldly held your tongue, but that wouldn't have satisfied your drive to understand what was going on. Perhaps a compromise action would have been to point out the unlikelihood, (which you did: "they'd have hauled him off if there was the tiniest chance of serious trouble"), and ask for a report on the eventual outcome.

Given the constraints of politeness, I don't know how you can do better. If you were talking to people who knew you better, and understood your viewpoint on rationality, you might expect to be forgiven for giving your bald assessment of the unlikeliness of the report.

Replies from: bigjeff5, cidcilver
comment by bigjeff5 · 2011-01-28T06:21:55.614Z · LW(p) · GW(p)

Not necessarily.

You can assume the paramedics did not follow the proper procedure, and that his friend ought to go to the emergency room himself to verify that he is OK. People do make mistakes.

The paramedics are potentially unreliable as well, though given the litigious nature of our society I would fully expect the paramedics to be extremely reliable in taking people to the emergency room, which would still cast doubt on the friend.

Still, if you want to be polite, just say "if you are concerned, you should go to the emergency room anyway" and keep your doubts about the man's veracity to yourself. No doubt the truth would have come out at that point as well.

comment by cidcilver · 2015-12-24T21:10:22.794Z · LW(p) · GW(p)

I saw someone on FB reposting this post today.

Makes an interesting point about not doubting your own models in certain circumstances I guess, but the original post leaves out relevant issues of trust and pragmatism.

Sure people probably gullibly believe untrue stories more often than they should, but biases also often cause us to discount anecdotes that are actually representative of real, lived experiences (such as the subtle experiences of those who suffer from racism and sexism). - http://ntrsctn.com/science-tech/2015/12/tech-guys-allies/

Just because a bug is unusual or difficult to locally replicate/experience doesn't mean you should discount the bug reports.

Also (obviously) faith in even medical experts/institutions shouldn't be absolute.

Finally there's nothing wrong with offering someone good advice even if you think they may have lied to you/are trolling... there's still a chance they were not trolling, and arming them with good information might be good for them in the short term or long term.

Replies from: Jiro
comment by Jiro · 2015-12-25T01:08:13.924Z · LW(p) · GW(p)

That article is written as though "are you sure that was sexism" literally means "you had better prove it is sexism with 100% certainty, or I won't believe you".

That is not what it means. It's not a demand for 100% certainty, it's a demand for better evidence. You don't have to be treating the world like a computer in order to think that you should try to rule out innocent explanations before proclaiming someone guilty.

Also, while the author claims that the standard he quotes makes it impossible to prove sexism, his own standard has the opposite problem: according to it, it's impossible to prove anyone innocent of sexism. People don't favor uncertainty over assumption because they're computer geeks; people favor uncertainty over assumption because there are such things as false positives, and they have enough of a cost that avoiding them is worthwhile.

Replies from: tlhonmey
comment by tlhonmey · 2020-12-11T18:05:33.551Z · LW(p) · GW(p)

Reminds me of a family dinner where the topic of the credit union my grandparents had started came up.

According to my grandmother, the state auditor was a horribly sexist fellow.  He came and audited their books every single month, telling everyone who would listen that it was because he "didn't think a woman could be a successful credit union manager."

This, of course, got my new-agey aunts and cousins all up-in-arms about how horrible it was that that kind of sexism was allowed back in the 60s and 70s.  They really wanted to make sure everyone knew they didn't approve, so the conversation dragged on and on...

And about the time everyone was all thoroughly riled up and angry from the stories of the mean, vindictive things this auditor had done because the credit union was run by a woman, my grandfather decided to get in on the ruckus and told his story about the auditor...

Seems like the very first time the auditor had come through, he spent several hours going over the books and couldn't make it all balance correctly.  He was all-fired sure this brand new credit union was up to something shady.  Finally, my grandfather (who was the credit union accountant) leaned over his shoulder and pointed out the rookie math mistake the auditor had been making... repeatedly... until an hour past closing time and "could we please go home now?"

The auditor was horribly embarrassed, and stormed out in a huff.  And then proceeded to come back every single month for over twenty years trying to catch them in a mistake somewhere.

I don't know if my cousins learned anything from that story.  My grandfather's a quiet fellow.  They might not even have heard his side of it.  But I sure did.  See, in the 60s and 70s, the auditor coming out and saying, "I'm harassing you because you humiliated me and I want revenge" would have been totally unacceptable and likely would have gotten him dismissed.  But saying it was because he didn't trust a female manager?  That was a lie, but it was a socially acceptable reason for doing what he wanted to do for personal reasons anyway.

Makes me wonder just how much historic racism and sexism was simply people looking for a socially acceptable excuse to be jerks.  And since I don't think people's overall level of desire to be spiteful has changed much, I wonder what the excuses are today now that the "traditional" ones are no longer acceptable.

comment by michael_vassar3 · 2007-08-11T03:31:03.000Z · LW(p) · GW(p)

In its strongest form, not believing System 1 amounts to not believing perceptions, hence not believing in empiricism. This is possibly the oldest of philosophical mistakes, made by Plato, possibly Siddhartha, and probably others even earlier.

Replies from: MrPineapple
comment by MrPineapple · 2011-01-24T01:01:09.059Z · LW(p) · GW(p)

There is always the empirical observation of prior situations that really didn't match the corresponding System 1 judgment. To always believe that System 1 is infallible is perhaps contradictory of the system itself.

comment by Tony · 2007-08-11T06:16:35.000Z · LW(p) · GW(p)

Sounds like good old cognitive dissonance. Your mental model was not matching the information being presented.

That feeling of cognitive dissonance is a piece of information to be considered in arriving at your decision. If something doesn't feel right, usually either the model or the facts are wrong or incomplete.

T

comment by Psy-Kosh · 2007-09-30T20:33:43.000Z · LW(p) · GW(p)

"And this is where I fell down as a rationalist. I remembered several occasions where my doctor would completely fail to panic at the report of symptoms that seemed, to me, very alarming. And the Medical Establishment was always right. Every single time. I had chest pains myself, at one point, and the doctor patiently explained to me that I was describing chest muscle pain, not a heart attack. So I said into the IRC channel, "Well, if the paramedics told your friend it was nothing, it must really be nothing - they'd have hauled him off if there was the tiniest chance of serious trouble.""

My own "hold on a second" detector is pinging mildly at that particular bit. Specifically, isn't there a touch of an observer selection effect there? If the docs had been wrong and you ended up dying as a result, you wouldn't have been around to make that deduction, so you're (Well, anyone is) effectively biased to retroactively observe outcomes in which if the doctor did say you're not in a life threatening situation, you're genuinely not?

Or am I way off here?

Replies from: MrPineapple
comment by MrPineapple · 2011-01-24T01:02:10.267Z · LW(p) · GW(p)

i seem to recall a link on another page entitled "hindsight bias".

Replies from: Psy-Kosh
comment by Psy-Kosh · 2011-01-24T22:05:52.248Z · LW(p) · GW(p)

Huh? What about hindsight bias?

Replies from: Desrtopa
comment by Desrtopa · 2011-01-24T22:14:35.669Z · LW(p) · GW(p)

If you read his other posts, I think you'll find he wasn't offering any sort of constructive contribution. He was probably laboring under some confusion, if not outright trolling.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-09-30T20:59:25.000Z · LW(p) · GW(p)

A valid point, Psy-Kosh, but I've seen this happen to a friend too. She was walking along the streets one night when a strange blur appeared across her vision, with bright floating objects. Then she was struck by a massive headache. I had her write down what the blur looked like, and she put down strange half-circles missing their left sides.

That point was when I really started to get worried, because it looked like lateral neglect - something that I'd heard a lot about, in my studies of neurology, as a symptom of lateralized brain damage from strokes.

The funny thing was, nobody in the medical profession seemed to think this was a problem. The medical advice line from her health insurance said it was a "yellow light" for which she should see a doctor in the next day or two. Yellow light?! With a stroke, you have to get the right medication within the first three hours to prevent permanent brain damage! So we went to the emergency room - reluctantly, because California has enormously overloaded emergency rooms - and the nurse who signed us in certainly didn't seem to think those symptoms were very alarming.

The thing is, of course, that non-doctors are legally prohibited from making diagnoses. So neither the nurse on the advice line nor the nurse who signed us into the emergency room was allowed to say: "It's a migraine headache, you idiots."

You see, I'd heard the phrase "migraine headache", but I'd had no idea of what the symptoms of a "migraine headache" were. My studies in neurology told me about strokes and lateral brain damage, because those are very important to the study of functional neuroanatomy. So I knew about these super dangerous and rare killer events that seemed sort of like the symptoms we were encountering, but I didn't know about the common events that a doctor sees every day.

When you see symptoms, you think of lethal zebras, because those are what you read about in the newspapers. The doctor thinks of much less exciting horses. This is why the Medical Establishment has always been right, in my experience, every single time I'm alarmed and they're not.

But in answer to your question about selection effects, Psy-Kosh, I think I'd have noticed if my friend had actually had a stroke. In fact, it would have been much more likely to have been reported and repeated than the reverse case.

Replies from: Strilanc
comment by Strilanc · 2012-10-15T06:03:27.618Z · LW(p) · GW(p)

I had a similar experience with my girlfriend, except the symptoms were significantly more alarming. She was, among other things, unable to remember many common nouns. I would point and say "What is that swinging room separator?" and she would be unable to figure out "door".

I was aware from the start that the symptoms might have been due to a migraine aura, having looked up the symptoms on Wikipedia, but was advised by 811 to take her to the hospital immediately. The symptoms were gone before we arrived. Five hours later (a strong hint that at least the triage people thought it wasn't an emergency), a doctor had diagnosed it as a silent migraine.

comment by Psy-Kosh · 2007-09-30T22:02:52.000Z · LW(p) · GW(p)

Okie, and yeah, I imagine you would have noticed.

Also, of course, docs that habitually misdiagnose would presumably be sued or worse to oblivion by friends and family of the deceased. I was just unsure about the actual strength of that one thing I mentioned.

comment by charon · 2008-01-05T16:41:35.000Z · LW(p) · GW(p)

I think one would be closest to the truth by replying: "I don't quite believe that your story is true, but if it is, you should... etc." because there is no way for you to know for sure whether he was bluffing or not. You have to admit both cases are possible even if one of them is highly improbable.

comment by tel · 2009-10-29T06:02:16.397Z · LW(p) · GW(p)

Doesn't any model contain the possibility, however slight, of seeing the unexpected? Sure this didn't fit with your model perfectly — and as I read the story and placed myself in your supposed mental state while trying to understand the situation, I felt a great deal of similar surprise — but jumping to the conclusion that someone was just totally fabricating is something that deserves to be weighed against other explanations for this deviation from your model.

Your model states that pretty much under all circumstances an ambulance is going to pick up a patient. This is true to my knowledge as well, but what happens if the friend didn't report to you that once the ambulance arrived he called it off and refused to be transported? Or perhaps at the same time his chest pains were being judged as not-so-severe, the ambulance got another call that a massive car pileup required their immediate presence.

Your strength as a rationalist must not be the rejection of things unlikely in your model but instead the act of providing appropriate levels of concern. Perhaps the best response is something along the lines of "Sounds like a pretty strange occurrence. Are you sure your friend told you everything?" Now we're starting to judge our level of confidence in the new information being valid.

Which is honestly a pretty difficult model to shake as well. So much of every bit of information you build your world with comes from other people that I think it pretty decent to trust with some amount of abandon.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-10-29T11:04:14.320Z · LW(p) · GW(p)

See antiprediction.

Replies from: tel
comment by tel · 2009-10-29T23:49:18.357Z · LW(p) · GW(p)

That's certainly sensible, and in But There's Still a Chance Eliezer makes examples where this seems strong. In the above example, it depends a whole lot on how much belief you have in people (or, rather, lines of IRC chat).

I think then that your strength as a rationalist comes in balancing that uncertainty against your prior trust in people. At which point, instead of predicting the negative, I'd seek more information.

Replies from: Dpar
comment by Dpar · 2010-01-14T18:41:42.077Z · LW(p) · GW(p)

The level of "trust" you have in a person should be inversely proportional to the sensationalism of the claim that he's making.

If a person tells you he was abducted by a UFO, you demand evidence.

If a person tells you that on the way to work he slipped and fell down, and you have no concrete reason to doubt the story in particular or the person in general, you take that at face value. It is a reasonable assumption that a perfect stranger in all likelihood will NOT be delusional or a compulsive liar.

DP

Replies from: tel
comment by tel · 2010-01-24T01:23:25.715Z · LW(p) · GW(p)

That makes sense if you're only evaluating complete strangers. In other words, your uncertainty about the population-inferred trustworthiness of a person is pretty high and so instead the mere (Occam Factor style) complexity of their statement is the overruling component of your decision.

In the stated case, this isn't a totally random stranger. I feel quite justified in having a less-than-uninformative prior about trusting IRC ghosts. In this case, my rationally acquired prejudice overrules my inference about the truth of even somewhat ordinary tales.

Replies from: Dpar
comment by Dpar · 2010-05-11T06:36:54.576Z · LW(p) · GW(p)

The author did not mention anything about an exceptionally high percentage of liars in IRC relative to the general population (which would be quite relevant to his statement) therefore there's no reason to believe that such had been HIS experience in the past.

Given that, there is no reason for HIM to presume that the percentage of compulsive liars in IRC would be different from the general population's. YOUR experiences may, of course, be drastically different, but they are not the subject of discussion here.

DP

Replies from: MrPineapple
comment by MrPineapple · 2011-01-24T01:03:47.775Z · LW(p) · GW(p)

And there's always the prisoner's dilemma to consider.

comment by Dpar · 2010-01-14T18:36:07.006Z · LW(p) · GW(p)

I don't see that you did anything at all irrational. You're talking to a complete stranger on the internet. He doesn't know you, and cannot have any possible interest in deceiving you. He tells you a fairly detailed story and asks for your advice. For him to make the whole thing up just for kicks would be an example of highly irrational and fairly unlikely behavior.

Conversely, a person's panicking over chest pains and calling the ambulance is a comparatively frequent occurrence. Your having read somewhere something about ambulance policies does not amount to having concrete, irrefutable knowledge that an ambulance crew cannot make an on-site determination that there's no need to take a person to the hospital. To a person without extensive medical knowledge there is nothing particularly unlikely about the story you were told.

Therefore, the situation is this -- you are told by a complete stranger that has no reason to lie to you a perfectly believable story. You have no concrete reason ("read something somewhere" does not qualify) to doubt either the story or the man's sanity. Thus there is nothing illogical about taking the story at face value. You did the perfectly rational thing.

Since there was no irrationality in your initial behavior, the conclusions that you arrive at further in your post are unfounded.

DP

Replies from: NancyLebovitz, Sniffnoy, None
comment by NancyLebovitz · 2010-04-04T09:26:49.674Z · LW(p) · GW(p)

You're talking to a complete stranger on the internet. He doesn't know you, and cannot have any possible interest in deceiving you.

There's plenty of evidence that some people (a smallish minority, I think) will deceive strangers for the fun of it.

Replies from: Dpar, SilasBarta
comment by Dpar · 2010-05-11T06:20:12.552Z · LW(p) · GW(p)

Which, as I said later on in the same paragraph, is irrational and unlikely behavior. Therefore, when lacking any factual evidence, the reasonable presumption is that that's not the case.

DP

Replies from: RobinZ
comment by RobinZ · 2010-05-11T15:14:00.473Z · LW(p) · GW(p)

I think many of us have actually encountered liars on the Internet. I'm not sure what you mean when you say "lacking any factual evidence".

Replies from: Dpar
comment by Dpar · 2010-06-07T11:07:01.943Z · LW(p) · GW(p)

I presume that you have encountered liars in the real world as well. Do you, on that basis, habitually assume that a random stranger engaging in casual conversation with you is a liar?

My point is that pathological liars are a small minority. So if you're dealing with a person that you know absolutely nothing about, and who does not have any conceivable reason to lie to you, there is nothing unreasonable in assuming that he's telling you the truth, unless you have factual evidence (i.e. you have accurate, verifiable knowledge of ambulance policies) that contradicts what he's saying.

DP

Replies from: RobinZ, persephonehazard
comment by RobinZ · 2010-06-07T12:06:04.746Z · LW(p) · GW(p)

I think at this point the questions have become (a) "how many bits of evidence does it take to raise 'someone is lying' to prominence as a hypothesis?" and (b) "how many bits of evidence can I assign to 'someone is lying' after evaluating the probability of this story based on what I know?"

I believe your argument is that a > b (specifically, that a is large and b is small), where the post asserts that a < b. I'm not going to say that's unreasonable, given that all we know is what Eliezer Yudkowsky wrote, but often actual experience has much more detail than any feasible summary - I'm willing to grant him the benefit of the doubt, given that his tiny note of discord got the right answer in this instance.
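
The "bits of evidence" framing above can be put in numbers: bits are log-base-2 likelihood ratios, added to log prior odds. A hedged sketch, where the prior (1 in 20 dramatic IRC stories being fabricated) and the 20:1 likelihood ratio are purely made-up illustrative values, not anything from the thread:

```python
from math import log2

def bits(p):
    """Log-odds of probability p, measured in bits."""
    return log2(p / (1 - p))

# Assumed prior: 1 in 20 dramatic IRC stories are fabricated.
prior = 0.05

# Bits of evidence needed just to raise "he's lying" to even odds:
needed = bits(0.5) - bits(prior)
print(round(needed, 2))  # ~4.25 bits

# A story ~20x more likely under "lying" than under "truthful"
# supplies log2(20) bits -- barely enough, on these numbers.
print(round(log2(20), 2))  # ~4.32 bits
```

This is just the arithmetic behind the a-versus-b dispute: whether the implausibility of the story (b) exceeds the evidential distance (a) from the prior to "probably lying" depends entirely on the numbers you plug in.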

Replies from: Dpar
comment by Dpar · 2010-08-09T17:41:54.384Z · LW(p) · GW(p)

My argument is what I stated, nothing more. Namely that there is nothing unreasonable about assuming that a perfect stranger that you're having a casual conversation with is not trying to deceive you. I already laid out my reasoning for it. I'm not sure what more I can add.

DP

comment by persephonehazard · 2011-06-07T22:33:45.844Z · LW(p) · GW(p)

"Do you, on that basis, habitually assume that a random stranger engaging in casual conversation with you is a liar?"

Yes. Absolutely. Almost /everyone/ lies to complete strangers sometimes. Who among us has never given an enhanced and glamourfied story about who they are to a stranger they struck up a conversation with on a train?

Never? Really? Not even /once/?

Replies from: Alicorn
comment by Alicorn · 2011-06-07T22:43:55.246Z · LW(p) · GW(p)

Yes. Absolutely. Almost /everyone/ lies to complete strangers sometimes. Who among us has never given an enhanced and glamourfied story about who they are to a stranger they struck up a conversation with on a train?

Never? Really? Not even /once/?

If everyone regularly talked to strangers on trains, and exactly once lied to such a stranger, it would still be pretty safe to assume that any given train-stranger is being honest with you.

Replies from: persephonehazard
comment by persephonehazard · 2011-06-08T02:55:40.851Z · LW(p) · GW(p)

Actually, yes, you're entirely right.

In conversations I've had about this with friends - good grief, there's a giant flashing anecdata alert if ever I did see one, but it's the best we've got to go off here - I would suspect that people do it often enough that it's a reasonable thing to consider in a situation like the one being discussed here, though.

Not that I think it's a bad thing that the person in question didn't, mind you. It would be a very easy option not to consider.

comment by SilasBarta · 2011-06-07T23:02:23.352Z · LW(p) · GW(p)

Yes, they deceive strangers in particular ways that have the potential to bring enjoyment to the deceiver. The story here doesn't strike me as one of those cases -- would it bring the deceiver any mirth to hear people's medical advice about chest pains? Probably not. That would be more likely if the story were something like, "um, I've got these strange warts on my..."

(And I say this as someone who's trolled IRC with similar requests for advice.)

comment by Sniffnoy · 2010-04-04T10:17:29.858Z · LW(p) · GW(p)

("read something somewhere" does not qualify)

Wait, why not?

Replies from: Dpar
comment by Dpar · 2010-05-11T06:22:01.535Z · LW(p) · GW(p)

I read somewhere that if I spin about and click my heels 3 times I will be transported to the land of Oz. Does that qualify as a concrete reason to believe that such a land does indeed exist?

DP

Replies from: thomblake
comment by thomblake · 2010-08-09T17:49:31.820Z · LW(p) · GW(p)

I read somewhere that if I spin about and click my heels 3 times I will be transported to the land of Oz. Does that qualify as a concrete reason to believe that such a land does indeed exist?

That indeed serves as evidence for that fact, though we have much stronger evidence to the contrary.

N.B. You do not need to sign your comments; your username appears above every one.

Replies from: wedrifid, Dpar
comment by wedrifid · 2010-08-09T17:54:49.486Z · LW(p) · GW(p)

That indeed serves as evidence for that fact, though we have much stronger evidence to the contrary.

And not just because clicking the heels three times is more canonically (and more often) said to be the way to return to Kansas from Oz, not the way to get to Oz.

comment by Dpar · 2010-08-09T18:44:57.699Z · LW(p) · GW(p)

So the fact that something was written somewhere is sufficient to meet your criteria for considering it evidence? I take it you have actually tried clicking your heels to check whether or not you would be teleported to Oz then?

Also, does my signing my comments offend you?

DP

Replies from: Vladimir_Nesov, wedrifid
comment by Vladimir_Nesov · 2010-08-09T18:48:38.532Z · LW(p) · GW(p)

Also, does my signing my comments offend you?

It hurts aesthetically by disrupting uniformity of standard style.

Replies from: Dpar
comment by Dpar · 2010-08-09T19:13:23.405Z · LW(p) · GW(p)

Fair enough. It's a habit of mine that I'm not married to. If members of this board take issue with it, I can stop.

comment by wedrifid · 2010-08-09T19:06:48.512Z · LW(p) · GW(p)

So the fact that something was written somewhere is sufficient to meet your criteria for considering it evidence?

Yes. It's really sucky evidence.

I take it you have actually tried clicking your heels to check whether or not you would be teleported to Oz then?

This doesn't remotely follow and is far weaker evidence than other available sources. For a start, everyone knows that you get to Oz with tornadoes and concussions.

Also, does my signing my comments offend you?

It makes you look like an outsider who isn't able to follow simple social conventions and may have a tendency towards obstinacy. (Since you asked...)

Replies from: Dpar
comment by Dpar · 2010-08-09T19:19:32.130Z · LW(p) · GW(p)

"This doesn't remotely follow and is far weaker evidence than other available sources. For a start, everyone knows that you get to Oz with tornadoes and concussions."

Let's not get bogged down in the specific procedure of getting to Oz. My point was that if you truly adopt merely seeing something written somewhere as your standard for evidence, you commit yourself to analyzing and weighing the merits of EVERYTHING you read about EVERYWHERE. Do you mean to tell me that when you read a fairy tale you truly consider whether or not what's written there is true? That you don't just dismiss it offhand without giving it a second thought?

"It makes you look like an outsider who isn't able to follow simple social conventions and may have a tendency towards obstinacy. (Since you asked...)"

Like I said above to Vladimir, it's not a big deal, but you're reading quite a bit into a simple habit.

Replies from: Vladimir_Nesov, jimrandomh, wedrifid
comment by Vladimir_Nesov · 2010-08-09T19:29:03.571Z · LW(p) · GW(p)

The fact that something is really written is true; whether it implies that the written statements themselves are true is a separate theoretical question. Yes, ideally you'd want to take into account everything you observe in order to form an accurate idea of future expected events (observable or not). Of course, it's not quite possible, but not for the want of motivation.

Replies from: Dpar
comment by Dpar · 2010-08-09T19:36:30.073Z · LW(p) · GW(p)

Well, I didn't think I needed to clarify that I'm not questioning whether or not something that's written is really written. Of course I'm questioning the truthfulness of the actual statement.

Or not so much its truthfulness, but rather whether or not it can be considered evidence. Though I realize that you take issue with arguing over word definitions, to me the word "evidence" has a certain meaning that goes beyond every random written sentence, whisper, or rumor that you encounter.

Replies from: Vladimir_Nesov, Cyan
comment by Vladimir_Nesov · 2010-08-09T19:39:17.258Z · LW(p) · GW(p)

The fact that something is written, or not written, is evidence about the way the world is, and hence to some extent evidence about any hypothesis about the world. Whether it's strong evidence about a given hypothesis is a different question, and whether the statement written/not written is correct is yet another question.

(See also the links from this page.)

comment by Cyan · 2010-08-09T19:43:10.030Z · LW(p) · GW(p)

Though I realize that you take issue with arguing over word definitions, to me the word "evidence" has a certain meaning that goes beyond every random written sentence, whisper, or rumor that you encounter.

Around these parts, a claim that B is evidence for A is a taken to be equivalent to claiming that B is more probable if A is true than if not-A is true. Something can be negligible evidence without being strictly zero evidence, as in your example of a fairy story.
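To make this definition concrete, here is a minimal sketch of the likelihood comparison Cyan describes (the probabilities are invented purely for illustration):

```python
# Sketch of the local definition: "B is evidence for A" iff
# P(B | A) > P(B | not-A). All numbers below are invented.
def is_evidence_for(p_b_given_a: float, p_b_given_not_a: float) -> bool:
    """True when observing B should raise our credence in A."""
    return p_b_given_a > p_b_given_not_a

# A fairy tale about Oz is (hypothetically) written ever so slightly more
# often in worlds where Oz exists -- negligible but not strictly zero evidence:
print(is_evidence_for(0.9000, 0.8999))  # True
# If B is equally likely either way, it is strictly zero evidence:
print(is_evidence_for(0.5, 0.5))        # False
```

The point of the example is that the definition is binary about *whether* something is evidence, but silent about *how much*; the gap between the two likelihoods can be made arbitrarily small without the answer flipping to "not evidence."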

comment by jimrandomh · 2010-08-09T19:39:10.034Z · LW(p) · GW(p)

Let's not get bogged down in the specific procedure of getting to Oz. My point was that if you truly adopt merely seeing something written somewhere as your standard for evidence, you commit yourself to analyzing and weighing the merits of EVERYTHING you read about EVERYWHERE.

No, you can acknowledge that something is evidence while also believing that it's arbitrarily weak. Let's not confuse the practical question of how strong evidence has to be before it becomes worth the effort to use it ("standard of evidence") with the epistemic question of what things are evidence at all. Something being written down, even in a fairy tale, is evidence for its truth; it's just many orders of magnitude short of the evidential strength necessary for us to consider it likely.

Replies from: Dpar
comment by Dpar · 2010-08-09T19:53:50.288Z · LW(p) · GW(p)

Vladimir, Cyan, and jimrandomh, since you essentially said the same thing, consider this reply to be addressed to all three of you.

Answer me honestly, when reading a fairy tale, do you really stop to consider what's written there, qualify its worth as evidence, and compare it to everything else you know that might contradict it, before making the decision that the probability of the fairy tale being true is extremely low? Do you really not just dismiss it offhand as not true without a second thought?

Replies from: Cyan, Oligopsony, Vladimir_Nesov
comment by Cyan · 2010-08-09T19:58:18.940Z · LW(p) · GW(p)

When I pick up a work of fiction, I do not spend time assessing its veracity. If I read a book of equally fantastic claims which purports to be true, I do spend a little time. You might want to peruse bounded rationality for an overview.

Replies from: Dpar
comment by Dpar · 2010-08-09T20:02:14.628Z · LW(p) · GW(p)

So you would then agree that merely the fact that something is written SOMEWHERE does not automatically qualify it as evidence?

(Incidentally that is my original point, which in spite of seeming as common sense as common sense can be, has attracted a surprising amount of disagreement.)

Replies from: Cyan, Unknowns
comment by Cyan · 2010-08-09T20:07:04.427Z · LW(p) · GW(p)

So you would then agree that merely the fact that something is written SOMEWHERE does not automatically qualify it as evidence?

You have to specify what it purports to be evidence of before I can give you an answer that isn't a tangent.

Edited to add: Maybe I can do better than the above sentence. I affirm that the existence of this book is negligible but not strictly zero evidence for the claims detailed therein.

Replies from: Dpar
comment by Dpar · 2010-08-09T20:52:12.869Z · LW(p) · GW(p)

At this point I'm not sure what we can do other than agree to disagree. I do not consider a random article from an obscure source on the internet to be evidence of anything.

comment by Unknowns · 2010-08-09T20:12:18.766Z · LW(p) · GW(p)

There may be a sense in which this is common sense, but you were purposely using it tendentiously, which is why people responded in the technical way that they did.

Eliezer said that he read something "somewhere", obviously intending to say that he read it somewhere that he considered trustworthy at the time, not in a fairy tale.

Replies from: Dpar
comment by Dpar · 2010-08-09T20:56:39.971Z · LW(p) · GW(p)

Well, what can I say? I simply don't consider the vague recollection of reading something somewhere credible evidence of anything, and I stand by that. However, the number of people who took issue with this statement did open my eyes to the fact that the definition of the word "evidence" is not as clear-cut as I thought it to be. Not sure if there's any way to resolve this difference of opinion, though.

Replies from: WrongBot, thomblake
comment by WrongBot · 2010-08-09T21:00:34.431Z · LW(p) · GW(p)

The easy solution is to stop arguing about the definition of evidence. This community uses it to mean one thing, you're using it to mean something else, and any sort of conflict goes away as soon as people make clear which definition they're using. Since this community already has an accepted definition, you would be safe in assuming that that definition is what other posters here have in mind when they use the word "evidence". By the same token, you should probably find a more precise way to refer to the definition of evidence that you are using in order to avoid being misinterpreted.

Replies from: jimrandomh, Dpar
comment by jimrandomh · 2010-08-09T21:13:03.387Z · LW(p) · GW(p)

Sticking an adjective in front of the word evidence seems to work. "Evidence" includes things that give you 10^-15 bits of information; on the other hand "good evidence", "usable evidence" and "credible evidence" all imply that the strength of the evidence is at least not exponentially tiny.
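The "bits of information" framing jimrandomh uses can be sketched as the base-2 log of the likelihood ratio. A small illustration, with invented likelihoods:

```python
import math

def bits_of_evidence(p_b_given_a: float, p_b_given_not_a: float) -> float:
    """Log-2 likelihood ratio: how many bits observing B contributes toward A."""
    return math.log2(p_b_given_a / p_b_given_not_a)

# Decent evidence: an observation 99% likely under A, 1% likely otherwise.
print(bits_of_evidence(0.99, 0.01))        # ~6.6 bits
# "Evidence" in the unmodified technical sense can be astronomically weak:
print(bits_of_evidence(0.5 + 1e-15, 0.5))  # positive, but ~1e-15 bits
```

On this view "credible evidence" or "usable evidence" just marks out the region where the bit count is large enough to be worth mentioning.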

Replies from: SilasBarta
comment by SilasBarta · 2010-08-09T22:01:28.133Z · LW(p) · GW(p)

I thought that "evidence", unmodified, would mean non-trivial evidence; otherwise, everything has to count as evidence because it will have some connection to the hypothesis, however weak. To specify a kind of evidence that includes the 1e-15 bit case, I think you would need to say "weak evidence" or "very weak evidence".

But I'm not the authority on this: How do others here interpret the term "evidence" (in the Bayesian sense) when it's unmodified?

Replies from: thomblake, jimrandomh, JoshuaZ
comment by thomblake · 2010-08-09T22:21:14.586Z · LW(p) · GW(p)

I'm sympathetic to both views.

I have encountered a number of disputes that revolve around using these two different senses of the word, and am nonetheless blindsided by them consistently.

I try to always specify the strength of evidence in some sense when using the word. I think when I do use it unmodified I tend to use it in the technical sense (including even weak evidence).

It would be odd if 'evidence' excluded weak evidence, since then 'weak evidence' would be a contradiction in terms, or you could see people arguing things like "When I said 'weak evidence' I didn't mean the 1e-15 bit case, since that's not evidence at all!"

comment by jimrandomh · 2010-08-09T22:32:21.674Z · LW(p) · GW(p)

Hmm. Maybe the strength of the evidence isn't the right thing to use, but rather the confidence with which we know the sign of the correlation.

comment by JoshuaZ · 2010-08-09T23:16:14.409Z · LW(p) · GW(p)

If I were talking to a Bayesian, I would interpret it as meaning that "B is evidence for A" if a rough calculation shows that P(A|B) > P(A). I don't generally expect rationalists to even mention individual data points unless P(A|B)/P(A) is large, but if someone else gave the data as an example, then I wouldn't expect the ratio to necessarily be large when a Bayesian referred to it as evidence. So for example, I could see a Bayesian asserting that the writing of the Bible is evidence for a global flood some 5,000 years ago, but I'd be deeply surprised if a Bayesian brought this up in almost any context, because the evidence is so weak (in this case P(A|B) > P(A), but P(A|B)/P(A) is very close to 1).
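The P(A|B)/P(A) ratio in question can be sketched with Bayes' rule directly; the prior and likelihoods below are invented for illustration:

```python
def posterior(prior: float, p_b_given_a: float, p_b_given_not_a: float) -> float:
    """Bayes' rule: P(A | B) from a prior on A and the two likelihoods of B."""
    joint_a = prior * p_b_given_a
    joint_not_a = (1 - prior) * p_b_given_not_a
    return joint_a / (joint_a + joint_not_a)

prior = 0.001                            # invented prior P(A)
p_post = posterior(prior, 0.51, 0.50)    # B is barely more likely under A
print(p_post > prior)                    # True: technically evidence for A
print(p_post / prior)                    # ~1.02: the update is negligible
```

This is the JoshuaZ case in miniature: P(A|B) > P(A) holds, so B counts as evidence, yet P(A|B)/P(A) is so close to 1 that mentioning B would be strange.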

Replies from: SilasBarta
comment by SilasBarta · 2010-08-10T20:56:56.934Z · LW(p) · GW(p)

I agree, this sounds exactly right to me. Unfortunately, I remember that in a lot of Robin Hanson's earlier OvercomingBias posts, my reaction to them would be, "Yes, B is technically evidence in favor of A, but it's extremely weak -- why even mention it?" For example, Suicide Rock.

(I think I have a picture of one of those somewhere...)

comment by Dpar · 2010-08-09T21:18:58.764Z · LW(p) · GW(p)

That's fair enough. However, judging by what I've read, this community's definition of evidence seems to cover just about anything ever written about anything. How would you then differentiate evidence from rumor, hearsay, speculation, etc.?

Replies from: WrongBot, Vladimir_Nesov
comment by WrongBot · 2010-08-09T21:21:50.997Z · LW(p) · GW(p)

The wiki should be a good starting point for answering this question. What is Evidence? may also be helpful.

Short version: rumor, hearsay, and speculation are evidence, albeit of a very weak variety.

Replies from: Dpar
comment by Dpar · 2010-08-09T21:35:21.267Z · LW(p) · GW(p)

Well that clarifies things quite a bit. I find this definition of evidence surprising, especially in this community, but very interesting. I'll have to sleep on it. Thank you for the references.

comment by Vladimir_Nesov · 2010-08-09T21:24:09.538Z · LW(p) · GW(p)

Rumor, hearsay, etc. fall under our definition of evidence, just weak evidence, or perhaps very indirect evidence (for example, if there is a rumor that A, it might constitute evidence against A being true, given other things you know).

comment by thomblake · 2010-08-09T21:34:04.610Z · LW(p) · GW(p)

credible evidence

As noted by jimrandomh, saying 'credible evidence' does make an effort to differentiate between different sorts of evidence. If your claim was simply that reading something was not evidence, then you should not have to qualify the word when you use it now. I imagine for those of us who seem to be disagreeing with you, we would agree that that does not constitute 'credible evidence' for some values of 'credible'.

Replies from: Dpar
comment by Dpar · 2010-08-09T21:48:21.714Z · LW(p) · GW(p)

That's really clever. I always thought that "credible evidence" was a bit redundant, actually. I just used it as a figure of speech without thinking about it, but according to my definition of evidence, that it has to be credible is pretty much implicit. It has been made abundantly clear to me, however, that this community's definition differs substantially, so that's the definition I will use when posting here going forward.

comment by Oligopsony · 2010-08-09T20:00:05.673Z · LW(p) · GW(p)

No, but only because that would be cognitively burdensome. We're boundedly rational.

comment by Vladimir_Nesov · 2010-08-09T20:11:46.674Z · LW(p) · GW(p)

Immediate observation is only that something is written. That it's also true is a theoretical hypothesis about that immediate observation. That what you are reading is a fairy tale is evidence against the things written there being true, so the theory that what's written in a fairy tale is true is weak. On the other hand, the fact that you observe the words of a given fairy tale is strong evidence that the person (author) whose name is printed on the cover really existed.

Replies from: Dpar
comment by Dpar · 2010-08-09T20:47:35.618Z · LW(p) · GW(p)

All that is indisputably true. But you didn't really answer my question on whether or not you give enough consideration to what's written in a fairy tale (not whether or not it's written, not who it's written by, but the actual claims made therein) to truly consider it evidence to be incorporated into or excluded from your model of the world.

Replies from: Vladimir_Nesov, wedrifid
comment by Vladimir_Nesov · 2010-08-09T21:09:28.786Z · LW(p) · GW(p)

Evidence isn't usually something you "include" in your model of the world, it's something you use to categorize models of the world into correct and incorrect ones. Evidence is usually something not interesting in itself, but interesting instrumentally because of the things it's connected to (caused by).

comment by wedrifid · 2010-08-10T08:21:59.855Z · LW(p) · GW(p)

But you didn't really answer my question on whether or not you give enough consideration to what's written in a fairy tale (not whether or not it's written, not who it's written by, but the actual claims made therein) to truly consider it evidence to be incorporated into or excluded from your model of the world.

That is because it is a bad question and one of a form for which you have already received responses.

comment by wedrifid · 2010-08-10T08:18:14.747Z · LW(p) · GW(p)

Do you mean to tell that when you read a fairy tale you truly consider whether or not what's written there is true?

This doesn't remotely follow either. Go and research the concept of evidence more.

Like I said above to Vladimir, it's not a big deal, but you're reading quite a bit into a simple habit.

I care little about your signature. I merely describe the social behaviour of humans. What actually does annoy me is if people refuse to use markdown syntax for quotes once they have been prompted. Click the help link below the comment box - consider yourself prompted.

Replies from: Dpar
comment by Dpar · 2010-08-10T12:35:58.636Z · LW(p) · GW(p)

Duly noted. God forbid I do something that annoys you. Won't be able to live with myself.

Replies from: ciphergoth, JoshuaZ
comment by Paul Crowley (ciphergoth) · 2010-08-10T12:41:33.189Z · LW(p) · GW(p)

As always, I recommend against sarcasm, which can hide errors in reasoning that would be more obvious when you speak straightforwardly.

Replies from: Dpar
comment by Dpar · 2010-08-10T13:16:17.425Z · LW(p) · GW(p)

It was a comment on wedrifid's implicit assumption that I should care about what annoys him and bizarre expectation that I would adjust my behavior because I was "prompted" (not asked politely mind you) by him. Not sure what part of that is not obvious to you.

comment by JoshuaZ · 2010-08-10T13:25:13.218Z · LW(p) · GW(p)

Generally, when some minor formatting issue annoys a long-standing member of an internet community it is a good idea to listen to what they have to say. Many internet fora have standard rules about formatting and style that aren't explicitly expressed. These rules are convenient because they make reading easier for everyone. There's also a status/signaling aspect in that not using standard formatting signals someone is an outsider. Refusing to adopt standard format and styling signals an implicit lack of identification with a community. Even if one doesn't identify with a group, the effort it takes to conform to formatting norms is generally small enough that the overall gain is positive.

Replies from: Dpar
comment by Dpar · 2010-08-11T11:55:52.675Z · LW(p) · GW(p)

You're absolutely right. I have no problem using indentation for quotes; as a matter of fact, I was wondering how to do that. It's his condescending tone that I took issue with. In retrospect, though, I should have just ignored it, but I let my temper get the best of me. I'll try to keep counter-productive comments to a minimum in the future.

Replies from: RobinZ
comment by RobinZ · 2010-08-11T22:08:59.500Z · LW(p) · GW(p)

Indentation happens by putting a greater-than sign at the beginning of the line. Thus:

> The quick brown fox jumps over the lazy dog.

becomes

The quick brown fox jumps over the lazy dog.

comment by [deleted] · 2010-08-09T19:02:14.247Z · LW(p) · GW(p)

I'm not sure of the particulars of your situation, but I personally encounter people lying on the internet orders of magnitude more times than I do people having chest pains.

comment by xamdam · 2010-02-26T21:19:58.386Z · LW(p) · GW(p)

An alternative explanation? You put your energy into solving a practical problem with a large downside (minimizing the loss function, in nerdese). Yes, to be perfectly rational you should have said: "the guy is probably lying, but if he is not, then...".

comment by Amanojack · 2010-03-14T03:38:26.700Z · LW(p) · GW(p)

It is a design flaw in human cognition that this sensation manifests as a quiet strain in the back of your mind, instead of a wailing alarm siren and a glowing neon sign reading "EITHER YOUR MODEL IS FALSE OR THIS STORY IS WRONG."

I wouldn't call it a flaw; blaring alarms can be a nuisance. Ideally you could adjust the sensitivity settings . . . hence the popularity of alcohol.

comment by Perplexed · 2010-07-23T01:19:30.239Z · LW(p) · GW(p)

Thank you, Eliezer. Now I know how to dissolve Newcomb type problems. (http://lesswrong.com/lw/nc/newcombs_problem_and_regret_of_rationality/)

I simply recite, "I just do not believe what you have told me about this intergalactic superintelligence Omega".

And of course, since I do not believe, the hypothetical questions asked by Newcomb problem enthusiasts become beneath my notice; my forming a belief about how to act rationally in this contrary-to-fact hypothetical situation cannot pay the rent.

Replies from: Ezekiel
comment by Ezekiel · 2012-01-09T00:32:33.068Z · LW(p) · GW(p)

Fair enough (upvoted); but I'm pretty sure Parfit's Hitchhiker is analogous to Newcomb's Problem, and that's an absolutely possible real-world scenario. Eliezer presents it in chapter 7 of his TDT document.

comment by zero_call · 2010-08-17T17:49:34.769Z · LW(p) · GW(p)

This sort of brings to my mind Pirsig's discussions about problem solving in ZATAOMM. You get that feeling of confusion when you are looking at a new problem, but that feeling is actually a really natural, important part of the process. I think the strangest thing to me is that this feeling tends to occur in a kind of painful way -- there is some stress associated with the confusion. But as you say, and as Pirsig says, that stress is really a positive indication of the maturation of an understanding.

comment by JohnDavidBustard · 2010-09-01T12:17:44.421Z · LW(p) · GW(p)

I'm not sure that listening to one's intuitions is enough to cause accurate model changes. Perhaps it is not rational to hold a single model in your head, since your information is incomplete. Instead one can consciously examine the situation from multiple perspectives; in this way the nicer (simpler, more consistent, whatever your metric is) model's response can be applied. Alternatively, you could legitimately assume that all the models you hold have merit and produce a response that balances their outcomes, e.g. if your model of the medical profession is wrong and they die from your advice, that is much worse than an unnecessary ambulance call (letting the medical profession address the balance of resources). This would lead a rational person to simultaneously believe many contradictory perspectives and act as if they were all potentially true. Does anyone know of any theory in this area? The modelling of models (and efficient resolution of multiple models) would be very useful in AI.

comment by Summerspeaker · 2010-11-07T01:15:36.522Z · LW(p) · GW(p)

Considering that medical errors apparently kill more people than car accidents each year in the United States, I suspect the establishment is not in fact infallible.

Replies from: tgb
comment by tgb · 2013-04-04T02:53:04.099Z · LW(p) · GW(p)

Citation needed? I know I'm coming to this rather late, but a quick check of the 2010 CDC report on deaths in the US gives "Complications of medical and surgical care" as causing 2,490 deaths, whereas transport accidents caused 37,961 deaths (35,332 of which were classified as 'motor vehicle deaths'). The only other thing I can see that might put medical errors under a different heading is "Accidental poisoning and exposure to noxious substances" at 33,041, which combines to still fewer deaths than transport accidents, even without removing those poisonings which are not medical errors. (This poisoning category appears to include a lot of recreational drug overdoses, judging by the way it sharply increases in the 15-24 age group and then drops off after 54, whereas time-spent-in-hospital is presumably increasing with age.)

On the other hand, a 2012 New York Times Op-Ed claims 98,000 deaths from medical errors a year. This number is so much larger than what the CDC reports that I must be misreading something. That would be about 1 in 20 people who die in the US dying due to medical error. (Original source from 1999.) Actually, checking that source, 98,000 deaths/year is the upper-bound number given (lower bound of 44,000 deaths/year). The report also recommends a 50% reduction in these deaths within 5 years (so by 2004), and Wikipedia mentions a 2006 study claiming that they successfully prevented 120,000 deaths in an 18-month time period, but I can't find this study. A 2001 followup here appears to focus on suggestions for improvements rather than on giving new data for our question. 3 minutes on Google Scholar didn't turn up any recent estimates. This entire sub-field appears to rely very heavily upon that one source, at least in the US.

Also of interest is "Actual Causes of Death in the US", which classifies deaths by 'mistake made' (so to speak): the top killer being tobacco use, then poor diet/low exercise, alcohol, microbial agents, toxic agents, car accidents, firearms, sexual behaviors, and illicit drug use. Medical errors didn't show up high on this list, despite it being the only source in the Wikipedia article on the original article.

Edit: also some places that cite the 1999 study accuse the CDC of not reporting these deaths as their own category. This appears to have changed given the category I reported above. The fact that there has been substantial uproar about medical error since the 1999 article and a corresponding increase in funding for studying it makes me unsurprised that the CDC would start reporting.

Replies from: OrphanWilde
comment by OrphanWilde · 2013-04-04T03:49:45.813Z · LW(p) · GW(p)

If a doctor makes a mistake treating a patient from a vehicle accident, what heading does it get reported under?

(I ask the question in earnest, to anybody who might know the answer - because depending on what the answer is, it could explain the discrepancy.)

comment by gwern · 2011-04-22T14:12:31.555Z · LW(p) · GW(p)

From TvTropes:

"According to legend, one night the students of Baron Cuvier (one of the founders of modern paleontology and comparative anatomy) decided to play a trick on their instructor. They fashioned a medley of skins, skulls and other animal parts (including the head and legs of a deer) into a credibly monstrous costume. One brave fellow then donned the chimeric assemblage, crept into the Baron's bedroom when he was asleep and growled "Cuvier, wake up! I am going to eat you!" Cuvier woke up, took one look at the deer parts that formed part of the costume and sniffed "Impossible! You have horns and hooves!" (one would think "what sort of animals have horns and hooves" is common knowledge).

More likely he was saying "Impossible! You have horns and hooves (and are therefore not a predator)." The prank is more commonly reported as: "Cuvier, wake up! I am the Devil! I am going to eat you!" His response was "Divided hoof; graminivorous! It cannot be done." Apparently Satan is vegan. Don't comment that some deer have been seen eating meat or entrails; I occasionally grab the last slice of my bud's pizza, but that doesn't classify me as a scavenger."

comment by [deleted] · 2011-09-03T22:28:50.150Z · LW(p) · GW(p)

How do you face this situation as a rationalist?

Replies from: lessdazed, Vaniver
comment by lessdazed · 2011-09-03T22:49:51.232Z · LW(p) · GW(p)

I think more context is necessary. Sorry.

comment by Vaniver · 2011-09-03T22:52:33.987Z · LW(p) · GW(p)

I believe the evidence is that the initial urge of A is more credible than the rationalization of B. That is, when students change answers on multiple choice tests, they are more likely to turn a right answer to a wrong answer than a wrong answer to a right answer. (I don't know if that generalizes to a true-false setting.)

Replies from: Rixie
comment by Rixie · 2013-04-04T05:56:59.616Z · LW(p) · GW(p)

It matters why "B sounds more plausible to your mind." If it's because you remembered a new fact, or if you reworked the problem and came out with B, change the answer (after checking that your work was correct and everything). Many multiple choice tests are written so that there is one right answer, one wrong answer, and two plausible-sounding answers, so you shouldn't change an answer just because B is starting to sound plausible.

Replies from: Vaniver
comment by Vaniver · 2013-04-04T23:13:06.803Z · LW(p) · GW(p)

There are two modes of reasoning that are useful that I'd like to briefly discuss: inside view, and outside view.

Inside view uses models with small reach / high specificity. Outside view uses models with large reach / high generality. Inside view arguments are typically easier to articulate, and thus often more convincing, but there are often many reasons to prefer outside view arguments. (Generally speaking, there are classes of decisions where inside view estimates are likely to be systematically biased, and so using the outside view is better.)

When wondering whether to switch an answer, the inside view recommends estimating which answer is better. The outside view recommends looking at the situation you're in- "when people have switched answers in the past, has it generally helped or hurt?".

There are times when switching leads to the better result. But the trouble is that you need to know that ahead of time- and so, as you suggest, there may be reasons to switch that you can identify as strong reasons. But the decision whether to apply the inside or outside view (or whether you collect enough data to increase the specificity of your outside view approach) is itself a decision you have to make correctly, which you probably want to use the outside view to track, rather than just trusting your internal assessment at the time.

comment by a_mshri · 2011-09-20T22:58:10.063Z · LW(p) · GW(p)

I feel really uncomfortable with this idea: "EITHER YOUR MODEL IS FALSE OR THIS STORY IS WRONG."

I think this statement suffers from the same limitations as propositional logic; consequently, it is not applicable to many real-life situations.

Most of the time, our model contains rules of this type (at least if we are rationalists): event A occurs in situation B with probability C, where C is not 0 or 1. Also, life experience teaches us that we should update the probabilities in our model over time. So besides the uncertainty caused by the probability C, there is also uncertainty resulting from our degree of belief in the correctness of the rule itself. The situation becomes more complicated when the problem is cost-sensitive.

I get your point (I hope so), and I'm definitely not trying to say "IT IS WRONG," but I think it is only true to some degree.

comment by [deleted] · 2012-01-10T08:39:48.082Z · LW(p) · GW(p)

This post frustrated me for a while, because it seems right but not helpful. Saying to myself, "I should be confused by fiction" doesn't influence my present decision.

First, concretize. Let's say I have a high-level world model. A few of them, perhaps, to reduce the chance that one bad example results in a bad principle.

"My shower produces hot water in the morning." "I have fresh milk to last the next two days." "The roads are no longer slippery."

What do these models exclude? "The water will be cold", "the milk will be spoiled", "I'll see someone sliding at an intersection" are easy ones. Then there are weirder ones like "I don't even own a shower", "Someone drank all my milk in the middle of the night", and "the roads are closed off due to an earthquake".

I could say, "My model as stated technically disallows all these things, so if I see any, I should have a huge update", but that's unrealistic. The use of "easy" and "weird" implicitly shows that I'm already thinking about hypotheses not as strictly allowing and disallowing, but as resulting in greater and lesser probabilistic gains/hits to my confidence.

Even if I do give up entirely on "I have fresh milk", I usually replace it with something that is consistent with the old reasoning (not just the old observations). Perhaps I reason "The milk should have been fresh but spoiled because of a temporary power outage last night". That's actually a bad example because it's not something I'd jump to if I didn't have other observations indicating a power outage. Let's try again. "The milk should have been fresh, but oh dang, it wasn't." Yes, that looks like something I'd think. What about the others? My first explanations would probably be "The roads are a little slippery some places" and "The water heater is acting up".

So what did we just see in this totally fictional but mildly plausible-sounding anecdote? Sometimes a failed hypothesis --becomes-> failed hypothesis + some noise. Other times it's like the water heater explanation, which looks pretty different. Let's think about the first type. Is this small model-distance update heuristic justified? The new model clearly gives more probability mass to our actual observations, but that's the representativeness heuristic, totally insufficient to judge whether the theory is acceptable. For that we look to Bayes:

P(H|E) = P(H) * P(E|H) / P(E)

P(E) will be the same for all hypotheses we consider, so just ignore that. P(E|H) is pretty high, since we added noise to make sure the hypothesis would predict evidence. What about P(H)? How do I practically compare the prior for different hypotheses? How do I know when adding noise to my model is good enough vs. when I need to search for new hypotheses?
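The "ignore P(E), compare prior times likelihood" move above can be sketched concretely. Here's a minimal Python sketch for the spoiled-milk example; the two hypotheses and all the numbers are made-up illustrative values, not measurements of anything:

```python
# Comparing two candidate explanations for "the milk tastes spoiled"
# using unnormalized posteriors: P(H|E) is proportional to P(H) * P(E|H).
# P(E) is the same for every hypothesis, so it cancels in the comparison.

def unnormalized_posterior(prior, likelihood):
    """Score a hypothesis by prior times likelihood."""
    return prior * likelihood

hypotheses = {
    # H1: ordinary spoilage ("should have been fresh, but oh dang, it wasn't")
    "milk_spoiled_normally": unnormalized_posterior(prior=0.05, likelihood=0.9),
    # H2: a power outage overnight warmed the fridge
    "power_outage": unnormalized_posterior(prior=0.005, likelihood=0.9),
}

best = max(hypotheses, key=hypotheses.get)
print(best)  # both hypotheses predict the evidence equally well,
             # so the comparison is decided entirely by the priors
```

With the likelihoods equal (both explanations were constructed to predict the sour taste), the whole comparison collapses onto P(H), which is exactly the part the six methods below try to estimate.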

Let's think of six different methods to guess whether our new hypothesis will be good enough.

  1. Outside view. Think of five times you've been confused in the past four months when dealing with spoiled foods you purchased. If you can't, you're well calibrated enough in the spoiled food domain and the hypothesis is fine.
  2. Solomonoff induction says that complexity penalizes the prior probability of hypotheses. Let's try some dirty things that look like that. Count the words in the hypothesis. Count the nouns and verbs. Count the number of conjunctions and subtract the number of disjunctions. Count syllables or time how long it takes you to say it. Do this a lot in your daily life so you know how big your theories generally are. Guess what your average has been. Compare to the milk explanation.
  3. Consequential salience. Think of four things your theory predicts and four things that your theory disallows. If any of those eight things makes you squick, count a point. Two squicks or more means your theory is weird and you need to look for a better hypothesis. If you spend long enough trying to think of a consequence that you notice the time, your hypothesis isn't paying rent in expectation and you need to look for a better hypothesis.
  4. Remember your day, looking for two pieces of confirming evidence. Remember your day, looking for three pieces of disconfirming evidence. Arbitrarily decide whether the hypothesis continues to jibe with new evidence. If not, new hypothesis formation.
  5. Imagine a wizard told you your new hypothesis before you tasted the spoiled milk. Imagine his clothes. Is the hypothesis sensible enough that you can trust him? Would you let him borrow a cup of sugar?
  6. You're wrong. Your hypothesis is simply wrong. Say it to yourself. Say that the milk is still fine. Imagine whether you could go about your day believing this. Can you drink the milk? If not, your hypothesis changed by a large amount and it's sensible to look for alternatives rather than sticking with your old reasoning by mental inertia.

Now the critical stage!

  1. You don't have time to remember the last four months. Don't even think about hypothesis priors unless you've already spent more than a minute trying to decide something. Milk is not a big deal, save your cognitive energy for the higher order bits of your life. Also, four months is kind of food-spoiling specific. Time frames would have to be adapted for different problems.

  2. That is not Solomonoff induction in any way. We don't even have a language for formally expressing high level concepts like "spoiled milk" unless you look at brain architecture to figure out how they classify reality. Also "compare" is not concrete enough.

  3. Emotional salience fails us badly in abstract situations. Thinking of disconfirming evidence is painful; our brains won't easily present squicky things.

  4. "Arbitrarily decide" is not an actionable procedure.

  5. This one actually seems kind of okay, unless you're just as likely to give sugar to senseless wizards.

  6. I'm not sure small updates have small changes in consequence value, but doing more thinking when costs are high generally doesn't seem horrible. Maybe we should add in something to keep us from thinking longer just to procrastinate though.

Conclusions! Priors over explanations are -hard-. Sometimes we naturally make new hypotheses, sometimes we just add some noise. Maybe take the outside view of yourself if you have time! Maybe take the outside view of the hypothesis by having a wizard tell it to you if not. Your strength as a rationalist? Not drinking spoiled milk, not wasting time thinking about spoiled milk, noticing squicks, successfully doubting yourself when you feel a squick, believing some things because they work really well even if they sound crazy when a wizard says them.

comment by FeatherlessBiped · 2012-01-17T03:47:59.204Z · LW(p) · GW(p)

Your strength as a rationalist is your ability to be more confused by fiction than by reality.

Yet, when a person of even moderate cleverness wishes to deceive you, this "strength" can be turned against you. Context is everything.

As Donald Gause asks in "Are Your Lights On?", WHO is it that is bringing me this problem?

comment by gwern · 2012-08-09T01:30:41.910Z · LW(p) · GW(p)

Alas, belief is easier than disbelief; we believe instinctively, but disbelief requires a conscious effort.

Looking through Google Scholar for citations of Gilbert 1990 and Gilbert 1993, I see 2 replications which question the original effect:

(While looking for those, I found some good citations for my fiction-biases section, though.)

comment by Serious_Shenanigans · 2013-07-11T10:29:22.057Z · LW(p) · GW(p)

Eliezer's model:

The Medical Establishment is always right.

Information given:

  • Person is feeling chest pain.
  • Paramedics say hospitalization is unnecessary.

Possible scenarios mentioned in the story:

  1. Person is feeling chest pain and is having a heart attack.
  2. Person is feeling chest pain but does not need to be hospitalized.
  3. Person is lying.

Between the model and the information given, only Scenario 1 can be ruled false; Scenarios 2 and 3 are both possible. If Eliezer is going to beat himself up for not knowing better, it should be because Scenario 3 did not occur to him -- not that Scenario 3 is the logical reality.

Replies from: falenas108
comment by falenas108 · 2014-02-10T14:24:41.598Z · LW(p) · GW(p)

The way you phrase it hides the crucial part of the story. Rephrasing:

  1. Person is telling the truth:
     a) They are having a heart attack, but the paramedics judged wrongly, dismissed it, and didn't take him to the hospital.
     b) They are not having a heart attack, the paramedics judged rightly, dismissed it, and didn't take him to the hospital.
  2. Person is lying.

Eliezer is saying that he should have known scenario 1 is wrong because regardless of whether or not the paramedics think it's legit, they would have taken the person to the hospital anyway. So, 1a and 1b must be wrong, leaving 2.

Or, if I were going to add to your model, I would add "The Medical Establishment always takes in the ambulance if they call for a medical reason." Then, when the information given is "Paramedics say hospitalization is unnecessary," that would have been a direct conflict between model and information, where Eliezer had to choose between rejecting the model and rejecting the information.

comment by KnaveOfAllTrades · 2014-03-03T01:03:31.998Z · LW(p) · GW(p)

I see two senses (or perhaps not-actually-qualitatively-different-but-still-useful-to-distinguish cases?) of 'I notice I'm confused':

(1) Noticing factual confusion, as in the example in this post. (2) Noticing confusion when trying to understand a concept or phenomenon, or to apply a concept.

Example of (2): (A) "Hrm, I thought I understood what, "Colorless green ideas sleep furiously" means when I first heard it; the words seemed to form a meaningful whole based on the way they fell together. But when I actually try to concretise what that could possibly mean, I find myself unable to, and notice that characteristic pang of confusion."

Example of (2): (B) "Hrm, I thought I understood how flight works because I could form words into intelligent-sounding sentences about things like 'lift' and 'Newton's third law'. But then when I tried to explain why a plane goes up instead of down, my word soup explained both equally well, and I noticed I was confused." (Compare, from the post: "I knew that the usefulness of a model is not what it can explain, but what it can't. A hypothesis that forbids nothing, permits everything, and thereby fails to constrain anticipation.")

Replies from: KnaveOfAllTrades
comment by KnaveOfAllTrades · 2014-06-07T02:56:29.726Z · LW(p) · GW(p)

It might be useful to identify a third type:

(3) Noticing argumentative confusion. Example of (3): "Hrm, those fringe ideas seem convincing after reading the arguments for them on this LessWrong website. But I still feel a lingering hesitation to adopt the ideas as strongly as lots of these people seem to have, though I'm not sure why." (Confusion as pointer to epistemic learned helplessness)

As in the parent to this comment, (3) is not necessarily qualitatively distinct (e.g. argumentative confusion could be recast as factual confusion: "Hrm, I'm confused by this hesitation I observe in myself to fully endorse these fringe ideas after seeing such seemingly-decisive arguments. Maybe this means something."). Observations of internal reaction are still observations about which one can be factually confused.

comment by [deleted] · 2015-02-10T00:34:32.757Z · LW(p) · GW(p)

Was a mistake really made in this instance? Is it not correct to conclude 'there was no problem'? Yes, the author did not realise the story was fictional; but what in his conclusion implied that the story was not fictional?

Furthermore, is it good to berate oneself because one does not immediately realise something? In this case, the author did not immediately realise the story was fictional. But evidently the author was already working toward that conclusion by throwing doubt on parts of the story. And the evidence the author had was obviously inconclusive; the story could have been fictional (and the lie could have been invented at several stages), or the complainant could perhaps have simply misinterpreted chest pains as something else, or perhaps the doctors could have in fact made a mistake etc. Given all that, it seems rather after-the-fact to conclude the "Rational" conclusion one should have reached was that the story was a fiction. Surely the "Rational" conclusion would be to suspend judgement pending further investigation; or perhaps judge, but judge lightly. In any case, the self-flagellation at the end of the article seemed unnecessary. Humans are not capable of permanently being "Rational" thinkers; to get to the "Rational" conclusions, it is often best to proceed in baby-steps we can take rather than "Rational" leaps prescribed by whatever vision of "Rationality" is imagined by the author.

comment by Paradoctor · 2015-02-12T05:40:27.964Z · LW(p) · GW(p)

This looks like an instance of the Dunning-Kruger effect to me. Despite your own previous failures in diagnosis, you still felt competent to give medical advice to a stranger in a potentially life-threatening situation.

In this case, the "right answer" is not an analysis of the reliability of your friend's account, it is "get a second opinion, stat". This is especially true seeing as how you believed the description you gave above.

If a paramedic tells me "it's nothing", I complain to his or her superiors, because that is not a diagnosis. Furthermore, I don't see in your description a claim that the paramedics said there's no need to worry even if the pain becomes worse later on, so it seems sensible for you to presume they didn't. So, even if the first assessment is presumed correct, it is not admissible to think that it extends to different evidence.

And if that doesn't convince you, compute the expectation value of probably being right in a chat vs. the small chance of being sued for wrongful death times everything you own and will ever earn.

comment by MarsColony_in10years · 2015-02-21T02:53:36.016Z · LW(p) · GW(p)

Of course, it's also possible to overdo it. If you hear something odd or confusing, and it conflicts with a belief that you are emotionally attached to, the natural reaction is to ignore the evidence that doesn't fit your worldview, thus missing an opportunity to correct a mistaken belief.

On the other hand, if you hear something odd or confusing, and it conflicts with a belief or assumption that you aren't emotionally attached to, then you shouldn't forget about the prior evidence in light of new evidence. The state of confusion should act as a trigger mechanism telling you to tally up all the evidence, and decide which piece doesn't fit.

comment by [deleted] · 2015-02-25T20:34:55.711Z · LW(p) · GW(p)

It is a design flaw in human cognition...

Since I think evolution makes us quite fit for our current environment, I don't think cognitive biases are design flaws. In the above example you imply that even though you had the information available to guess the truth, your guess was a different one and it was false; therefore you experienced a flaw in your cognition.

My hypothesis is that reaching the truth, or communicating it in the IRC channel, may not have been the end objective of your cognitive process. In this case, dismissing the issue as something that was not important anyway ("so move on and stop wasting resources on this discussion") was perhaps the "biological" objective, and as such it should count as correct, not a flaw.

If the above is true then all cognitive bias, simplistic heuristics, fallacies, and dark arts are good since we have conducted our lives for 200,000 years according to these and we are alive and kicking.

Rationality and our search to be LessWrong, which I support, may be tools we are developing to evolve in our competitive ability within our species, but not a "correction" of something that is wrong in our design.

Replies from: Nornagest, None, TheAncientGeek
comment by Nornagest · 2015-02-25T21:31:05.959Z · LW(p) · GW(p)

See, this is why it's a bad idea to use the language of design when talking about evolution. Evolution doesn't have a design. It optimizes locally according to a complex landscape of physical and sexual incentives, and in the EEA that usually would have favored fast and frugal heuristics. Often it still does; if you're driving a car or running away from a bear, you don't want to drop what you're doing and work out the globally optimal path before taking action. That's all well and good.

But things have changed in the last 12,000 years; we spend more time doing long-range planning and optimization work, for example, and less time running away from tigers and hitting each other on the head with clubs. Evolution works slowly, and we haven't reached a local maximum for our environment yet, nor are we likely to in the near future as we continue to reshape it; we're left with a set of cognitive tools, therefore, that are often poorly adapted to our goals. It's these that we seek to compensate for, when and where doing so is appropriate.

While our goals are informed by biology, though, their biological influences are no "truer", no more "correct", than any other. We certainly shouldn't treat them as gospel; if they turn out to be in tension with the environment, as in many cases they have, evolution will be quite happy to select against them.

comment by [deleted] · 2015-02-25T21:44:00.795Z · LW(p) · GW(p)

Since I think evolution makes us quite fit to our current environment I don't think cognitive biases are design flaws

They're design flaws insofar as there are far better possibilities. Just because something doesn't fail entirely doesn't mean its design is any good.

If the above is true then all cognitive bias, simplistic heuristics, fallacies, and dark arts are good since we have conducted our lives for 200,000 years according to these and we are alive and kicking.

This is the same as above. This might also be relevant.

Rationality and our search to be LessWrong, which I support, may be tools we are developing to evolve in our competitive ability within our species, but not a "correction" of something that is wrong in our design.

Many of us do not (consciously) want to gain competitive advantages compared to other people but rather raise the sanity waterline.

comment by TheAncientGeek · 2015-02-26T12:01:19.376Z · LW(p) · GW(p)

If the above is true then all cognitive bias, simplistic heuristics, fallacies, and dark arts are good

Good for survival, but not for truth-seeking. Epistemic and instrumental rationality are different things.

Replies from: dxu
comment by dxu · 2015-02-26T23:39:55.003Z · LW(p) · GW(p)

And even in terms of survival, human neurology isn't that great. It was good enough to get our species to survive until now, but it's nowhere close to optimal.

comment by [deleted] · 2015-08-08T06:33:18.939Z · LW(p) · GW(p)

Is EY saying that if something doesn't feel right, it isn't? I've been working on this rationalist koan for weeks and can't figure out something more believable! I feel like a doofus!

Replies from: Wes_W
comment by Wes_W · 2015-08-08T06:39:39.544Z · LW(p) · GW(p)

No. Two possibilities, not just one:

"EITHER YOUR MODEL IS FALSE OR THIS STORY IS WRONG."

comment by Thomas Eisen (thomas-eisen) · 2019-10-10T10:16:21.127Z · LW(p) · GW(p)

This article actually made me question "Wait, is this even true?" when I read an article with weird claims; then I research whether the source is trustworthy and sometimes, it turns out that it isn't.

comment by Nathan Young · 2024-04-19T16:26:25.049Z · LW(p) · GW(p)

Trying to understand this.

I *knew* that the usefulness of a model is not what it can explain, but what it can’t. A hypothesis that forbids nothing, permits everything, and thereby fails to constrain anticipation.

I think what Yud means there is that a good model will break quickly. It only explains a very small set of things because the universe is very specific. So it's good that it doesn't explain many many things.

It's a bit like David Deutsch arguing that models should be sensitive to small changes.  All of their elements should be important.