Do We Believe Everything We're Told?

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-10T23:52:46.000Z · LW · GW · Legacy · 41 comments

Some early experiments on anchoring and adjustment tested whether distracting the subjects—rendering subjects cognitively “busy” by asking them to keep a lookout for “5” in strings of numbers, or some such—would decrease adjustment, and hence increase the influence of anchors. Most of the experiments seemed to bear out the idea that being cognitively busy increased anchoring and, more generally, contamination.

Looking over the accumulating experimental results—more and more findings of contamination, exacerbated by cognitive busyness—Daniel Gilbert saw a truly crazy pattern emerging: Do we believe everything we’re told?

One might naturally think that on being told a proposition, we would first comprehend what the proposition meant, then consider the proposition, and finally accept or reject it. This obvious-seeming model of cognitive process flow dates back to Descartes. But Descartes’s rival, Spinoza, disagreed; Spinoza suggested that we first passively accept a proposition in the course of comprehending it, and only afterward actively disbelieve propositions which are rejected by consideration.

Over the last few centuries, philosophers pretty much went along with Descartes, since his view seemed more, y’know, logical and intuitive.1 But Gilbert saw a way of testing Descartes’s and Spinoza’s hypotheses experimentally.

If Descartes is right, then distracting subjects should interfere with both accepting true statements and rejecting false statements. If Spinoza is right, then distracting subjects should cause them to remember false statements as being true, but should not cause them to remember true statements as being false.

Gilbert, Krull, and Malone bear out this result, showing that, among subjects presented with novel statements labeled true or false, distraction had no effect on identifying true propositions (55% success for uninterrupted presentations, vs. 58% when interrupted); but did affect identifying false propositions (55% success when uninterrupted, vs. 35% when interrupted).2

A much more dramatic illustration was produced in follow-up experiments by Gilbert, Tafarodi, and Malone.3 Subjects read aloud crime reports crawling across a video monitor, in which the color of the text indicated whether a particular statement was true or false. Some reports contained false statements that exacerbated the severity of the crime; other reports contained false statements that extenuated (excused) the crime. Some subjects also had to pay attention to strings of digits, looking for a “5,” while reading the crime reports—this being the distraction task to create cognitive busyness. Finally, subjects had to recommend the length of prison terms for each criminal, from 0 to 20 years.

Subjects in the cognitively busy condition recommended an average of 11.15 years in prison for criminals in the “exacerbating” condition, that is, criminals whose reports contained labeled false statements exacerbating the severity of the crime. Busy subjects recommended an average of 5.83 years in prison for criminals whose reports contained labeled false statements excusing the crime. This nearly twofold difference was, as you might suspect, statistically significant.

Non-busy participants read exactly the same reports, with the same labels, and the same strings of numbers occasionally crawling past, except that they did not have to search for the number “5.” Thus, they could devote more attention to “unbelieving” statements labeled false. These non-busy participants recommended 7.03 years versus 6.03 years for criminals whose reports falsely exacerbated or falsely excused.
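To make the contrast explicit, here is a quick back-of-the-envelope comparison of the two conditions, using only the numbers reported above; the near-twofold gap shows up only when subjects are cognitively busy.

```python
# Mean recommended prison terms (in years) from the Gilbert, Tafarodi, and Malone
# experiment, as reported above, for reports whose false statements exacerbated
# vs. excused the crime.
busy = {"exacerbating": 11.15, "excusing": 5.83}
non_busy = {"exacerbating": 7.03, "excusing": 6.03}

for label, terms in (("busy", busy), ("non-busy", non_busy)):
    ratio = terms["exacerbating"] / terms["excusing"]
    print(f"{label}: {ratio:.2f}x harsher when the false statements exacerbated the crime")
# busy: 1.91x harsher ...
# non-busy: 1.17x harsher ...
```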

Gilbert, Tafarodi, and Malone’s paper was entitled “You Can’t Not Believe Everything You Read.”

This suggests—to say the very least—that we should be more careful when we expose ourselves to unreliable information, especially if we’re doing something else at the time. Be careful when you glance at that newspaper in the supermarket.

PS: According to an unverified rumor I just made up, people will be less skeptical of this essay because of the distracting color changes.

1See Robin Hanson, “Policy Tug-O-War,” Overcoming Bias (blog), 2007, http://www.overcomingbias.com/2007/05/policy_tugowar.html.

2Daniel T. Gilbert, Douglas S. Krull, and Patrick S. Malone, “Unbelieving the Unbelievable: Some Problems in the Rejection of False Information,” Journal of Personality and Social Psychology 59, no. 4 (1990): 601–613.

3Daniel T. Gilbert, Romin W. Tafarodi, and Patrick S. Malone, “You Can’t Not Believe Everything You Read,” Journal of Personality and Social Psychology 65, no. 2 (1993): 221–233.

41 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Nick_Tarleton · 2007-10-11T00:19:40.000Z · LW(p) · GW(p)

"Some reports contained false statements that exacerbated the severity of the crime"

Should "false" be highlighted here?

This is scary.

comment by Constant2 · 2007-10-11T00:40:48.000Z · LW(p) · GW(p)

Spinoza's view seems on the face of it much more likely than Descartes's, because it is much easier to implement. Anyone who has programmed knows that the easiest way to write a program to deal with an input is just to accept it, and that a check can be computationally expensive. Furthermore, how is one to understand a sentence without at least modeling the belief that the sentence is intended to elicit, so that one might at least understand what it means? (The sentence itself is merely a character/phoneme string and so does not yield meaning intrinsically.) And the obvious and readily available way to model such a belief is to actually enter it. It is much easier simply to enter into the actual brain state associated with the belief, maybe adding a flag to mark it as nonserious, than to enter into a wholly different state. We may infer from child studies that the higher-order skill of contemplating a belief without holding it is not immediately acquired, for it is only at age 4 or so (I think) that a child is able to understand that others have beliefs that differ from reality.
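A minimal sketch of the asymmetry Constant describes, with a hypothetical `validate` check standing in for active consideration: acceptance is the cheap default path, while rejection requires an extra, potentially expensive operation that a distracted mind may never run.

```python
def accept_then_maybe_unbelieve(claim, beliefs, validate):
    """Spinoza-style: store the claim immediately; retract it only if a later check fails."""
    beliefs.add(claim)           # acceptance is the default, nearly free step
    if not validate(claim):      # the expensive step -- the one that gets skipped when "busy"
        beliefs.discard(claim)   # unbelieving is a second, active operation


def evaluate_then_accept(claim, beliefs, validate):
    """Descartes-style: nothing is stored until evaluation has finished."""
    if validate(claim):
        beliefs.add(claim)
```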

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-03-18T02:59:42.603Z · LW(p) · GW(p)

Sure, you have to catch the baseball, but that doesn't mean you have to eat the baseball

comment by Michael_Rooney · 2007-10-11T01:53:20.000Z · LW(p) · GW(p)

Did you just believe that Descartes was modeling "cognitive-process flow" because some psychologist told you so? Or is it possible that Descartes was, y'know, prescribing how rationalists should approach belief, rather than how we generally do?

Replies from: gwern
comment by gwern · 2013-05-14T01:46:09.317Z · LW(p) · GW(p)

No, it's not possible, as one would know if one had 'just', 'y'know', looked up the citations in the papers and read what Descartes himself said in his Fourth Meditation:

Whereupon, regarding myself more closely, and considering what my errors are (which alone testify to the existence of imperfection in me), I observe that these depend on the concurrence of two causes, viz, the faculty of cognition, which I possess, and that of election or the power of free choice,—in other words, the understanding and the will. For by the understanding alone, I [neither affirm nor deny anything but] merely apprehend (percipio) the ideas regarding which I may form a judgment; nor is any error, properly so called, found in it thus accurately taken.

...the power of will consists only in this, that we are able to do or not to do the same thing (that is, to affirm or deny, to pursue or shun it), or rather in this alone, that in affirming or denying, pursuing or shunning, what is proposed to us by the understanding, we so act that we are not conscious of being determined to a particular action by any external force.

Seems pretty clearly descriptive and not normative... no 'should' about it.

comment by AnneC · 2007-10-11T03:52:40.000Z · LW(p) · GW(p)

Spinoza suggested that we first passively accept a proposition in the course of comprehending it, and only afterward actively disbelieve propositions which are rejected by consideration.

That sounds like what Sam Adams was saying at the Singularity Summit -- the idea of "superstition" being essential to learning in some respects.

comment by Jeremy_McKibben-Sanders · 2007-10-11T04:45:03.000Z · LW(p) · GW(p)

This reminds me of a proof I was working on the other day. I was trying to show that a proposition (c) is true, so I used the following argument.

If (1) is true, then either (a) is true or (c) is true. If (2) is true, then either (b) is true or (c) is true. (a) and (b) cannot both be true. (1) and (2) are true; therefore (c) must be true.

This seems to follow Descartes' model of consideration and then acceptance of the proposition (c). However, I could have saved myself about half a page of space if I had simply started out by rejecting (c) and then waiting for a contradiction to "appear."
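A brute-force check of that inference, encoding the propositions as booleans (just a sketch to make the shortcut visible), confirms that (c) holds in every assignment satisfying the premises, which is exactly the contradiction the shorter reductio proof waits for.

```python
from itertools import product

# Premises: (1) -> (a or c), (2) -> (b or c), not (a and b), (1), (2).
for a, b, c, p1, p2 in product([True, False], repeat=5):
    premises = [
        (not p1) or (a or c),   # if (1) is true, then (a) or (c)
        (not p2) or (b or c),   # if (2) is true, then (b) or (c)
        not (a and b),          # (a) and (b) cannot both be true
        p1, p2,                 # (1) and (2) are true
    ]
    if all(premises):
        assert c                # (c) is forced; assuming not-(c) yields a contradiction
```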

Of course this is quite the opposite of the Spinoza model, but like Constant said, it makes sense that you can save time and brain power by actively modeling a belief and then seeing what follows. As for why acceptance is the default, I'm not exactly sure. Perhaps it is simply quicker to accept a proposition rather than to waste time looking for its opposite.

comment by Anna3 · 2007-10-11T05:13:26.000Z · LW(p) · GW(p)

So doesn't this tie in well with your previous article about the denier's dilemma? It seems, if Gilbert/Spinoza are right, that the CDC mythbusters problem of people mis-remembering as "true" the myths presented by the CDC, is an example of this mechanism (strengthened by reinforcement effects of re-encountering the myth).

comment by Hugo_Mercier · 2007-10-11T13:23:21.000Z · LW(p) · GW(p)

I would just like to point out that this paper: http://www.blackwell-synergy.com/doi/abs/10.1111/j.0956-7976.2005.01576.x titled 'believe it or not' claims to refute the strongest of Gilbert's ideas (and rightly so in my view)

Replies from: gekaklam
comment by gekaklam · 2020-12-14T10:48:27.841Z · LW(p) · GW(p)

The link from this reply now points to a steroids page. From the DOI in the link, I found the article here:
https://journals.sagepub.com/doi/abs/10.1111/j.0956-7976.2005.01576.x

(for anyone interested and still looking at these comments 13y later ;-) )

comment by Sebastian_Hagen2 · 2007-10-11T16:59:25.000Z · LW(p) · GW(p)

One of the most obvious examples of commonly encountered unreliable information is advertisements. Gilbert's results suggest that knowing that the information in advertisements is highly unreliable doesn't make you immune to their effects. This suggests that it's a good idea to avoid perceiving advertisements entirely, especially in situations where you're trying to concentrate on something else. The obvious way to do this is to aggressively use ad-blockers wherever possible; unfortunately there are still media where this isn't practical.

comment by Kaj_Sotala · 2007-10-11T18:35:52.000Z · LW(p) · GW(p)

What about statements that are so loaded to their listeners that they're rejected outright, with seemingly no consideration? Are they subject to the same process (and have such outrageous implications that they're rejected at once), or do they work differently?

Replies from: tiarat2
comment by tiarat2 · 2021-05-28T10:45:02.943Z · LW(p) · GW(p)

This paper gives a logical account. Excerpt:

To take a more extreme example, when we read the following:

You’re a tree.

we must believe this claim as we do. But our belief only lasts the fraction of a second that it takes us to conclude that we're not a tree, and we'll therefore likely have no recollection of it.

So, although we always believe a claim upon it entering our mind - whether it was produced by our own mind or someone else’s - that belief can also then be replaced with equal ease, and possibly by an immediately preceding belief, and possibly within such a short period of time that we have no recollection of our brief belief. Therefore, upon being presented with this theory of belief formation, and then thinking about how we form beliefs in practice, we may falsely recall, or imagine, cases of us not believing claims upon them entering our mind. Also, as will be explained in part seven, the briefness of such beliefs is one of several reasons why our belief of every claim that enters our mind doesn't naturally come to our attention.

To think X is to believe X

Regarding claims, as explained, if claim X exists in our mind, then we must be either thinking X or thinking about X. Therefore, as we're simply thinking 'There's milk in the fridge', we're not thinking about this claim. That is, we're simply thinking about the existence of milk in the fridge, and not about this claim about the existence of milk in the fridge. Therefore, as we're simply thinking this claim, its content can't exist in our mind as the content of a claim, because that would involve thinking about the claim. And if, as we're simply thinking this claim, its content doesn't exist in our mind as the content of a mere claim - a mere representation - then the only other logical possibility is that it exists in our mind as reality. And to say that the content of a claim exists in our mind as reality is to say that we believe it. Therefore, simply thinking 'There's milk in the fridge' involves believing that there's milk in the fridge. And the same logic applies to our thinking any claim: thinking claim X necessarily involves believing X.

Replies from: Self_Optimization
comment by Self_Optimization · 2021-08-10T20:27:53.324Z · LW(p) · GW(p)

This link seems to be assuming that one's prior internal state does not influence the initial mental representation of data in any way. I don't have any concrete studies to share refuting that, but let's consider a thought experiment.

Say someone really hates trees. Like 'trees are the scum of the earth, I would never be in any way associated with such disgusting things' hates trees. It's such a strong hate, and they've dwelled on it for so long (trees are quite common, after all, it's not like they can completely forget about them), that it's bled over into nearly all of their subconscious thought patterns relevant to the subject. 

I would think it plausible that the example claim in the article you link wouldn't reach whatever part of this person's brain/mind encodes beliefs in the form "You're a tree". Instead, their subconscious would transform the input into "<dissonance>You're a <disgust>tree</disgust>.</dissonance>". Or perhaps the disgust at the term tree would inherently add the dissonance while the sentence was still being constructed from its constituent words.
Just as their visual recognition and language systems are translating the patterns of black and white into words and then a sentence before they reach their belief system, their preexisting emotional attachments would automatically be applied to the mental object before it was considered, causing their initial reaction to be disbelief rather than belief.

It may be more accurate to say we believe everything we think, even if only for a moment; and in most cases we do think what we read/hear in the instant we're perceiving it. But when the two are different I'd expect even our instantaneous reactions to reflect the actual thought, rather than the words that prompted it.

comment by Constant2 · 2007-10-11T19:57:58.000Z · LW(p) · GW(p)

Contrary to what many seem to believe, I consider advertising to be one of the least harmful sources of unreliable information. For one thing, the cacophony of advertisements sends us contradictory messages. "Buy my product." "No, buy my product." One might argue that even such contradictory messages have a common element: "buy something". However, I have not noticed that I spend less money now that I hardly ever put myself at the mercy of television advertising, so I have serious doubts about whether advertising genuinely increases a person's overall spending. I notice, also, that I do not smoke, even though I have seen plenty of advertisements for particular brands of cigarettes. The impact of all those cigarette advertisements on my overall spending on cigarettes has evidently been minimal.

For another, the message itself seems not all that harmful in most cases. For example, suppose that advertising is ultimately the reason that I buy Tide detergent rather than another brand of detergent. How much am I harmed by this? The detergents all do pretty much the same thing.

And in many specific cases, where people's behavior has been blamed on the nefarious influence of advertising, what I generally see is that the accuser has curiously neglected some alternative, very likely explanations. Smoking is attractive because it delivers a drug. Smoking was popular long before it was advertised. I suspect that no more than a very small fraction of smokers started smoking because of advertising.

comment by TGGP4 · 2007-10-11T21:00:29.000Z · LW(p) · GW(p)

I have heard that advertising mainly shifts consumers from one brand to another. In that sense it is wasteful and an economist could give an argument for taxing it. I happen to like the subsidy of media by advertisements, so I wouldn't advocate it.

comment by NancyLebovitz · 2007-10-12T00:44:59.000Z · LW(p) · GW(p)

If people are that much more trusting when they're distracted, then it's important not to multi-task if you need to evaluate what you're looking at. Maybe it's just important to not multi-task.

comment by Nick_Tarleton · 2007-10-12T02:04:48.000Z · LW(p) · GW(p)

In addition to advertisements, should we avoid fiction when we're distracted?

Replies from: taryneast, bruno-mailly
comment by taryneast · 2011-02-20T11:45:16.074Z · LW(p) · GW(p)

A good question (though I suspect the simple answer is "no").

It also brings up the question of whether this is why we generalise from fictional evidence so often.

comment by Bruno Mailly (bruno-mailly) · 2021-12-28T16:36:58.522Z · LW(p) · GW(p)

More generally, it seems we should avoid doing anything while distracted.

It makes sense that distraction would mess up our learning, as it makes attributing cause and consequence confusing.

But it may also mess up replaying our learned skills, as distraction is a big cause of accidents.

comment by nick2 · 2007-10-13T00:07:48.000Z · LW(p) · GW(p)

"Spinoza suggested that we first passively accept a proposition in the course of comprehending it, and only afterward actively disbelieve propositions which are rejected by consideration."

Whether this view is more accurate than Descartes's view depends on whether the belief in question is already commonly accepted. When, in the typical situation, a typical person Bob says "X is Y, therefore I will perform act A" or "X should be Y, therefore we should perform act A", Bob is not making a statement about X or Y; he is making a statement about himself. All the truth or reality that is required for Bob to signal his altruism is that it be probable that he believes that X is Y or that X should be Y. The probability of this belief depends far more on what else Bob and his peers believe than it does on the reality or truth of "X is Y".

comment by Gordon_Worley · 2007-10-13T13:40:17.000Z · LW(p) · GW(p)

Between teaching mathematics to freshmen and spending most of my time learning mathematics, I've noticed this myself. When presented with a new result, the first inclination, especially depending on the authority of the source, is to believe it and figure there's a valid proof of it. But occasionally the teacher realizes that they made a mistake and may even scold the students for not noticing, since it is incredibly obvious (e.g., changing something like ||z - z_0|| to ||z - z_1|| between steps, even though a few seconds' thinking reveals it to be a typo rather than a mathematical insight).

Sometimes (and for a few lucky people, most of the time) individuals are in a mental state where they are actively thinking through everything being presented to them. For me, this happens a few times a semester in class, and almost always during meetings with my advisor. And occasionally I have a student who does it when I'm teaching. But in my experience this is a mentally exhausting task and often leaves you think-dead for a while afterwards (I find I can go about 40 minutes before I give out).

All this leads me to a conclusion, largely from my experience with what behavior produces what effects, that in mathematics the best way to teach is to assign problems and give students clues when they get stuck. The problems assigned, of course, should be ones that result in the student building up the mathematical theory. It's certainly more time consuming, but in the end more rewarding, in terms of both emotional satisfaction and understanding.

Replies from: Elizabeth
comment by Elizabeth · 2010-11-28T06:38:25.866Z · LW(p) · GW(p)

As someone who spends a lot of time on the student side of those math classes (and as the student in the class who almost always catches those typographical errors), I suspect that there are students who notice the error but don't comment for social reasons (don't want to interrupt, don't want to be a know-it-all, don't want to be publicly erroneous in a correction, etc.). Your solution of giving students problems, while an excellent teaching tool, is not a particularly good test for this phenomenon because it fails to distinguish between students who really do miss the errors because they assume you are right and the students who noticed but didn't speak up, or those who simply weren't paying attention in the first place.

Replies from: taryneast
comment by taryneast · 2011-02-20T11:48:57.008Z · LW(p) · GW(p)

I agree. I think the social-pressure aspect is even more exaggerated in business settings where there are not only no rewards for pointing out errors, but where you are often actively chastised for causing a team-member to lose face.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-02-20T12:42:38.696Z · LW(p) · GW(p)

'Nuff said

This was put up approvingly by two people on my friendslist.

Replies from: taryneast
comment by taryneast · 2011-02-20T14:47:51.913Z · LW(p) · GW(p)

Brilliant blogpost, and quite correct.

There are certainly situations in which the pointing out of errors is not socially appropriate, and doesn't win you any friends.

When somebody's telling a joke or an interesting anecdote, you'll often find that nobody cares if the premises are correct. You'll tend to get along better if you bite your tongue - even if it is the 500th time you've heard that "you only use 10% of your brain" (for instance).

However... I do tend to find that getting along with people that don't want to know the truth is more energy-draining (for me)... just as I'm sure that if I let my own natural preference for truth take over... I'd be draining for them.

I find that "getting along with non-rational/truth-preferring people" is a tough skill... and involves a lot of compromise.

I'd love to see more articles on how to do this successfully (without going insane or compromising your values).

Also I'd like to point out that there really are situations in which you really do have to point out that somebody is just plain wrong... despite how uncomfortable it makes the other person feel.

That while the article is quite right that being patronising is not beneficial... there are many situations where "being right" is not about being patronising, but about making sure all the bases are covered.

This is often where IT-people clash with people such as their managers. Because really, sometimes code just can't do what they're asking, no matter how much they'd like us to "put on a can-do attitude".

Similarly, clients can give ambiguous or flat-out contradictory requirements... and these errors must be pointed out, regardless of whether the person loses face as a result, because IT has to make a profit just as much as the client does, and these kinds of errors are where later disputes arise. Nipping it in the bud by pointing out they're wrong is the best thing for your long-term survivability here.

Of course - there are ways and means of doing so to make sure that egos aren't bruised in the process... but that's another article (or two), I'm sure. :)

Replies from: Omegaile
comment by Omegaile · 2013-04-02T18:12:18.108Z · LW(p) · GW(p)

I think the blog post was basically speaking in favor of the charity principle.

Replies from: taryneast
comment by taryneast · 2013-04-10T23:20:25.977Z · LW(p) · GW(p)

I don't think I agree on that one.

The article isn't about choosing to reinterpret the other person's statements in a more favourable light.

It's about not sweating the small stuff, not drawing attention your way, and letting somebody else have fun without ruining it with detail that, in this social situation, is not actually necessary.

comment by Tim_Freeman · 2008-04-28T12:01:24.000Z · LW(p) · GW(p)

Hugo Mercier's citation above for "Believe it or Not" by Hasson et al. wants money to give you the article. The article is available for free from Hasson's home page at:

http://home.uchicago.edu/~uhasson/

The direct URL is:

http://home.uchicago.edu/~uhasson/Belief.pdf

Replies from: bigjeff5
comment by bigjeff5 · 2011-02-15T23:08:37.485Z · LW(p) · GW(p)

Update:

Hasson's home page: http://hasson.org

Direct URL for paper: http://www.behaviometrix.com/public_html/Hasson.belief.pdf

comment by Doug3 · 2008-10-21T18:22:28.000Z · LW(p) · GW(p)

I, for one, found the color changing text completely persuasive.

Replies from: MarkusRamikin, Martok
comment by MarkusRamikin · 2011-06-24T14:07:27.266Z · LW(p) · GW(p)

Funny, my brain just assumed it's all broken hyperlinks or something, and until the PS I didn't consciously realize there were any colors in the article.

comment by Martok · 2012-04-05T22:33:36.229Z · LW(p) · GW(p)

Me too, but I fear I may be primed to believe Eliezer, as his previous posts contained stuff that I had heard about before, granting him some advantage. Or it may be Authority...

Anyway: I find it interesting that a German newspaper mostly known for being the lowest form of journalism imaginable (but still highest-grossing) uses a similar technique in their "articles": they print more or less randomly chosen fragments in bold or italics. Could using confusing fonts really be enough to get people to "believe everything"?

Something else I noticed: all highlighted phrases in this article are negative. This may have primed against the positive effects here. Somebody should test this.

comment by roland · 2009-06-02T18:48:15.631Z · LW(p) · GW(p)

I'm amazed that Spinoza got it right at that time.

Replies from: diegocaleiro
comment by diegocaleiro · 2010-03-05T05:57:30.468Z · LW(p) · GW(p)

There are some millions of pages written by old philosophers, so sure, people can find true stuff that they guessed. This does not mean we should be amazed. At the moment we become amazed, we do not have available the non-amazing fact that Spinoza made 2367 mistakes in his written life. I'm as amazed by Spinoza as I am amazed by Nostradamus. It is not zero, but it wouldn't pay a book.

comment by mat33 · 2011-10-08T02:52:50.575Z · LW(p) · GW(p)

Well, no modern dictator I know of underestimates mass media.

And basic rights and freedoms, where they do work at all, do tend to work against excluding your opponents as an information source for the majority.

comment by MarsColony_in10years · 2015-03-29T02:46:39.202Z · LW(p) · GW(p)

I'm not sure I'd interpret the results quite like that. "We believe everything we're told" seems like a bit of an exaggeration. I don't have a deep-seated, strong belief that 8 x 7 x 6 x 5 x 4 x 3 x 2 x 1 ≈ 2,250. That's just a quick guess, based on the information currently floating around in my skull. If you asked me for another guess tomorrow, I might give a radically different answer.
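For reference, the exact product of that sequence is 8! = 40,320, so the quick ≈2,250 guess undershoots by more than a factor of ten; that gap is the point being made about off-the-cuff estimates.

```python
from math import prod

# Exact value of the product being guessed at above:
print(prod(range(1, 9)))       # 40320
print(round(40320 / 2250, 1))  # 17.9 -- the quick guess is low by a factor of ~18
```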

It seems like we just encounter a lot of information over the years, and it all gets tossed into the giant box that is our skull. Then something comes up (something we see or hear, a word, an idea we have... anything) and our brain quickly rummages through the box for related concepts. It's not a comprehensive search by any means; it's just a quick search that is heavily biased toward concepts at the top of the box (those added or used most recently). This is generally a useful bias, since it's likely to turn up relevant information quickly.

If some of the concepts that come up during the search have a [FALSE] tag attached, we'll ignore them, or maybe even treat them as counter-evidence to whatever we're evaluating. The problem is that sometimes we were only half-listening when we encountered certain information, and never attached a [FALSE] tag. Or maybe the [FALSE] tag wasn't attached well enough to stick. For example: "I remember two of my geeky friends arguing about whether glass was a slow-flowing liquid or a true solid, but I forget who wound up being correct when they finally googled it."
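A toy sketch of the retrieval process described above; the box, the recency bias, and the [FALSE] tag are all just this comment's metaphor, so every name here is illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    content: str
    is_false: bool = False   # the [FALSE] tag -- sometimes never attached
    recency: int = 0         # higher = added or used more recently

@dataclass
class Mind:
    box: list = field(default_factory=list)  # the "giant box that is our skull"

    def recall(self, keyword, depth=3):
        """A quick, non-comprehensive search biased toward recent items."""
        related = [m for m in self.box if keyword in m.content]
        related.sort(key=lambda m: m.recency, reverse=True)
        # Only the top few items get checked; an untagged falsehood sails through as true.
        return [m for m in related[:depth] if not m.is_false]
```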

But there are all sorts of other things attached to each bit of knowledge that's floating around in our brain, besides just a simplistic [FALSE] tag. We can remember where we heard it (college class, hearsay, scifi book, newspaper, peer-reviewed publication, etc.) and maybe even how we felt about it at the time (Were we surprised to learn it? Still skeptical afterward?). Ideally, we'll remember a lot of supporting evidence and ideas, and a few attempts to prove the notion false and how the tests failed.

The things we think of as our core beliefs tend not to be made up of only random hearsay. They tend to be based on ideas we are pretty sure about. They may have accumulated a bunch of weak supporting evidence in addition, over the years, due to confirmation bias. Even weaker beliefs (like those based on some source we read once and were pretty sure was reputable) require a basic amount of evidence.

Perhaps my argument is only about the meaning of the word "belief". After all, it seems arbitrary to declare some standard for our guesses at which point we are willing to call one a belief instead of a best guess. But in practice, that seems to be exactly what we do. I try to set my bar fairly high, and reserve judgement on a situation until I'm reasonably confident, but other people seem willing to form opinions on very little evidence, at the risk of turning out to be wrong. And that's fine, so long as our opinions are still evidence-based. It doesn't matter if the threshold is p>.99 or p>.95 or even p>.75, so long as we can agree on p and base our decisions on it.

But concentrating on errors, fallacies, heuristics, and biases that affect mainly our guesses seems like it would have limited value. Perhaps they are a way of catching errors early, before they propagate into deeply held beliefs. Or perhaps they would be useful for avoiding continuously adding small bits of support to our deeper beliefs (a form of confirmation bias). It would be extremely interesting to do a longitudinal case study, and track the development of a bad idea, from formation to conclusion. Say, from the journal of someone who came to believe in conspiracy theories or something similar. I wonder to what degree our natural human biases influence the long-term development of our opinions.

comment by xSciFix · 2019-05-01T16:31:18.580Z · LW(p) · GW(p)

Well I suppose I'm not going to be idly reading random tabloid headlines while waiting in the checkout line anymore for starters.

So is it possible to train one's brain such that it reflexively employs the Descartes method, as it were?

comment by KrisChibroski · 2021-03-20T03:33:26.610Z · LW(p) · GW(p)

I hate our brains sometimes, mine in particular. I just read "According to an unverified rumor I just made up, people will be less skeptical of this essay because of the distracting color changes." and thought, "Hmmm, I wonder if that could be true." When the first part literally said ACCORDING TO AN UNVERIFIED RUMOR I JUST MADE UP! It is embarrassing to say that it took me reading that more than once to realize how I just glossed over the first part, or just separated it from the rest of the text as two separate entities. First part: hehe funny joke. Second part: stated like a fact so that I did indeed "passively accept a proposition in the course of comprehending it, and only afterward actively disbelieve propositions which are rejected by consideration." 🤦🏼‍♀️

comment by momom2 (amaury-lorin) · 2024-11-04T16:56:44.148Z · LW(p) · GW(p)

distraction had no effect on identifying true propositions (55% success for uninterrupted presentations, vs. 58% when interrupted); but did affect identifying false propositions (55% success when uninterrupted, vs. 35% when interrupted)

If you are confused by these numbers (Why so close to 50%? Why below 50%?), it's because participants could pick from four options (corresponding to true, false, don't know, and never seen).
You can read the study; search for the keyword "The Identification Test".