What are some good examples of fake beliefs?

post by Adam Zerner (adamzerner) · 2020-11-14T07:40:19.776Z · LW · GW · 1 comment

This is a question post.


I'm reading through Rationality: From AI to Zombies again and just finished the section on Fake Beliefs [? · GW]. I understand the ideas and now I'm trying to ask myself whether they are useful.

To me, the biggest way they'd be useful is if I myself have some fake beliefs: it'd be good to identify and get rid of them. I don't think I do, though.

Another reason why they might be useful is to identify them in others and better understand what is going on. Actually, this doesn't seem particularly useful, but it at least is somewhat interesting/satisfying. Anyway, I'm having trouble thinking of good examples of fake beliefs in others. The examples in the blog posts seem pretty contrived and perhaps uncharitable to what is actually going on in their heads.

So, to wrap this question up, I am most interested in hearing about examples of fake beliefs that rationalists might be prone to having, but I am also interested in hearing any other examples.

Answers

answer by ChristianKl · 2020-11-15T18:17:43.262Z · LW(p) · GW(p)

I thought a bit about this question, and it's worth noting that it had gotten no engagement in the 19 hours after posting, when I started writing this comment.

I don't use "fake belief" as a category for myself when dealing with beliefs, and it's not a category that EY proposed in those sequence posts either. It's just a chapter heading used to bundle a bunch of related essays together.

A potentially dubious belief I dealt with recently was "corruption is always bad". I noticed some information that cast doubt on that belief.

When I look at it, I notice that it has the features of a belief that's taught early, in a moralizing way, before rational inquiry develops. There are strong social pressures for holding the belief. While those are warning signs, neither is direct evidence that the belief is wrong. Not really knowing what to do with it, I posted an LW question [LW · GW]. The topic seemed to be triggering enough that one commenter didn't want to engage with the meat of the discussion and just said "corruption is really bad", because it's important to defend that belief.

I don't think the category of "fake belief" would be helpful for dealing with the struggle around that question.

When it comes to the category of "belief in belief", I think there are examples that are typical for rationalists. "How to Measure Anything", for example, is a book that gets people into a state where they believe everyone should read it because they think the argument is internally consistent, not because it got them to improve their lives by starting to measure things, or because they have examples of quantifying uncertainty for major decisions being very valuable.

Bayesianism can be in that category for some people. Any of the applied rationality interventions can be advocated out of "belief in belief".

Some people believe that because OpenAI has fighting AI risk as part of its mission, OpenAI actually fights AI risk. It's easy to believe (especially for employees) and hard to think about what actually reduces AI risk (and I'm here not making a judgment call about whether OpenAI increases or decreases AI risk).

comment by Adam Zerner (adamzerner) · 2020-11-15T23:30:04.117Z · LW(p) · GW(p)

I don't use "fake belief" as a category for myself when dealing with beliefs, and it's not a category that EY proposed in those sequence posts either. It's just a chapter heading used to bundle a bunch of related essays together.

Eliezer says/implies that real beliefs have to be about anticipation control, and that you can call any other beliefs fake/improper. From Belief as Attire [? · GW]: 

"I have so far distinguished between belief as anticipation-controller, belief in belief, professing, and cheering. Of these, we might call anticipation-controlling beliefs 'proper beliefs' and the other forms 'improper beliefs'."

I don't think the category of "fake belief" would be helpful for dealing with the struggle around that question.

Yeah, it sounds like "corruption is always bad" involves anticipation control and making predictions. Something along the lines of "there aren't any cases where the utilitarian consequences of corruption are a net positive".

When it comes to the category of "belief in belief", I think there are examples that are typical for rationalists. "How to Measure Anything", for example, is a book that gets people into a state where they believe everyone should read it because they think the argument is internally consistent, not because it got them to improve their lives by starting to measure things, or because they have examples of quantifying uncertainty for major decisions being very valuable.

I think that "everyone should read the book" is a legitimate belief, because it implies the prediction that the benefits of reading the book are worthwhile. Whether or not it's true is different from whether it is a legitimate, anticipation-controlling belief.

For this to be an example of belief in belief it'd have to be something like "I say that everyone should read it but I don't actually anticipate that reading it will be worthwhile for people".

Replies from: ChristianKl
comment by ChristianKl · 2020-11-16T10:45:40.490Z · LW(p) · GW(p)

I think that "everyone should read the book" is a legitimate belief, because it implies the prediction that the benefits of reading the book are worthwhile. 

It's possible to believe that everyone should read a certain book because you are making predictions that reading the book will have certain consequences. 

It's also possible to believe that everyone should read a certain book without making clear predictions that open you up to experiences that could falsify that belief.

It's quite easy to adopt this kind of value-signaling belief in a way that doesn't make predictions.

answer by Adam Zerner (adamzerner) · 2020-11-16T06:37:43.955Z · LW(p) · GW(p)

Hell seems like a great example of a fake belief. If people actually anticipated being burned and tortured for eternity as punishment for sinning, well, I can't imagine them ever sinning, let alone sinning as often as they do in practice.

I vaguely recall hearing that in the past, people actually believed in hell and accordingly were neurotic about doing everything they could to avoid sinning.

Heaven seems like another example, for similar reasons, although to a lesser extent. I think the magnitude to which people are drawn toward eternal bliss is less than the magnitude with which they are repelled by eternal torture.

comment by Gordon Seidoh Worley (gworley) · 2020-11-20T03:35:30.744Z · LW(p) · GW(p)

This seems to point to a general category: beliefs necessary to prop up deontological ethics may be useful fake beliefs, in that they help motivate you to follow norms (which are hopefully good norms).

answer by FraserOrr · 2020-11-15T20:00:13.102Z · LW(p) · GW(p)

I have read neither the book nor the chapter you refer to, but I will comment on fake beliefs: they are evidently beneficial, or else they would not exist. I think that is because the goal of humans is not the achievement of rationality but other things (Maslow's hierarchy, for example). Rationality may lead to those benefits, but it is only one way to get them.

Let me offer you two specific examples:

The belief that the earth was created 6,000 years ago

Here in the USA a disturbingly large percentage of people hold this belief, which is plainly not true, since all the evidence in the earth and life sciences points to it underestimating the correct value by a factor of nearly a million.

So why do so many people hold that belief? I think there is good reason to, based on a simple cost-benefit analysis. On the one hand, what exactly is the cost of this belief? Almost no decision you make in your day-to-day life depends on the correct value of the age of the earth. I think the primary downside is that you might be mocked, or held in low esteem, for such a belief. But let me return to that in a moment.

What is the benefit of such a belief? Well, people believe it because the Bible and the teachers of the Bible say so. So by conforming to this belief they gain access to religion and its benefits. What are those? A lovely social group to be part of. A caring community to support you. An outlet for your charitable instincts. A pre-made moral code to help you resolve challenging moral issues. A community for your children that you trust as a good place to learn good things. It gives you a sense that you are not some bit of ephemeral fluff in the universe, but significant and important. And an eschatological view that both promises long-term victory and gives you comfort in the death of loved ones and of yourself.

As to the concern about "being held in low esteem", one of the brilliant features of religion is that this apparent negative can be turned into a positive: specifically, a sense of "persecution" that drives you more strongly toward your group attachment. It further emphasizes your "special-ness", as one of the persecuted ones.

Nobody who is not religious (Christian mainly, but probably other Abrahamic religions too; I'm not knowledgeable enough to comment) holds this view, but many who are religious do, because, quite rationally, the benefits outweigh the costs by a considerable amount.

The attraction of women to "bad boys"

This is of course a gross generalization, but there is no doubt that some cohort of women tends to be attracted to "bad boys". Why is that? I think in this case a rather different calculation is going on, this time much lower in the cognitive stack.

The downside of a bad boy is manifest: he is more likely to treat her poorly (than would, for example, a worshipful "nice guy", a quality that is itself often unattractive to women), more likely to be violent toward her, more likely to abandon her, and more likely to cheat on her.

But what is the upside? I think it is a combination of two things. On the one hand, there are secondary characteristics of the bad boy, namely confidence, strength, dominance, and miscellaneous other "alpha" attributes. These are attractive because the driving force here is sexual attraction, and the innate desire of the woman to pass on to her progeny genes that will maximize their reproductive capability (that, after all, is the bottom-line driving force, in our deep inner brain, behind sexual congress). However, there is a secondary factor: the non-alpha male, who offers the other side of reproductive success, namely the facility and resources to nurture the child, remains readily available to her, since the majority of non-alpha males have less sexual access and so must be willing to commit (and may prefer to commit) to get that access.

However, this all operates at a far lower level than the conscious mind. No doubt we have all experienced a longing for a partner that we know on a rational level is a very poor choice, but our deep inner longings often drive us off course.

Since this alpha-male/beta-male/female thing can be controversial, let me suggest another example of our biological urges overcoming our rationality, one we all encounter: the urge to eat food that we know is unhealthy for us. It can be argued that food considered unhealthy is only unhealthy in the high-availability, long-lived, famine-free world we inhabit, but our bodies did not evolve for such an environment and so crave foods more suited to the ancestral human condition. Here again, our biology drives us away from rational choices to satisfy something much deeper.

In Conclusion

So I have offered here two cases where rationality is not the main goal. On the one hand, the secondary benefits of an irrational choice might outweigh the benefits of a rational choice. On the other, our assumption that rationality drives our decisions is often at cross purposes with the real, deep biological drives that influence far more of our choices than we might imagine.

comment by Adam Zerner (adamzerner) · 2020-11-15T23:36:04.070Z · LW(p) · GW(p)

they are evidently beneficial, or else they would not exist

There are many examples of things about the design of humans that exist but are not beneficial.

our assumption that rationality drives our decisions is often at cross purposes with the real, deep biological drives that influence far more of our choices than we might imagine

Rationality does not assume this. It very much acknowledges that humans are flawed and have irrational drives.

Let me offer you two specific examples:

Both seem like legitimate beliefs to me, where legitimate means that there is proper anticipation control [? · GW], or a prediction being made.

1 comment

Comments sorted by top scores.

comment by Gordon Seidoh Worley (gworley) · 2020-11-20T03:38:32.049Z · LW(p) · GW(p)

I think an important issue to keep in mind about fake beliefs is that they may be locally useful but not globally useful, i.e. they might help you for a while but you'll eventually have to unlearn them to get out of local maxima.