De-Centering Bias

post by Chris_Leong · 2017-10-18T23:24:01.786Z · LW · GW · 10 comments

Summary: A perspective that synthesises awareness of biases with other considerations, including game theory, virtue ethics, and knowledge of your own limitations.

Intro: Adopting a way of thinking in which you are aware of your own biases is clearly important, and it is one of the areas of rationality that rests on the most solid evidential grounds. However, this also needs to be reconciled with evolutionary psychology arguments that our psychological functioning has evolved to maximise our reproductive fitness, a big part of which is survival. This post attempts to synthesise these two views. I would also like to suggest a new term, De-Centering Bias, to describe this technique, given how it marks a shift from bias being the central explanation to bias having to share the stage with other considerations. The best way to understand this is by example:

(This post was originally titled Post-Bias Thinking (or Post-Bias-ism). Unsurprisingly, this was controversial, so I decided to rename it.)

Example 1: One of the most famous experiments in psychology is the Stanford Marshmallow Experiment, which was originally interpreted as showing that children who could resist eating one marshmallow now for the promise of two later tended to have better life outcomes. However, later interpretations showed that eating the marshmallow immediately could actually be rational for children in environments where such promises were not reliable, and that the presence of such an environment could explain the results by being a common cause of both the impatience and the worse life outcomes.
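To make the "rational impatience" reading concrete, here is a minimal toy calculation (my own illustration with made-up numbers, not from the original studies): if the promise of a second marshmallow is only honoured with probability p, then waiting yields 2p marshmallows in expectation, so waiting only beats eating now when p exceeds 1/2.

```python
# Toy expected-value model of the marshmallow choice (illustrative only).
# p is the child's subjective probability that the adult's promise of a
# second marshmallow will actually be honoured.

def expected_marshmallows(wait: bool, p: float) -> float:
    """Eat now: 1 marshmallow for sure. Wait: 2 with probability p."""
    return 2 * p if wait else 1.0

for p in (0.2, 0.5, 0.8):
    now = expected_marshmallows(False, p)
    later = expected_marshmallows(True, p)
    print(f"p={p}: eat now={now}, wait={later:.1f} ->",
          "wait" if later > now else "eat now")

# In an unreliable environment (p < 0.5), eating immediately maximises the
# expected payoff, so "failing" the test is the rational choice.
```

On this reading, a child who grew up around unreliable promises is not exhibiting a pure failure of self-control; they are applying a decision rule calibrated to their environment.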

Example 2: Revenge often makes people take actions that harm both themselves and the other person. If we model an actor as a self-interested rational agent, then we will conclude that they are being "irrational". On the other hand, if an actor is willing to go to such extreme lengths to punish someone who wrongs them, then there is a strong incentive not to wrong them in the first place (Scott has argued that revenge could even be considered charitable, because it also creates a disincentive within the wider community). In the Most Convenient World, the actor gains the benefits of the threat existing whilst never having to carry it out.
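A toy payoff model (numbers invented purely for illustration; this is a sketch, not anything from the post or from Scott's essay) shows the mechanism: a commitment to costly punishment makes wronging unprofitable, so a rational wrongdoer abstains and the punishment never has to be carried out.

```python
# Toy deterrence game (illustrative numbers only).
# Wronging the actor gains the wrongdoer 5 and costs the actor 5.
# Retaliation costs the actor a further 2 but costs the wrongdoer 10.

def wrongdoer_payoff(wrongs: bool, retaliation: bool) -> int:
    if not wrongs:
        return 0
    return 5 - 10 if retaliation else 5

# Without a credible threat, wronging pays (+5), so it happens.
print(wrongdoer_payoff(wrongs=True, retaliation=False))  # 5

# With a credible threat, wronging nets -5, so a rational wrongdoer
# abstains, and the actor never pays the cost of actually retaliating.
print(wrongdoer_payoff(wrongs=True, retaliation=True))   # -5
```

The "irrational" disposition only looks irrational if we score the final move in isolation; scored over the whole game, the disposition to retaliate changes the other player's incentives before any wrong occurs.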

Example 3: The sunk cost fallacy is the tendency of humans to want to continue a project that they have invested a lot of resources (time, money, effort) into, even if the project is not valuable enough to be worth the resources required to finish it. When discussing this fallacy, we need to be aware that humans are not rational agents: we will often be too lazy to engage in activities that would be worthwhile. Wanting to continue projects in which we have invested large amounts of time allows us to counter this tendency. So if we were able to press a button and remove sunk cost considerations from our brain, I would not be surprised if this made us less effective as agents (as Eliezer says, it is dangerous to be half a rationalist, link; there's a better link somewhere, but I can't find it). But further than that, taking a Virtue Ethics approach, every time you complete a project you become more like the kind of person who completes projects, so sometimes it might be worth completing a project just for the sake of having completed it, rather than for the actual value the project provides. In this case, the bias seems to make us more rational by mitigating a different way in which we are irrational.
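As a sketch of how this correction could work (a toy model with invented numbers and a hypothetical "laziness" parameter, not a claim about real cognition): if an agent systematically underweights a project's remaining value, it will abandon projects it should finish, and adding some weight to already-invested effort can push the decision back toward the rational one.

```python
# Toy model of sunk cost as a corrective bias (invented numbers only).
# laziness < 1 means the agent underweights the true value of finishing;
# sunk_weight > 0 means invested effort biases the agent toward finishing.

def will_finish(remaining_value: float, cost_to_finish: float,
                invested: float, laziness: float = 0.6,
                sunk_weight: float = 0.0) -> bool:
    perceived = laziness * remaining_value + sunk_weight * invested
    return perceived > cost_to_finish

# A project genuinely worth finishing (value 10 > remaining cost 8):
print(will_finish(10, 8, invested=5))                   # False: laziness wins
print(will_finish(10, 8, invested=5, sunk_weight=0.5))  # True: sunk cost corrects it
```

Of course, the same weight can also push the agent to finish projects that genuinely aren't worth it; the point is only that deleting the bias without also deleting the laziness could leave the agent worse off.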

De-Centering Bias is not:

The belief that "Bias-Centered Thinking" is wrong all or even most of the time, as opposed to being a challenge that forces us to refine our thoughts further.

The belief that all biases have benefits attached. Sometimes attributes evolve merely as side effects. Sometimes they harm us, but not enough to affect our reproductive success (h/t Alwhite).

Limited to game-theoretic considerations. See the discussion of the sunk cost fallacy above.

In conclusion, De-Centering Bias is incredibly simple. All I'm asking you to do is pause for a second after you've been told that something is a bias and think about whether there are any countervailing considerations. I believe this is an important area to examine, as you could probably fill an entire sequence by applying this analysis to different biases.

Suggestions for Further Discussion: What is something that is generally considered a bias or fallacy, but where you believe that there are also other considerations?

10 comments

Comments sorted by top scores.

comment by Gordon Seidoh Worley (gworley) · 2017-10-19T01:49:27.851Z · LW(p) · GW(p)

It seems generally the case that we should expect biases to be doing something useful, because otherwise they would have been selected against, even if they produce errors on the margins.

comment by the gears to ascension (lahwran) · 2017-10-19T07:09:38.302Z · LW(p) · GW(p)

Would you be willing to rename it to remove the "post-"? E.g., "Thinking without 'bias'". I want to push back against the non-epistemic demand to believe something that the prefix "post-" creates.

Replies from: Chris_Leong
comment by Chris_Leong · 2017-10-19T13:01:05.091Z · LW(p) · GW(p)

As I said, I'm not entirely happy with the name, but "Thinking without 'bias'" wouldn't work for me. It works well as the title of a post, but not as the name of a concept or way of thinking. Probably the best alternative I have at the moment is "Bias contingency", because the argument is really that, depending on contingent circumstances, what appears to be a bias may not actually be one. This is more accurate, but also a bit of a mouthful.

EDIT: After more thought I came close to suggesting Bias Critical Analysis, which could be shortened to Bias Critical. But the original term is starting to grow on me. I know that "post" is sometimes used to imply that the non-post view is outdated, but technically it just means something that comes after. We started off with Bias-Centered Thinking on LW, but now that we have learned more, we can take a more nuanced view that asks Bias-Centered Thinking to share the stage with other perspectives. Hence "post".

EDIT 2: I finally ended up renaming it to De-Centering Bias. I feel that this name somehow manages to be both accurate and simple enough to remember.

comment by zlrth · 2017-11-07T03:21:00.412Z · LW(p) · GW(p)
(as Eliezer says, it is dangerous to be half a rationalist, link; there's a better link somewhere, but I can't find it)

This might be it: http://lesswrong.com/lw/3h/why_our_kind_cant_cooperate/

Excerpt:

And you do not warn them to scrutinize arguments they agree with just as hard as they scrutinize incongruent arguments for flaws.  So they have acquired a great repertoire of flaws of which to accuse only arguments and arguers who they don't like.  This, I suspect, is one of the primary ways that smart people end up stupid. 

(It also mentions that it's dangerous to be half a rationalist.)

comment by habryka (habryka4) · 2017-10-19T02:52:30.242Z · LW(p) · GW(p)

Maybe I am just generally averse to prepending "post" to anything, partially because of its social implications, but I really dislike the term "post-biases thinking".

Other than that, great post!

Replies from: Chris_Leong
comment by Chris_Leong · 2017-10-19T03:02:50.739Z · LW(p) · GW(p)

As I said, I'm not the biggest fan of the term, but I believe having a term is important.

For example, the first time I saw the New Interpretation of the Marshmallow Test and Revenge as a Charitable Act, I was super impressed by these ideas and thought, "Wow, I never would have thought of that!" However, once you coin a general concept like this, it becomes much more likely that you will be able to come up with these thoughts on your own.

EDIT: I decided to rename this post to De-Centering Bias to make it less controversial.

comment by TAG · 2018-05-17T10:24:35.374Z · LW(p) · GW(p)

it marks a shift from bias from being the central explanation, to bias having to share the stage with other considerations.

The second "from" seems to be a mistake.

Replies from: Chris_Leong
comment by Chris_Leong · 2018-05-21T00:34:13.873Z · LW(p) · GW(p)

Fixed

comment by alwhite · 2017-10-19T04:38:22.754Z · LW(p) · GW(p)

To me, this is setting up a false dichotomy: rational thought vs. biased thought. Or, said differently, the assumption that rational thought automatically chooses against what a bias would choose. But thinking rationally isn't simply taking the opposite of bias. I think bias and thinking happen on different planes and interact in weird ways.

As an aside, I find it extremely strange to say "aware of one's own biases". Bias is all about decisions that happen seemingly outside our awareness: decisions we make for what we think is one reason but is really a different reason, with our stated reason being a mere rationalization. If I were aware of my bias, I would take steps not to make that decision, and would therefore no longer be biased, except in ways of which I was still unaware.

An example: let's say I'm biased against hiring women. In the company I hire for, I hire men way more often than I hire women, if I hire women at all. I could even be outspoken about this and claim that I don't believe women are as effective workers as men. This statement does not mean I'm aware of a bias; it is a declaration of belief. For me to be aware of my bias means to acknowledge that my belief is incorrect and is leading to negative outcomes.

I can't effectively model a person who is aware of their own wrong belief (bias) and still chooses to believe it. ("I'm aware I want to buy that shirt because the store placed it at the front, so I'm going to buy it anyway for that reason.") I just don't see that happening. I think we can only acknowledge that bias might be happening and needs to be tested.

Back to the original train of thought with this understanding of bias. Bias comes to conclusions through processes that are not the evaluation of available data. Confirmation bias is seeking out only a subset of the available data. The reason we do this is that convincing ourselves we are right feels good. The decision here was made not by thinking, but by feeling good. Confirmation bias might still lead to the right conclusion; bias does not guarantee rightness or wrongness, and that's the issue.

As to the survivorship of biases, it doesn't really make sense to believe that only useful things survive and all negative things die off. Surviving is about lasting long enough to reproduce, which means only traits with severe and quick consequences die off. Benign errors or slow-acting errors can easily survive while serving no beneficial function. Many of the fallacies fall into this category: they don't lead to immediate death, so there's no mechanism to get rid of them. Sunk cost is wasted effort; I don't see death in that. Revenge either happens on a level short of death, or so rarely that it doesn't impede population growth. The marshmallow test doesn't look like a bias, just an experiment. It's never a good idea to take a single study as truth, but unfortunately the social sciences do this a lot.

I think we also need to consider the dangerousness of the environment. Really harsh environments don't tolerate many mistakes and so organisms that live there are much simpler. More complex organisms have way more opportunities for mistakes and you find them in gentler environments. This feeds back into surviving long enough to reproduce. Errors and mistakes don't have to serve a function to exist, they can simply be tolerated in a gentle environment.

Replies from: Chris_Leong
comment by Chris_Leong · 2017-10-19T07:08:00.821Z · LW(p) · GW(p)

"To me, this is setting up a false dichotomy" - I set up this dichotomy as a way of identifying a middle path. To clear "post-bias thinking" is not so much about rejecting rationalism's tendency to think in terms of bias, but about realising that there are often counter-veiling considerations.

"If I were aware of my bias, I would take steps to not make that decision and would therefore no longer be biased, except in ways that I was currently unaware." - Not necessarily. You might end up overcorrecting if you tried that.

"As to the survivorship of biases, it doesn't really make sense to believe that only useful things survive and all negative things die off" - Indeed, but it is useful to consider whether a bias might also have benefits attached. Because sometimes we might find that we were too quick to judge.