Brain structure and the halo effect

post by saph · 2012-02-18T15:10:08.331Z · 18 comments

Contents

  Introduction
  Brain structure and the halo effect
  Conclusion

Introduction

When people on LW want to explain a bias, they often turn to evolutionary psychology. For example, Lukeprog writes:

Human reasoning is subject to a long list of biases. Why did we evolve such faulty thinking processes? Aren't false beliefs bad for survival and reproduction?

I think that "evolved faulty thinking processes" is the wrong way to look at it, and I will argue that some biases are the consequence of structural properties of the brain, which 'cannot' be affected by evolution.

Brain structure and the halo effect

I want to introduce a simple model that relates the halo effect to a structural property of the brain. My hope is that this approach will be useful for understanding the halo effect more systematically, and will show that thinking in evolutionary terms is not always the best way to think about certain biases.

One crucial property of the brain is that it has to map an (essentially infinite) high-dimensional reality onto a finite, low-dimensional internal representation. (If you know some linear algebra, you can think of this as a projection from a high-dimensional space into a low-dimensional one.) This happens more or less automatically, through the limitations of our senses and the brain's structure as a neural network.

[Figure: diagram of a neural network (image from Wikipedia)]

An immediate consequence of this observation is that there will be many states of the world that are mapped to almost identical inner representations. In terms of computational efficiency, it makes sense to use overlapping sets of neurons with similar activation levels to represent similar concepts. (This is also a consequence of how the brain actually builds representations from sense inputs.)
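To make the projection picture concrete, here is a minimal sketch (in Python, assuming numpy and scipy; the dimensions are invented for illustration). Because the projection has a large null space, two states of the world that are far apart in reality can receive practically the same inner representation:

    import numpy as np
    from scipy.linalg import null_space

    rng = np.random.default_rng(0)
    HIGH_DIM, LOW_DIM = 1000, 10               # made-up sizes for illustration

    # A fixed random linear map stands in for the brain's compression.
    P = rng.normal(size=(LOW_DIM, HIGH_DIM))

    # null_space(P) returns an orthonormal basis of {x : P @ x = 0};
    # differences between world states along these directions are invisible.
    hidden = null_space(P)[:, 0]

    state_a = rng.normal(size=HIGH_DIM)
    state_b = state_a + 5.0 * hidden           # a very different state of the world

    print(np.linalg.norm(state_a - state_b))           # 5.0: far apart in reality
    print(np.linalg.norm(P @ state_a - P @ state_b))   # ~0: same representation

Any two states whose difference falls (mostly) into the null space are 'collisions' in this sense.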

Now compare this to the following passage from here.

The halo effect is that perceptions of all positive traits are correlated. Profiles rated higher on scales of attractiveness, are also rated higher on scales of talent, kindness, honesty, and intelligence.

This shouldn't be a surprise, since 'positive' ('feels good') seems to be one of the evolutionarily hard-wired concepts. Other concepts that we acquire during our lives and associate with positive emotions, like kindness and honesty, are mapped to 'nearby' neural structures. When one of those mental structures is activated, the 'close' ones will be activated to a certain degree as well.
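As a toy illustration of this cross-activation (a hypothetical model, assuming numpy; all numbers are invented), suppose concepts are sparse activation patterns over a shared pool of neurons, and every positive concept includes a shared block of 'feels good' units:

    import numpy as np

    NEURONS = 100
    POSITIVE_UNITS = slice(0, 20)      # the shared 'feels good' block

    def concept(rng, positive):
        v = (rng.random(NEURONS) < 0.1).astype(float)  # sparse private pattern
        if positive:
            v[POSITIVE_UNITS] = 1.0                    # plus the shared block
        return v

    rng = np.random.default_rng(1)
    attractive = concept(rng, positive=True)
    kind       = concept(rng, positive=True)
    lamp       = concept(rng, positive=False)          # a neutral concept

    def overlap(a, b):  # cosine similarity, a crude stand-in for cross-activation
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    print(overlap(attractive, kind))   # high: the shared units leak activation
    print(overlap(attractive, lamp))   # low: only accidental overlap

Activating 'attractive' therefore partially activates 'kind' through the shared units, which is exactly the correlation of positive traits that the halo effect describes.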

Since we differentiate concepts more finely as we learn about a subject, the above reasoning implies that children, and people with less education in a certain area, should be more influenced by this (generalized) halo effect in that area.

Conclusion

Since evolution can only modify the existing brain structure but cannot get away from the neural-network 'design', the halo effect is a necessary by-product of human thinking. But the degree of 'throwing things into one pot' will depend on how much we learn about those things and thereby increase the dimensionality of our representations.

My hope is that we can relieve evolution of the burden of having to explain so many things, and focus more on structural explanations, which provide a working model for possible applications and a better understanding.

 

PS: I am always grateful for feedback!

18 comments


comment by Dmytry · 2012-02-19T15:06:12.285Z

There's a certain issue.

A revision control system I use (Git) uses 160-bit SHA-1 hashes to identify much longer pieces of program code. There are very many pieces of program code that correspond to the same hash.

I don't expect it to ever encounter a collision, though. Just because something maps a large space to a smaller space doesn't mean any collisions will actually happen. Even a couple hundred bits is enough to define a space so vast that you can map anything you encounter in your life into it and never see a collision. Not that our brains necessarily work like this. But they can, in principle, avoid collisions.
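For concreteness, the back-of-the-envelope birthday-bound arithmetic behind that claim (a sketch; the object count is invented):

    n_bits = 160                         # width of Git's SHA-1 hashes
    items = 2 ** 40                      # ~10^12 hashed objects, more than any repo
    # Small-p approximation of the birthday bound: p ≈ k^2 / 2^(n+1)
    p = items ** 2 / (2 * 2 ** n_bits)
    print(p)                             # ~4e-25: a collision effectively never happens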

For the halo effect you're speaking of, it is the case that positive qualities weakly correlate: at least good looks and intelligence do, and intelligence generally correlates with niceness in so much as intelligence prevents grossly un-nice behaviour that hurts everyone, including that person. It could still usually be a fallacy, of course: some sort of signal leakage between 'good looks' and 'good something-else'.

I'd say people just tend to assume by default that there's some correlation between two things: either they assume it is positive (so pretty, must be nice), or they assume it is negative (so pretty, must be evil or spoiled or the like), and just a few people assume it is zero by default.

Replies from: army1987, Kenoubi
comment by A1987dM (army1987) · 2012-02-19T21:07:06.486Z

But in the human brain, input is noisy, so you don't want to match only identical experiences but also experiences that are close enough, whereas if you flip one bit in the input of a hash function you'll flip (on average) half the bits in the output.
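A quick way to see that avalanche property (a sketch using Python's standard hashlib):

    import hashlib

    def sha_bits(data: bytes) -> int:
        return int.from_bytes(hashlib.sha256(data).digest(), "big")

    a = b"halo effect"
    b = bytes([a[0] ^ 1]) + a[1:]        # the same input with one bit flipped

    differing = bin(sha_bits(a) ^ sha_bits(b)).count("1")
    print(differing, "of 256 output bits differ")   # typically close to 128

Nearby inputs land on wildly distant outputs, which is the opposite of the similarity-preserving mapping the brain needs.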

comment by Kenoubi · 2012-02-19T17:30:38.901Z

If the brain avoided collisions in the way you describe, it would utterly fail at its function. The brain must be able to access the information it has about similar situations to make judgments and decisions about the current one. Looking up that information must make use of some data in common between the current situation and whatever representation the brain has of other similar situations, or there would be no way to locate or identify that information.

So at the description level of "the brain is a computing device", this seems plausible, but considering what the brain actually does, I don't see how it could work. It could use a hybrid of hash functions and structural similarities at different levels, and maybe it does. But the fact that we can confuse two different people who have some attributes in common, or even whose names are similar but not the same, seems like evidence against that to me.

Replies from: Dmytry
comment by Dmytry · 2012-02-19T22:18:09.322Z

The point is not that it necessarily happens; the point is that mapping a larger space to a smaller space doesn't by itself mean there will be [unwanted] collisions. The very same software could do lower-case string matching, which 'confuses' lower and upper case, using the hashes.

As for the collision between multiple good qualities: it doesn't even happen for everyone on earth in the way outlined in the article. There definitely are people who think that, e.g., pretty people must be stupid, which is by the way more wrong than thinking pretty people are smarter (given health's effect on both intelligence and looks). It could well be that people are falling for some sort of just-world fallacy in one way or another, rather than literally mixing up good looks and intelligence.

edit: and note all the cultural priming. The heroes are smart, nice, handsome, brave, et cetera; the villains are bad on all counts. We are constantly watching biased data, and perhaps inferring some correlation from it. I do think, though, that there are people who believe good looks correlate with stupidity. That is the default assumption about women.

Replies from: saph
comment by saph · 2012-02-20T10:42:02.219Z

I think you are right that there don't have to be collisions (in practice) if the representation space is big enough and of sufficiently high dimension. On the other hand, there is a metric aspect to the way the brain maps its data which is not present in hash codes (as far as I know). This reduces the effective dimension of the brain dramatically, and I would guess that it is nowhere near the 160 bits of your hash example for properties like 'good looking', 'honest', etc. It would be an interesting research project to find out.

I think the cultural aspect you mention might play a significant role. As I wrote in another comment, my goal here was not to give a full explanation of the halo effect. But I don't think your 'beautiful women are stupid' example undermines the general idea: for those people, 'beauty' doesn't seem to be a 'positive' concept, so we wouldn't expect it to correlate with intelligence. I am not defending the halo effect anyway; I chose it as an example to highlight the main idea, and I might as well have chosen another bias.

Replies from: Dmytry
comment by Dmytry · 2012-02-21T17:28:07.033Z

Well, beauty is a positive quality for the men who believe prettier women are stupider. One needs to be careful not to start redefining positive qualities as those that correlate positively with each other.

So what would be your other example of the halo effect? The USA tends to elect taller presidents, yet I don't think many people have trouble with the concept that extreme tallness correlates negatively with health. I can't really think of many halo effects, apart from other effects: e.g., if you pick someone based on one quality, you rationalize their other qualities as good; or if you are portraying other people, you'll portray those you dislike as all-around negative and those you like as all-around positive (which will bias anyone who relies on such portrayals to infer correlations).

I think the bigger issue is how we prepare problems for effective reasoning. Every number should really be a statistical distribution over its possible values, yet that is very unwieldy to compute, so we assign a definite number or a normal distribution. That is usually harmless but can result in gross error. There's a whole spectrum of colours, but nearby colours are confused, and there's an artificial gradation of colours into bins. That kind of thing.

Replies from: saph
comment by saph · 2012-02-21T19:16:18.718Z

So what would be your other example of the halo effect?

I didn't say that I have other examples of the halo effect, but rather that I have examples of other biases which can also be explained by properties of how the brain processes sense inputs.

Replies from: Dmytry
comment by Dmytry · 2012-02-22T08:55:44.929Z

Ahh. Well, I think you can explain a great many biases by the brain simply not being all that powerful, and by how it has to rely on various simple strategies rather than direct foresight and maximization of some foreseen quantity.

You can't really expect a person who can't do the Monty Hall problem to handle probabilities properly, and that's the majority of people; and then you can't expect a person who can do the Monty Hall problem not to pick up various cognitive and behavioural habits from those who can't. Then, is the reason people can't do Monty Hall correctly some universal failure in brain organization? Well, a smart individual would figure it out, and most individuals can be taught methods for figuring it out.
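As a quick sanity check, a small simulation of the Monty Hall problem (a sketch using only Python's standard library) shows that switching wins about two thirds of the time:

    import random

    def play(switch, trials=100_000):
        wins = 0
        for _ in range(trials):
            car = random.randrange(3)      # prize location
            pick = random.randrange(3)     # contestant's first choice
            # The host opens a door that is neither the pick nor the car.
            opened = next(d for d in range(3) if d != pick and d != car)
            if switch:
                pick = next(d for d in range(3) if d != pick and d != opened)
            wins += (pick == car)
        return wins / trials

    print(play(switch=False))   # ~0.33
    print(play(switch=True))    # ~0.67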

comment by Kenoubi · 2012-02-20T16:30:22.917Z

I think that "evolved faulty thinking processes" is the wrong way to look at it, and I will argue that some biases are the consequence of structural properties of the brain, which 'cannot' be affected by evolution.

The structure can be affected by evolution; it's just too hard (it takes too many coordinated mutations) to get to a structure that actually works better. I think you recognize this with your scare quotes, but you would be better off stating it explicitly. This is the flip side of the arguments I think you're alluding to: that the faulty thinking was actually beneficial in the EEA.

There must be an evolutionary explanation for the properties of the brain, but that doesn't mean we need to actually figure out that evolutionary explanation to understand the current behavior. Just like there must be an explanation in terms of physics, but trying to analyze every particle will clearly get us nowhere.

In fact, if you can find an explanation of a phenomenon in terms of current brain structure, I think that screens off evolutionary explanations as mere history (as long as you've really verified that the structure exists and explains the phenomenon).

I do think we're getting sidetracked by your halo effect example, though -- it might be useful to give three or four examples to avoid this (although if each one has a different explanation, that might substantially increase the effort of presenting your idea).

Replies from: saph
comment by saph · 2012-02-20T18:46:55.437Z

This is the flip side of the arguments I think you're alluding to: that the faulty thinking was actually beneficial in the EEA.

Yes. Some people I know observe fact X about human behaviour and then conclude that it had to be beneficial for survival, for otherwise evolution would have eradicated X.

I do think we're getting sidetracked by your halo effect example, though -- it might be useful to give three or four examples to avoid this (although if each one has a different explanation, that might substantially increase the effort of presenting your idea).

My original plan was to give several examples of biases with different explanations, but since this is my first attempt to do something productive on LW, I decided to write a short article and get some feedback first. So, thanks for your suggestions!

comment by asr · 2012-02-19T18:27:02.642Z

I think that formulating this in terms of linear algebra is not always as illuminating as explaining it in terms of structure.

The way neural nets work, related concepts get wired together, and therefore cross-activate each other. To re-use your example, because we often activate various positive things alongside the more general notion of positiveness, you'd expect some coupling even between unrelated positive concepts.

Replies from: saph
comment by saph · 2012-02-19T20:34:08.081Z

Thanks for the feedback!

The reference to linear algebra was only meant to show that there have to be states which are mapped to similar representations, even if we don't know a priori which ones will be correlated.

But if we now look more closely at the structure of the brain as a neural network and at the learning mechanisms involved, then I think we can expect positive concepts to be correlated through cross-activation, as you explained.

The point of the article is not to come up with a perfect explanation of how the halo effect is actually caused, but to show that there doesn't have to be an evolutionary reason for it to arise, besides the 'obvious' one that pwno mentions in his comment.

Replies from: asr
comment by asr · 2012-02-19T22:02:27.697Z

Yes. I thought you were making an interesting and useful point. I was offering you an alternate formalism to explain the phenomenon, not expressing a disagreement with anything you wrote.

comment by Will_Newsome · 2012-02-19T01:19:08.954Z

This is perhaps an example of when understanding a formal cause, in this case logical truths about certain machine learning architectures, is more enlightening than understanding an efficient cause, in this case contingent facts about evolutionary dynamics. It is generally the case that formal-causal explanations are more enlightening than efficient-causal explanations, but efficient-causal explanations are generally easier to discover, which is why the sciences are so specialized for understanding efficient causes. There are sometimes trends towards a more form-oriented approach, e.g. cybernetics, complexity sciences, aspects of evo-devo, and so on, but they're always on the edge of what is possible with traditional scientific methods and thus their particular findings are unfortunately often afflicted with an aura of unrigor.

Of note is that the only difference between "causal" decision theory and "timeless" decision theory is that the latter's description emphasizes the taking-into-account of formal causes, which is only implicit in any technically-well-founded causal decision theory and is for some unfathomable reason completely ignored by academic decision theorists. (If you get down to the level of an actually formalized decision theory then you're working with Markovian causality, where as far as I can discern CDT and TDT are no different.)

Replies from: saph
comment by saph · 2012-02-19T20:38:08.493Z

I am still reading through the older posts on LW and haven't seen CDT or TDT yet (or haven't recognized them), but when I do, I will reread your comment and hopefully understand how the second part of the comment is connected to the first...

Replies from: Will_Newsome
comment by Will_Newsome · 2012-02-19T20:48:00.348Z

This is the LW decision theory portal. If you're reading through Eliezer's sequences I don't think there's much discussion about the foundations of decision theory there.

comment by pwno · 2012-02-19T00:31:36.126Z

I will argue that some biases are the consequence of structural properties of the brain, which 'cannot' be affected by evolution

The biases are indirectly affected by evolution. The brain evolved "faulty thinking" because natural constraints made accuracy expensive - especially since sometimes-accurate beliefs are often sufficient.

comment by derfner · 2012-04-03T22:23:47.913Z

I'm not a mathematician or theorist, but I couldn't figure out how else to contact you, so I hope you'll forgive my sending you a message via comment--I'm writing a nonfiction book that has a section dealing with the halo effect, and I'm interested in looking at it from an evolutionary and/or--as you suggest is more appropriate--structural standpoint. However, I'm ashamed to say that I understand very little of what you've written here--enough to find it interesting and want to know more, but not enough to know what it is that you're actually saying. I see from your profile that you live in Germany--would you mind my emailing you and asking you a few questions? If you'd be okay with that, please send me an email to joel@joelderfner.com . Thanks so much!