Evolution, bias and global risk
post by Giles · 2011-05-23T00:32:08.087Z
Sometimes we make a decision in a way that differs from how we think we should. When this happens, we call it a bias.
When put this way, the first thing that springs to mind is that different people might disagree on whether something is actually a bias. Take the bystander effect. If you're of the opinion that other people are way less important than yourself, then the ability to calmly stand around not doing anything while someone else is in danger would be seen as a good thing. You'd instead be confused by the non-bystander effect, whereby people (when separated from the crowd) irrationally put themselves in danger in order to help complete strangers.
The second thing that springs to mind is that the bias may exist for an evolutionary reason, and not just be due to bad brain architecture. Remember that evolution doesn't always produce the behavior that makes the most intuitive sense: creatures, presumably including humans, tend to act so as to maximize their reproductive success, not in whatever way seems most sensible to us.
The statement that humans act in a fitness-maximizing way is controversial. Firstly, we are adapted to our ancestral environment, not our current one. It seems very likely that we're not well adapted to the ready availability of high-calorie food, for example. But this argument doesn't apply to everything. A lot of the biases appear to describe situations which would exist in both the ancestral and modern worlds.
A second argument is that a lot of our behavior is governed by memes these days, not genes. It's certain that the memes that survive are the ones which best reproduce themselves; it's also pretty plausible that exposure to memes can tip us from one fitness-maximizing behavioral strategy to another. But memes forcing us to adopt a highly suboptimal strategy? I'm sceptical. It seems like there would be strong selection pressure against it; to pass the memes on but not let them affect our behavior significantly. Memes existed in our ancestral environments too.
And remember that just because you're behaving in a way that maximizes your expected reproductive fitness, there's no reason to expect you to be consciously aware of this fact.
So let's pretend, for the sake of simplicity, that we're all acting to maximize our expected reproductive success (and all the things that we know lead to it, such as status and signalling and stuff). Which of the biases might be explained away?
The bystander effect
Eliezer points out:
We could be cynical and suggest that people are mostly interested in not being blamed for not helping, rather than having any positive desire to help - that they mainly wish to escape antiheroism and possible retribution.
He lists two problems with this hypothesis. Firstly, that the experimental setup appeared to present a selfish threat to the subjects. This I have no convincing answer to. Perhaps people really are just stupid when it comes to fires, not recognising the risk to themselves, or perhaps this is a gaping hole in my theory.
The other criticism is more interesting. Telling people about the bystander effect makes it less likely to happen? Well, under this hypothesis, of course it would. The key to not being blamed is to formulate a plausible explanation; the explanation "I didn't do anything because no-one else did either" suddenly sounds a lot less plausible when you know about the bystander effect. (And if you know about it, the person you're explaining it to is more likely to as well. We share memes with our friends).
The affect heuristic
This one seems quite complicated and subtle, and I think there may be more than one effect going on here. But one class of positive-affect bias can essentially be described as: phrasing an identical decision in more positive language makes people more likely to choose it. The example given is "saving 150 lives" versus "saving 98% of 150 lives". (OK, these aren't quite identical decisions, but the shift in people's preference is far larger than 2%, and it goes in the wrong direction.) Apparently putting in the figure 98% makes the option sound more positive to most people.
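Just to make the arithmetic behind that parenthesis explicit, here is a minimal sketch using the numbers from the example above (the comparison itself is my own illustration, not part of the original study):

    # "Saving 150 lives" versus "saving 98% of 150 lives".
    lives_plain = 150            # first framing
    lives_percent = 0.98 * 150   # second framing: only 147.0 lives

    # The percentage framing saves strictly fewer lives, so any extra
    # enthusiasm for it is enthusiasm pointing in the wrong direction.
    print(lives_plain, lives_percent)   # 150 147.0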
This also seems to make sense if we view it as trying to make a justifiable decision, rather than a correct one. Remember, the 150(ish) lives we're saving aren't our own; there's no selective pressure to make the correct decision, just one that won't land us in trouble.
The key here is that justifying decisions is hard, especially when we might be faced with an opponent more skilled in rhetoric than ourselves. So we are eager for additional rhetoric to be supplied which will help us justify the decision we want to make. If I had to justify saving 150 lives (at some cost), it would honestly never have occurred to me to phrase it as "98% of 150 lives". Even if it had, I'd feel like I was being sneaky and manipulative, and I might accidentally reveal that. But to have the sneaky rhetoric supplied to me by an outside authority, that makes it a lot easier.
This implies a prediction: when asked to justify their decision, people who have succumbed to positive-affect bias will repeat the positive-affect language they have been supplied with, possibly verbatim. I'm sure you've met people who quote talking points verbatim from their favorite political TV show; you might assume the TV is doing their thinking for them. I would argue instead that it's doing their justification for them.
Trolley problems
There is a class of people, whom I will call non-pushers, who:
- would flick a switch if it would cause a train to run over (and kill) one person instead of five, yet
- would not push a fat man in front of that train (killing him) if it could save the five lives
So what's going on here? Our feeling of shouldness is presumably how social pressure feels from the inside. What we consider right is (unless we've trained ourselves otherwise) likely to be what will get us into the least trouble. So why do non-pushers get into less trouble than pushers, if pushers are better at saving lives?
It seems pretty obvious to me. The pushers might be more altruistic in some vague sense, but they're not the sort of person you'd want to be around. Stand too close to them on a bridge and they might push you off. Better to steer clear. (The people who are tied to the tracks presumably prefer pushers, but they don't get any choice in the matter). This might be what we mean by near and far in this context.
Another way of putting it is that if you start valuing all lives equally, rather than putting those closest to you first, then you might start defecting in games of reciprocal altruism. Utilitarians appear cold and unfriendly because they're less worried about you and more worried about what's going on in some distant, impoverished nation. They will start to lose the reproductive benefits of reciprocal altruism and socialising.
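To put rough numbers on the "risky neighbour" intuition, here is a toy expected-value sketch. The probabilities are entirely made up for illustration; the only point is that if you are far more likely to end up as the sacrificeable bystander than as one of the five on the tracks, a pusher is the worse person to have around, even though they save more lives in total.

    # Made-up probabilities: over a lifetime around this person, the
    # chance you end up in each role of an actual trolley-style dilemma.
    p_sacrificeable_bystander = 0.010   # you're the one they could push
    p_one_of_the_five = 0.001           # you're among the people they could save

    # Chance that their disposition costs you your life.
    risk_near_pusher = p_sacrificeable_bystander   # a pusher would sacrifice you
    risk_near_non_pusher = p_one_of_the_five       # a non-pusher would let you die

    print(risk_near_pusher, risk_near_non_pusher)  # 0.01 versus 0.001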
Global risk
In Cognitive Biases Potentially Affecting Judgment of Global Risks, Eliezer lists a number of biases which could be responsible for people's underestimation of global risks. There seem to be a lot of them. But I think that from an evolutionary perspective, they can all be wrapped up into one.
Group Selection doesn't work. Evolution rewards actions which profit the individual (and its kin) relative to others. Something which benefits the entire group is nice and all that, but it'll increase the frequency of the competitors of your genes as much as it will your own.
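Here is a minimal sketch of that point, a toy replicator model of my own (not anything from Eliezer's paper): a "civic" allele pays a private fitness cost to produce a benefit shared equally by the whole group, free-riders included. Because the benefit lands on everyone, only the private cost affects relative fitness, and the civic allele declines.

    # Two alleles in one well-mixed population. The civic allele pays a
    # private cost c; the benefit b scales with how common helpers are
    # and is enjoyed by carriers and non-carriers alike.
    def next_frequency(p, b=0.5, c=0.05):
        shared_benefit = b * p
        w_civic = 1.0 + shared_benefit - c    # helpers pay the cost
        w_selfish = 1.0 + shared_benefit      # free-riders get the benefit anyway
        mean_w = p * w_civic + (1 - p) * w_selfish
        return p * w_civic / mean_w

    p = 0.5
    for _ in range(200):
        p = next_frequency(p)
    print(round(p, 4))   # far below 0.5: group-benefiting behavior loses out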
It would be all too easy to say that we cannot instinctively understand existential risk because our ancestors have, by definition, never experienced anything like it. But I think that's an over-simplification. Some of our ancestors probably did survive the collapse of societies, but they didn't do it by preventing the society from collapsing. They did it by individually surviving the collapse or by running away.
But if a brave ancestor had saved a society from collapse, wouldn't he (or to some extent, she) become an instant hero with all the reproductive advantage that affords? That would certainly be nice, but I'm not sure the evidence backs it up. Stanislav Petrov was given the cold shoulder. Leading climate scientists are given a rough time, especially when they try to see their beliefs turned into meaningful action. Even Winston Churchill became unpopular after he helped save democratic civilization.
I don't know what the evolutionary reason for hero-indifference would be, but if it's real then it pretty much puts the nail in the coffin for civilization-saving as a reproductive strategy. And that means there's no evolutionary reason to take global risks seriously, or to act on our concerns if we do.
And if we make most of our decisions on instinct - on what feels right - then that's pretty scary.
10 comments
comment by timtyler · 2011-05-23T08:41:49.265Z
A second argument is that a lot of our behavior is governed by memes these days, not genes. It's certain that the memes that survive are the ones which best reproduce themselves; it's also pretty plausible that exposure to memes can tip us from one fitness-maximizing behavioral strategy to another. But memes forcing us to adopt a highly suboptimal strategy? I'm sceptical.
There are memes to die for - as Dennett loves to point out in his many lectures on the topic.
However, memes are more likely to sterilise than kill. Memes are widely blamed for the demographic transition. They are why there are so few kids in Japan.
It seems like there would be strong selection pressure against it; to pass the memes on but not let them affect our behavior significantly. Memes existed in our ancestral environments too.
Sure, but the memes coevolve with the genes - and they evolve faster. Today they are much more dense and numerous and potent than they were for our ancestors. We do have an evolved memetic immune system - but it is having a hard time keeping up. DNA-evolution can't just magic defenses against computer games and interactive pornography into existence.
The memetic model is of apes with infected brains - infections that dramatically alter their behaviour. Memes are what make us different from the most primitive cave-men.
comment by Giles · 2011-05-24T02:57:49.252Z
There's strong selection pressure on us to develop immunity to harmful memes. There's no selection pressure on the meme to be harmful to us. So we win, don't we?
(It's different with parasites; there's no selection pressure for them to harm us as such, but they are competing with us for resources, which will ultimately harm us. The only resource that memes require is talking time).
Of course memes can flip us from one adaptive (or nearly adaptive) behavior to another. I think it's pretty clear they do at least that much. So I'm left trying to explain why committing suicide and having fewer children might be adaptive behavior.
Suicide may be regarded as failed parasuicide. Self-harm and attempted suicide are seen as a cry for help, but what if the person is in a society where help actually gets provided? They may end up more likely to reproduce. (But it's a very costly signal, so the person would have to be pretty desperate.)
The demographic transition is easier to explain. Societies that are peaceful, highly wealthy and with high population density are not stable; they are prone to collapse due to invasion or using up a resource. And if you're faced with possible societal collapse, you're probably better off producing fewer children and investing more in each one.
comment by timtyler · 2011-05-24T06:22:03.069Z
There's strong selection pressure on us to develop immunity to harmful memes. There's no selection pressure on the meme to be harmful to us. So we win, don't we?
(It's different with parasites; there's no selection pressure for them to harm us as such, but they are competing with us for resources, which will ultimately harm us. The only resource that memes require is talking time).
Memes require resources too. Harmful memes are just like other pathogens - so they are selected for increased virulence, increased ability to compromise host immunity, increased ability to divert host resources into the production and distribution of memes - and so on.
So I'm left trying to explain why committing suicide and having fewer children might be adaptive behavior.
Not a very promising approach. Suicide is very unlikely to be adaptive - and it is pretty well established that the demographic transition is also maladaptive:
The continuing decline of fertility to below replacement levels in many parts of Europe (both richer and poorer parts) is unlikely ever to find an adaptive explanation.
- Boyd and Richerson 2005, p.173.
comment by Giles · 2011-05-24T20:21:21.058Z
I'm updating: I concede I was likely talking nonsense regarding suicide memes. Memes are sort of like viruses in that they really want to spread themselves but don't necessarily require that much in the way of resources from the host. Yet deadly viruses exist.
So I think I'd expect deadly memes to spread in "outbreaks" and "epidemics", like viruses do, but not to hang around a population for generations gradually sapping everybody's reproductive ability.
I'll try to dig up Boyd and Richerson. Do you know if they address my particular hypothesis (the collapse-precursor one)? I couldn't see any mention of it with a quick googling - are such hypotheses so easy to generate that there are dozens of them out there and people only address the leading ones?
comment by timtyler · 2011-05-24T22:19:23.555Z
So I think I'd expect deadly memes to spread in "outbreaks" and "epidemics", like viruses do, but not to hang around a population for generations gradually sapping everybody's reproductive ability.
The memes that gradually sap the reproductive ability of many are not "deadly". They are more like cold viruses, and persistent viral infections.
Boyd and Richerson don't look at your collapse hypothesis. They argue that the number of kids is so small in many cases that it can't possibly be adaptive.
comment by DanielLC · 2011-05-23T22:43:16.352Z
I don't get your response to the trolley problem. Someone who flips the switch now has five people in his debt rather than one, and while they could kill you, they're five times more likely to save your life.
You talked about closeness, but I've never heard a version of the trolley problem where you explicitly knew the guy you sacrifice.
comment by Giles · 2011-05-24T02:35:58.057Z
I'm not sure of the actual game theory of the trolley scenario - I think I was more assuming that people would follow a "help near people, don't help far people" heuristic.
"Far" people - people who are physically distant or not part of your community - have a strong incentive to run away if they owe you a big debt, rather than paying the debt off. You're more likely to see "near" people again in the future, and both be forced to resume the iterated cooperate/punish game.
But as you point out, there needs to be more to "nearness" than this, as the problem doesn't require the fat man to be someone you know. I think it comes down to the role of third-parties, who presumably play some game-theoretic role in regulating these cooperate/punish games.
What do you think a passer by would do if he saw you throw someone off a bridge? How sympathetic do you think he would be to your story of the train and the people tied to the tracks?
comment by DanielLC · 2011-05-24T04:07:55.377Z
Why wouldn't he be sympathetic? Someone like me is five times as likely to save his life as to end it.