Why Support the Underdog?

post by Scott Alexander (Yvain) · 2009-04-05T00:01:29.756Z · LW · GW · Legacy · 102 comments

One of the strangest human biases is the almost universal tendency to support the underdog.

I say "human" because even though Americans like to identify themselves as particular friends of the underdog, you can find a little of it everywhere. Anyone who's watched anime knows the Japanese have it. Anyone who's read the Bible knows the Israelites had it (no one was rooting for Goliath!) From mythology to literature to politics to sports, it keeps coming up.

I say "universal" because it doesn't just affect silly things like sports teams. Some psychologists did a study where they showed participants two maps of Israel: one showing it as a large country surrounding the small Palestinian enclaves, and the other showing it as a tiny island in the middle of the hostile Arab world. In the "Palestinians as underdogs" condition, 55% said they supported Palestine. In the "Israelis as underdogs" condition, 75% said they supported Israel. Yes, you can change opinion thirty points by altering perceived underdog status. By comparison, my informal experiments trying to teach people relevant facts about the region's history changed opinion approximately zero percent.

(Oh, and the Israelis and Palestinians know this. That's why the propaganda handbooks they give to their respective supporters - of course they give their supporters propaganda handbooks! - specifically suggest the supporters portray their chosen cause as an underdog. It's also why every time the BBC or someone shows a clip about the region, they get complaints from people who thought it didn't make their chosen side seem weak enough!)

And there aren't many mitigating factors. Even when the underdog is obviously completely doomed, we still identify with them: witness Leonidas at Thermopylae. Even when the underdog is evil and the powerful faction is good, we can still feel a little sympathy for them; I remember talking about bin Laden with some of my friends, and admitting that although he was clearly an evil terrorist scumbag, there was still something sort of awesome about a guy who could take on the entire Western world from a cave somewhere.

I say "strangest" because I can't make heads or tails of why evolutionary psychology would allow it. Let's say Zug and Urk are battling it out for supremacy of your hunter-gatherer tribe. Urk comes to you and says "Hey, my faction is really weak. We don't have a chance against Zug, who is much stronger than us. I think we will probably be defeated and humiliated, and our property divided up among Zug's supporters."

The purely rational response seems to be "Wow, thanks for warning me, I'll go join Zug's side right now. Riches and high status as part of the winning faction, here I come!"

Now, many of us probably would join Zug's side. But introspection would tell us we were weighing a rational calculation in favor of Zug against a native, preconscious support for Urk. Why? The native preconscious part of our brain is usually the one that's really good at ending up on top in tribal power struggles. This sort of thing goes against everything it usually stands for.

I can think of a few explanations, none of them satisfying. First, it could be a mechanism to prevent any one person from getting too powerful. Problem is, this sounds kind of like group selection. Maybe the group does best if there's no one dictator, but from an individual point of view, the best thing to do in a group with a powerful dictator is get on that dictator's good side. Any single individual who initiates the strategy of supporting the underdog gets crushed by all the other people who are still on the dictator's team.

Second, it could be a mechanism to go where the rewards are highest. If a hundred people support Zug, and only ten people support Urk, then you have a chance to become one of Urk's top lieutenants, with all the high status and reproductive opportunities that implies if Urk wins. But I don't like this explanation either. When there's a big disparity in faction sizes, you have no chance of winning, and when there's a small disparity in faction sizes, you don't gain much by siding with the smaller faction. And as the size differential between groups increases, the smaller faction's chance of success should drop much more quickly than the opportunities for status with the smaller faction rise.
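To make that concrete, here's a toy calculation. Everything in it is an illustrative assumption rather than anything derived from data: suppose a faction's chance of winning scales with the square of its size, and the winners split the spoils evenly among themselves.

```python
# Toy model of the join-the-underdog tradeoff. Both functional forms
# are arbitrary illustrative assumptions: win probability scales with
# the square of faction size, and winners split the spoils evenly.

def p_win(mine, theirs):
    # Chance my faction wins, given the two faction sizes.
    return mine**2 / (mine**2 + theirs**2)

def expected_payoff(faction, rival):
    # My expected share of the spoils if I join `faction`,
    # making it one person larger.
    return p_win(faction + 1, rival) * (1.0 / (faction + 1))

tribe = 100
for small in (45, 30, 10):
    big = tribe - small
    print(f"underdog {small:2d} vs {big:2d}: "
          f"join underdog -> {expected_payoff(small, big):.4f}, "
          f"join favorite -> {expected_payoff(big, small):.4f}")
```

Under these made-up assumptions, the expected payoff of backing the underdog collapses as the disparity grows, while the payoff of backing the favorite stays roughly flat: the status premium for joining the small side never catches up with its vanishing odds.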

So I admit it. I'm stumped. What does Less Wrong think?

102 comments

Comments sorted by top scores.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-05T00:32:20.893Z · LW(p) · GW(p)

1) If Zug wins, they'll be a stronger threat to you than Urk. Hunter-gatherer tribes have a carefully maintained balance of power - chieftains are mostly an invention of agriculture.

2) "When I face an issue of great import that cleaves both constituents and colleagues, I always take the same approach. I engage in deep deliberation and quiet contemplation. I wait to the last available minute and then I always vote with the losers. Because, my friend, the winners never remember and the losers never forget." -- Sen. Everett Dirksen

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-04-05T01:24:25.162Z · LW(p) · GW(p)

Good explanations, but a couple quibbles:

1) This explanation seems to presume that the disutility of "Zug wins" is of larger magnitude than the disutility of "Allied with the losing side" proportional to the likelihood of Zug winning. This is not necessarily implausible, but is it likely to have been sufficiently common to exert selection pressure?

2) This explanation presumes that Urk retains sufficient influence after a failed bid for power that the disutility of "Urk hates your stinking guts" is larger than the disutility of "Allied with the losing side". Clearly the case in the Senate, but elsewhere?

Replies from: bogdanb
comment by bogdanb · 2009-04-05T11:03:04.214Z · LW(p) · GW(p)

The central part of Eliezer's comment, in my reading, is that for the vast majority of the time humans evolved, they lived in hunter-gatherer tribes where the group size was low (other research discussed here indicates an upper bound of around 50).

In such groups it seems plausible that status "victories" are not absolute, and the power difference between the larger and smaller sides is rarely huge. Also, the links between members of two factions are very tight: they all know each other, they're closely related biologically, and they depend on each other tightly for survival.

Some examples: It's unlikely that in a 30/20, or even 40/10 split, the losing side is massacred: it's still a large fraction of the group, and losing it completely would reduce the group's survivability. Also, its members are probably children or siblings of members of the winning side, so even if Grog supports Zug because he seems like a better hunter, Grog'll be upset if Zug kills his son Trok, who sided with Urk because he's younger.

The balance of power can slide easily, for instance if Zug gets older, or if he's injured in a hunt. (Actually, it seems common enough in all status-organized "societies", including wolves and lions, that the leader is often challenged by "underdogs", one of which will eventually become leader. Which is why challenges are rarely lethal.)

Our intuition (for judging the sides and such) is shaped in large part by current society sizes (e.g., "my vote doesn't matter"), because it's a neural process, but our instincts are probably still predominantly shaped around few-dozen-person group sizes, since they're genetically based.

EDIT: Another point: underdogs in the ancestral environment would tend to be the younger side. Which means a child or a niece or something like that. Which means that the incentive to help them is a bit stronger than just group selection.

Replies from: Grognor
comment by Grognor · 2012-02-11T17:17:37.818Z · LW(p) · GW(p)

even if Grog supports Zug because he seems like a better hunter, Grog'll be upset if Zug kills his son Trok, who sided with Urk because he's younger.

Neither Grog nor Grognor would allow his own son to die to such an undignified neophyte as Zug. Then again, who does Trok think he is, going against his father like that?

comment by SoullessAutomaton · 2009-04-05T00:15:58.921Z · LW(p) · GW(p)

It occurs to me that there may be a critical difference between voicing sympathy for a weak faction, vs. actually joining it and sharing its misfortunes.

That is to say, a near-optimal strategy in Zug vs. Urk, assuming one is currently unaffiliated and not required to join either side, is to do as much as possible to support Urk without angering Zug and incurring penalties. As a latecomer you'd get little benefit from joining Zug anyways, but in the chance of a surprise upset, when Urk comes to power you will be more likely to benefit than uninvolved parties or active Zug supporters.

Replies from: Andy_McKenzie
comment by Andy_McKenzie · 2009-04-05T15:48:46.914Z · LW(p) · GW(p)

If everybody in the tribe has this adaptation, then it will no longer be useful, because everybody will be supporting the underdog. The optimal strategy, then, is not to support the underdog per se but instead to support the cause that fewer people support, factoring in the rough probabilities that Zug and Urk each have of winning. How would this yield a systematic bias toward favoring the underdog? It would only occur if, in the modern world, we still suspect that the majority will favor the team more likely to win.

Replies from: Dojan
comment by Dojan · 2013-10-23T02:34:22.941Z · LW(p) · GW(p)

Well, this depends on what level the average player is playing at; but at every level there is going to be more noise, and thus less evolutionary pressure. My friend told me that his teacher had told his class that, in practice, most people play on the second or third levels. (I have nothing to back that up with, I know nothing about stock trading)

comment by MBlume · 2009-04-05T08:24:10.721Z · LW(p) · GW(p)

My friend Cheryl suggests a non-ev-psych response. Each of us is, in many senses, an underdog. We are out of the ancestral environment, and are part of societies that are too darn large. We feel like underdogs, and so when we see another, we perceive a similarity of circumstance which enhances our feelings of sympathy.

Replies from: orthonormal, gworley, Andy_McKenzie
comment by orthonormal · 2009-04-05T16:48:46.954Z · LW(p) · GW(p)

Children's social worlds aren't as large as adults', so one prediction this model makes is that children raised in small social worlds (homeschooling or other small communities) should have much less of an underdog bias than adults or children who interact with many strangers.

Intuitively, I'd say that's probably not the case; but it bears testing.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2009-04-06T11:59:07.049Z · LW(p) · GW(p)

Maybe, but what about when those children discover that they are outside the norm? I'd imagine they might even be more likely to favor underdogs once they realize that they share the commonality of standing against the norm in some fashion.

comment by Gordon Seidoh Worley (gworley) · 2009-04-05T15:18:41.415Z · LW(p) · GW(p)

I like this idea. When we have to stretch too far to look for an explanation of a trait based only on that trait's effect on differential reproduction, it may be because there is no such explanation. Plenty of traits are the result of side effects that did not affect reproduction, and others may be cultural.

This idea has just what we need: it fits the experience, doesn't seem to affect reproduction, and is a side effect of sexually selected traits. When you add in a cultural component that may amplify or suppress this feeling of sympathy, you have what looks like a good explanation with no "just so"s necessary.

comment by Andy_McKenzie · 2009-04-05T15:40:16.736Z · LW(p) · GW(p)

I like this idea too. One prediction from it seems to be that those who feel less like underdogs (such as a Saudi Prince) will support underdogs less. One might find those who feel less like underdogs via general socioeconomic status too, but since we have a fairly egalitarian society, high-income people might actually be more likely to have considered themselves an underdog during their formative years.

comment by Nominull · 2009-04-05T05:37:29.950Z · LW(p) · GW(p)

When you see two enemies fighting, you want them both to use up as many resources as possible. That way, the winner will be easy pickings for you. You accomplish this by supporting whoever is weaker. This is the sort of strategy that pops up in many multiplayer board games.
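Here's a minimal sketch of that logic with invented numbers, assuming a linear-attrition rule (the winner of a fight keeps strength equal to the difference in strengths); the specific values and the attrition rule are both assumptions for illustration.

```python
# Toy "weaken the winner" model using linear attrition: when two sides
# fight, the stronger side wins and keeps strength equal to the
# difference. All numbers are invented for illustration.

def remaining_after_fight(a, b):
    return abs(a - b)

me = 5          # my own strength
support = 4     # strength I can lend to one side
a, b = 10, 6    # two enemies, A stronger than B

for backed, (sa, sb) in [("A", (a + support, b)), ("B", (a, b + support))]:
    winner_strength = remaining_after_fight(sa, sb)
    outcome = "I win" if me > winner_strength else "I lose"
    print(f"back {backed}: winner emerges with {winner_strength}, {outcome}")
```

Backing the weaker side makes the two enemies grind each other down to nothing, so whoever emerges is easy pickings; backing the stronger side leaves a winner you can't beat.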

Replies from: AlanCrowe, Marshall
comment by AlanCrowe · 2009-04-05T12:46:28.937Z · LW(p) · GW(p)

At the Go club, someone asked about using red, green, and blue stones instead of using black and white. The chap who is doing a PhD in game theory said: the two weakest players will gang up on the strongest player, "just like in any truel".

I was surprised by the way he spoke immediately without being distracted from his own game. Study long enough and hard enough and it becomes automatic: gang up on the stronger.

Now humans have an intuitive grasp of social games, which raises the question: what would that algorithm feel like from the inside? Perhaps it gets expressed as sympathy for the underdog?

It might be possible to test this hypothesis. A truel is a three-player game that turns into a duel after one player has been eliminated. That is why you side with the weaker of your two opponents. The experimental psychologist setting up his experiment can manipulate the framing. If the game-theory idea is correct, sympathy for the underdog should be stronger when the framing primes the idea of a follow-on duel.

For example, if you frame America versus bin Laden as the battle of two totalising ideologies - will the world be dog-eat-dog Capitalist or beard-and-burka Islamic? - that should boost underdog-sympathy. If you frame America versus bin Laden as pure tragedy - "Americans just want to stay at home eating, bin Laden really wanted to stay in Mecca praying, how come they ended up fighting?" - that should weaken underdog sympathy.

I'm not sure how to set up such an experiment. Maybe it could be presented as research into writing dialogue for the theatre. The experimental subject is presented with scripts for plays. In one play the quarreling characters see the rivals they are discussing (e.g. Israel v Palestine, etc.) as expansionist; in another play the quarreling characters see the rivals they are discussing as fated to fight and then go home. The experimental subject is probed with various lines of dialogue for the characters, which sympathise with either the underdog or the overdog, and asked to judge which seems natural.

The hypothesis is that it is the characters who anticipate a follow-on duel whose dialogue feels natural with sympathy for the underdog.

Replies from: andrewc
comment by andrewc · 2009-04-05T23:56:18.078Z · LW(p) · GW(p)

Interesting idea: we support the underdog because if push came to shove we'd have a better chance of besting them than the top dog? There's a similar problem I remember from a kids' brainteaser book. Three hunters are fighting a duel, with rifles, to the death. Each has one bullet. The first hunter has a 100% chance of making a killing shot, the second a 50% chance, the third a 10% chance. What is the inferior hunter's best strategy?

Replies from: Larks
comment by Larks · 2009-08-16T20:07:58.406Z · LW(p) · GW(p)

The normal answer (fire away from either) only works if we assume the other hunters are vindictive, rather than rational. If we assume they behave rationally, then the third hunter should target the best.

Replies from: Broggly
comment by Broggly · 2010-11-03T14:06:45.115Z · LW(p) · GW(p)

Sure, if you're acting simultaneously. If you're taking turns and you kill the best, then the mid-strength hunter will immediately fire on you. However, if one of them shoots the other, then you'll have the first shot against the remaining one.
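A quick Monte Carlo sketch of this sequential reading. All the rules below are assumptions about the puzzle, not part of its statement: hunters fire in turn from weakest to strongest, each has a single bullet, and the two stronger hunters always target the strongest living rival, preferring one who is still armed.

```python
import random

# Sequential one-bullet truel under assumed rules: firing order is
# weakest first (C at 10%, B at 50%, A at 100%); A and B target the
# strongest living rival, preferring a rival who still has a bullet.

ACCURACY = {"A": 1.0, "B": 0.5, "C": 0.1}

def pick_target(shooter, alive, has_bullet):
    rivals = [h for h in alive if h != shooter]
    armed = [h for h in rivals if has_bullet[h]]
    pool = armed or rivals
    return max(pool, key=ACCURACY.get) if pool else None

def c_survives(c_strategy):
    alive = {"A", "B", "C"}
    has_bullet = {"A": True, "B": True, "C": True}
    for shooter in ("C", "B", "A"):  # weakest fires first
        if shooter not in alive:
            continue
        has_bullet[shooter] = False
        if shooter == "C" and c_strategy == "miss":
            continue  # C deliberately fires into the air
        target = pick_target(shooter, alive, has_bullet)
        if target is not None and random.random() < ACCURACY[shooter]:
            alive.discard(target)
    return "C" in alive

trials = 100_000
for strategy in ("shoot_best", "miss"):
    wins = sum(c_survives(strategy) for _ in range(trials))
    print(f"C survives ({strategy}): {wins / trials:.3f}")
```

With these rules, firing into the air leaves C alive every time (the stronger hunters spend their bullets on each other), while shooting at the best hunter costs C about five points of survival probability, consistent with the "normal answer" above.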

Replies from: Larks
comment by Larks · 2010-11-03T16:04:32.878Z · LW(p) · GW(p)

Yes, you're right. Larks@2009 hadn't studied any maths.

comment by Marshall · 2009-04-05T07:44:44.529Z · LW(p) · GW(p)

You are suggesting that the strategies of a table game are applicable in life.

If you could choose, would you rather play Risk or have sex with a beautiful stranger?

If sex, you risk unrequited love, venereal disease, unwanted fatherhood, 25 years of marriage, many strange and random quarrels.

If you, Nominull, choose to play Risk, I would bet money on you winning.

If you choose sex, I would bet money on you losing.

comment by abramdemski · 2009-04-05T01:25:09.648Z · LW(p) · GW(p)

The following argument comes from an intro sociology text:

If there are three people competing, all of different strengths, it is worthwhile for the two weakest people to band together to defeat the strongest person. This takes out the largest threat. (Specific game-theoretic assumptions were not stated.)

Doesn't this basically explain the phenomenon? If Zug kills Urk, I might be next! So I should band together with Urk to defeat Zug. Even if Urk doesn't reward me at all for the help, my chances against Urk are better than my chances against Zug. (Under certain assumptions.)

Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-05T02:50:37.286Z · LW(p) · GW(p)

Yes, this was my first thought too. Yvain thought of it and said

First, it could be a mechanism to prevent any one person from getting too powerful. Problem is, this sounds kind of like group selection.

It doesn't sound like group selection to me. How does it harm the group for one person to get very powerful? It is individual selection. When one man or small group dominates the tribe completely, and doesn't need your help, you don't get any of the good women.

BTW, EO Wilson has a book out supporting group selection.

"Among the Yanomamo" describes several hunter-gatherer bands. They all (my recollection) had leading men, but in the dysfunctional bands, the leading men were extremely powerful and there was no balance of power. They and their group of about a dozen supporters ruled through fear and exploited the rest of the band shamelessly. Life for those who were not in the key dozen at the top was significantly worse than life for people in the villages that had a different power structure; or at least different personalities in charge who didn't exhibit endless greed.

(A side note about group selection: This social pattern repeated itself in the groups that spawned off the original "infected" dysfunctional group; and all the other bands in the area hated and feared these dysfunctional groups. They were all aware that these particular bands were "sick" and dangerous and that it would be nice to wipe them out. Sounds like prime territory for some group selection.)

Replies from: abramdemski
comment by abramdemski · 2009-04-05T21:29:12.421Z · LW(p) · GW(p)

I agree, Yvain said it first, and it doesn't sound like group selection.

Concerning your group selection comment, that does sound plausible... but being relatively unfamiliar with tribal behavior, I would want to be sure that greedy genes were not spreading between groups before concluding that group selection could actually occur.

comment by taw · 2009-04-05T00:14:04.251Z · LW(p) · GW(p)

It's totally your second explanation. The stronger faction doesn't need you - the value of you joining them is really tiny. The weaker faction needs you a lot - if your joining significantly alters the balance of power, they will reward you significantly.

Because of these mechanics of power, both coalitions end up close to 50:50, and it's almost always in your best interest to join the slightly smaller one. For empirical evidence, look at any modern democracy, with either coalitions of parties (most of continental Europe) or of interest groups (USA). Coalitions tend to make no sense whatsoever - blacks and gays and labour and lawyers vs. born-again Christians and rich people and rural poor and racists? Does it make any sense? Not at all, but the 50:50 balance is very close.

I believe, without that much evidence (I've seen some mentioned in the context of game theory, so I guess someone has it), that this kind of almost-50:50 coalition-making is very common in tribal societies, so it might well have been common in our ancestral environment. In which case sympathy for the underdog makes sense.

Also notice that this is just one of many forces; it will be decisive only in cases where the coalitions are almost even otherwise, just as predicted. If one coalition is far bigger than the other, or you're more aligned with one than the other, sympathy for the underdog won't be strong enough to overcome those.

Replies from: loqi, Marshall
comment by loqi · 2009-04-05T00:28:00.895Z · LW(p) · GW(p)

The stronger faction doesn't need you - the value of you joining them is really tiny. The weaker faction needs you a lot.

But we're talking about a zero-sum situation. The stronger faction needs you not to join the weaker faction exactly as much as the weaker faction needs you to join.

Replies from: Zvi
comment by Zvi · 2009-04-05T12:05:07.419Z · LW(p) · GW(p)

You don't always have to join Zug or Urk. Often you can let them fight it out and remain neutral, or choose how much of your resources to commit to the fight. Urk needs everything you have, whereas Zug would be perfectly happy to see you do nothing, and in most conflicts most people stay out of it. Because of this, Zug can't afford to go around rewarding everyone for not joining Urk the same way Urk can reward you for joining him.

comment by Marshall · 2009-04-05T05:34:27.280Z · LW(p) · GW(p)

I like the thought that we wish for balance and vote accordingly.

comment by Psychohistorian · 2009-04-05T18:48:29.771Z · LW(p) · GW(p)

I'm not sure the evidence of the proposed bias supports the type of ev-psych responses being offered.

The only cases I'm aware of where underdog bias actually matters are of the Israel-Palestine type, not the Zug-Urk type. I-P poses no significant costs or benefits to the individual. Z-U poses tremendous costs or benefits to the individual. I don't imagine I-P type support decisions meaningfully affect reproductive success. Unless there's evidence that people still side with the underdog when it really costs them something, these ev-psych explanations seem to be explaining something that doesn't happen.

I would posit that it's cultural, and it's fictional availability bias. In all of our stories, the underdog is invariably the good guy. It seems very difficult to tell a story about a good giant multinational corporation beating an evil little old lady. The reverse has been quite successful. Consequently, we tend to side with the underdog because we generalize from a great deal of fictional evidence that "proves" that the underdog is the good guy. This also explains why we stick with an underdog even when he ceases to be an underdog, as this is a typical pivot point in a story.

This raises the question of why this kind of story is so successful, which I admit I don't have a great answer to.

Replies from: Jack
comment by Jack · 2009-04-06T03:42:07.384Z · LW(p) · GW(p)

It doesn't just raise the question, it begs the question.

Replies from: Psychohistorian
comment by Psychohistorian · 2009-04-06T20:59:27.344Z · LW(p) · GW(p)

Not really. 'Why does "underdog beats overdog" make a more interesting story than "overdog beats underdog"?' is a very different question from 'Why do we tend to side with the underdog when no costs are imposed on us?'

Providing an alternative mechanism but not being able to fully explain its causes is hardly begging the question.

Replies from: Jack, thomblake
comment by Jack · 2009-04-07T20:23:42.865Z · LW(p) · GW(p)

Yes, the following is true: 'Why does "underdog beats overdog" make a more interesting story than "overdog beats underdog"?' is a very different question from 'Why do we tend to side with the underdog when no costs are imposed on us?'

But you're distinguishing two questions that weren't distinguished in either your comment or the post. The post asks why we tend to support the underdog. In the initial post the "supporting" consists of verbally saying "I support x" and then, later, identifying with the underdog in a story (i.e. Leonidas and bin Laden). You come back and say, well look, maybe our selection of fiction leads us to think the underdog must be the good guy. But as I understood the initial question, part of what we were seeking to learn was why we identify with the underdogs in stories. I take identifying with a fictional character to be equivalent to "siding with them without an imposed cost".

So as I took the initial question, your explanation for some of the underdog phenomenon merely attributes the cause to other parts of the phenomenon and fails to get at the root cause. Indeed, nearly every significant pattern in human behavior will have been documented in fiction, so one could claim fictional availability bias about lots of things (mating rituals, morality, language use, etc.), but it's all chicken and egg until you explain HOW THE FICTION GOT THAT WAY.

Replies from: thomblake
comment by thomblake · 2009-04-07T20:35:34.558Z · LW(p) · GW(p)

That's not begging the question. I don't see an argument being made with the conclusion as a premise. Perhaps you could be more explicit and concise?

That "underdog beats overdog" makes an interesting story does not require that we side with the underdog. Just like "dog bites man" is less interesting than "man bites dog", regardless of who you side with.

Replies from: Jack
comment by Jack · 2009-04-08T04:07:59.606Z · LW(p) · GW(p)
1. We side with the underdog.
  1A. Polling on Israel-Palestine shows a shift in support given to the side that appears to be the underdog.
  1B. Despite being evil we sort of think bin Laden is cool for taking on the US by himself.
  1C. When we tell stories we tend to identify with and root for the underdog, i.e. Leonidas.

When we want to know why (1), I take it that any explanation that includes any of the sub-premises is question-begging.

Psychohistorian's response was that (1) is caused by the fact that in our stories the underdog is always the side we identify with and root for and this leads us to assume that the underdog is the "good side" and therefore side with the underdog. But as I took the question (1) part of what needed explaining was underdog identification in stories.

This mess about what makes an "interesting story" was added after the initial comment, and it confuses things. As I took the initial comment, the only evidence being presented was the vast collection of pro-underdog stories and the dearth of pro-overdog stories, and this was taken to be sufficient to lead us to side with the underdog. I don't think this response is especially helpful, because part of our reason for even thinking that there is an underdog bias is the fiction. Throwing in "interesting" adds another step to the argument, and this version might not be begging the question anymore (though I'm not convinced of that either).

comment by thomblake · 2009-04-07T14:05:02.278Z · LW(p) · GW(p)

He might have meant 'begs the question' in the colloquial sense, which people really should stop doing.

Replies from: Jack
comment by Jack · 2009-04-07T20:24:09.275Z · LW(p) · GW(p)

If I had meant this the comment would have made no sense.

comment by gjm · 2009-04-05T00:40:25.145Z · LW(p) · GW(p)

Here's another explanation (a bit like taw's). I don't find it terribly convincing either, but I don't see an outright refutation.

Suppose you have kin, or others whose welfare (in the relevant senses) is correlated with yours. Obviously you'll tend to help them. How much, and how urgently? Much more when they're in worse trouble. (As taw says, when they're in a strong position they don't need your help, so most likely your own more direct interests matter more to you.) So there's value in having a mechanism that makes you care more about people you'd have cared about anyway when they're underdogs.

Well, evolution tends to produce hacks layered on hacks, so maybe the mechanism we actually got was one for making you care about everyone more when they're underdogs. When they're random strangers, the effect isn't strong enough to make you do much more than think "oh, I hope they survive"; if they're actually enemies, it isn't strong enough to make you switch sides (Yvain and his friends didn't actually start sending money to al Qaeda just because there's something a bit awesome about taking on the whole of Western civilization from a cave in Afghanistan). But when it's someone whose welfare you really care about, it can make the difference between acting and not acting.

Note that it's beneficial (evolutionarily, I mean) to have such a reaction not only for close kin but whenever the underdog is closer to you than the oppressors. For instance, some random person is being attacked by wolves: your genes benefit (in competition with the wolves') if you help them survive.

comment by rwallace · 2009-04-05T13:28:33.802Z · LW(p) · GW(p)

It's worth bearing in mind how people actually behave: if Zug is so powerful and vengeful that opposing him would be flat-out suicide, people don't. They may quietly nurse grudges and wait for misfortune to befall him, but they don't take overt action. Siding with Urk is a lot more understandable once we note that people only actually do it when it is reasonably safe to do so.

comment by timtyler · 2009-04-05T09:55:00.679Z · LW(p) · GW(p)

Partly our empathy circuits. Humans like to help - and like to be seen to be helping. The underdog is the party that most obviously needs assistance.

comment by InquilineKea · 2009-04-05T01:32:41.710Z · LW(p) · GW(p)

Might I add Dunbar's number to this? Large powerful groups have a tendency to split (especially hunter-gatherer ones). And once they split, they often become each other's enemies. Oftentimes, it's better for the individual to side with the underdog when the underdog is a group that is less likely to split.

Alternatively, let's ponder this situation: you're part of a group, a single one of many possible groups. Your group has an interest in supporting the weaker groups if it wishes to survive (of course you may be okay with having your group absorbed into another group - but remember - in hunter-gatherer days, it was often difficult to be absorbed into another group with an entirely different culture from yours).

This, incidentally, reminds me a lot of the scene in Romance of the Three Kingdoms where the minor warlord Zhang Xiu was wondering whether to join Cao Cao (the underdog) or Yuan Shao. His brilliant adviser told him to join Cao Cao, who ultimately toppled Yuan Shao.

Replies from: thomblake
comment by thomblake · 2009-04-07T14:15:21.445Z · LW(p) · GW(p)

This, incidentally, reminds me a lot of the scene in Romance of the Three Kingdoms where the minor warlord Zhang Xiu was wondering whether to join Cao Cao (the underdog) or Yuan Shao. His brilliant adviser told him to join Cao Cao, who ultimately toppled Yuan Shao.

Yes, I think this is exactly the sort of truel situation that is talked about elsewhere.

comment by Andy_McKenzie · 2009-04-05T15:32:28.286Z · LW(p) · GW(p)

How about Terror Management Theory? By supporting a cause that is probably going to win anyway, we gain little. But by supporting an unlikely cause such as Leonidas at Thermopylae, there is an increased possibility that if we succeed our accomplishments will live on past us, because it is so incredible. In this way, we would become immortal. One prediction from this explanation is that the greater the disparity between the underdog and the overdog, the larger the preference for the underdog will be, which seems to be backed up empirically (see the increased preference for Slovenia vs. Sweden in the referenced study).

Replies from: Jack
comment by Jack · 2009-04-06T03:50:41.382Z · LW(p) · GW(p)

"By supporting a cause that is probably going to win anyway, we gain little. But by supporting an unlikely cause such as Leonidas at Thermopylae, there is an increased possibility that if we succeed our accomplishments will live on past us, because it is so incredible."

There are a couple problems with this. First, we might join Leonidas on those grounds but why would we root for him on those grounds? We're not going to be remembered that way. Second, if one wants to be remembered one is probably best off just being on the side of the winners. Winners write history. Finally, this could explain the motivation of the underdog but I don't think it explains the way we seem to be wired to root for the underdog (either biologically or culturally).

comment by swestrup · 2009-04-05T03:52:56.545Z · LW(p) · GW(p)

My first thought was to assume it was part of the whole alpha-male dominance thing. Any male that wants to achieve the status of alpha-male starts out in a position of being an underdog and facing an entrenched opposition with all of the advantages of resources.

But, of course, alpha-males outperform when it comes to breeding success, and so most genes are descended from males that confronted this situation, strove against "impossible" odds, and ultimately won.

Of course, if this is the explanation, then one would expect there to be a strong difference in how males and females react to the appearance of an underdog.

comment by gwern · 2009-04-05T01:00:13.495Z · LW(p) · GW(p)

The proffered explanations seem plausible. What about with ideas though? I think it's social signaling: 'Look how clever and independent and different I am, that I can adopt this minority viewpoint and justify it.'

(Kind of like Zahavi's handicap principle.)

EDIT: It appears I largely stole this variant on signaling strategy from http://www.overcomingbias.com/2008/12/showoff-bias.html . Oh well.

Replies from: Yvain, SoullessAutomaton
comment by Scott Alexander (Yvain) · 2009-04-05T01:20:46.061Z · LW(p) · GW(p)

Your mention of signaling gives me an idea.

What if the mechanism isn't designed to actually support the underdog, but to signal a tendency to support the underdog?

In a world where everyone supports the likely winner, Zug doesn't need to promise anyone anything to keep them on his side. But if one person suddenly develops a tendency to support the underdog, then Zug has to keep him loyal by promising him extra rewards.

The best possible case is one where you end up on Zug's side, but only after vacillating for so long that Zug is terrified you're going to side with Urk and promises everything in his power to win you over. And the only way to terrify Zug that way is to actually side with Urk sometimes.

Replies from: RobinHanson, SoullessAutomaton, loqi
comment by RobinHanson · 2009-04-05T03:30:13.983Z · LW(p) · GW(p)

It seems that supporting an underdog is a more impressive act - it suggests more confidence in your own abilities, and your ability to withstand retribution from the overdog. I'm not sure we do actually support the underdog more when a costly act is required, but we probably try to pretend to support the underdog when doing so is cheap, so we can look more impressive.

comment by SoullessAutomaton · 2009-04-05T01:35:35.540Z · LW(p) · GW(p)

In other words, if Zug believes you to be the kind of agent who will make the naively rational decision to side with him, he will not reward you. You then side with Zug, because it makes more sense.

However, if Zug believes you to be the kind of agent who will irrationally oppose him unless bribed, he will reward you. You then side with Zug, because it makes more sense.

This seems to be another problem of precommitment.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-05T05:54:54.358Z · LW(p) · GW(p)

While my own decision theory has no need of precommitment, it's interesting to consider that genes have no trouble with precommitments; they just make us want to do it that way. The urge to revenge, for example, can be considered as the genes making a sort of believable and true precommitment; you don't reconsider afterward, once you get the benefits, because - thanks to the genes - it's what you want. The genes don't have quite the same calculus as an inconsistent classical decision theorist who knows beforehand that they want to precommit early but will want to reconsider later.

comment by loqi · 2009-04-05T01:45:04.233Z · LW(p) · GW(p)

But Zug probably doesn't care about just one person. Doesn't the underdog bias still require a way to "get off the ground" in this scenario? Siding with Urk initially flies in the face of individual selection.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-05T05:51:56.536Z · LW(p) · GW(p)

Zug can be only slightly more powerful than Urk to start with, and then as more individuals have the adaptation, the power difference it's willing to confront will scale. I.e. this sounds like it could evolve incrementally.

Replies from: loqi
comment by loqi · 2009-04-05T06:02:16.317Z · LW(p) · GW(p)

Ah, makes sense. The modern bias seems specifically connected to major differences, but that doesn't exclude milder origins.

comment by SoullessAutomaton · 2009-04-05T01:08:31.628Z · LW(p) · GW(p)

Social signalling explains almost everything and predicts little. By the law of parsimony, supporting underdog ideas seems to me much likelier to be a special case of the general tendency Yvain is considering.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2009-04-05T04:37:09.547Z · LW(p) · GW(p)

Social signalling explains almost everything and predicts little.

In this case, the social signaling interpretation predicts a discrepancy between peoples' expressed preferences in distant situations, and peoples' felt responses in situations where they can act.

We can acquire evidence for or against the social signaling interpretation by e.g. taking an "underdog" scene, where a popular kid fights with a lone unpopular kid, and having two randomized groups of kids (both strangers to the fighters): (a) actually see the fight, as if by accident, nearby where they can in principle intercede; or (b) watch video footage of the fight, as a distant event that happened long ago and that they are being asked to comment on. Watch the Ekman expressions of the kids in each group, and see if the tendency to empathize with the underdog is stronger when signaling is the only issue (for group (b)) than when action is also a possibility (for group (a)). A single experiment of this sort wouldn't be decisive, but with enough variations it might.

Replies from: cousin_it
comment by cousin_it · 2009-04-05T12:18:32.845Z · LW(p) · GW(p)

Your experiment wouldn't convince me at all because the video vs reality distinction could confound it any number of ways. That said, I upvoted you because no one else here has even proposed a test.

comment by RobinHanson · 2009-04-05T13:13:19.353Z · LW(p) · GW(p)

There are a lot of good thoughts in these comments, but they are scattered. I can see value in someone collecting them into an organized summary of the plausible arguments on this topic.

comment by Mario · 2009-04-05T09:49:30.688Z · LW(p) · GW(p)

I don't think it is necessarily true that merely by joining the faction most likely to win you will share in the spoils of victory. Leaders distribute rewards based on seniority more than support. In a close contest, you would likely be courted heavily by both sides, providing a temporary boost in status, but that would disappear once the conflict is over. You will have not earned the trust of the winner since your allegiance was in doubt. I don't think there is much to gain by joining the larger side late; you'll be on the bottom of society once the dust settles, trusted by neither the winners nor the losers.

In cases like this, I think the operative value evolution would select for is not political success but sexual success. Being one of many followers does nothing to advertise ourselves as desirable mates. On the other hand, bravely fighting a losing battle (as long as you don't die in the process) signals both physical prowess (which you may not get in a lopsided victory) and other desirable traits, like courage. When the battle is over, one can assume that more money and women would be distributed to the new elite, but their children will be yours.

Replies from: AspiringKnitter
comment by AspiringKnitter · 2012-01-25T03:06:16.905Z · LW(p) · GW(p)

That predicts this bias should be stronger in men. After all, more partners, past a certain point, isn't really helpful to women's reproductive success, plus I'd be surprised if men sought courageous mates (if they go and get themselves killed before your baby is born...). So, is this bias stronger in men?

comment by AlexU · 2009-04-05T00:54:45.218Z · LW(p) · GW(p)

In a confrontation between two parties, it's more likely that the stronger one will pose the greater threat to you. By supporting the underdog and hoping for a fluke victory, you're increasing your own survival odds. It seems we probably evolved to seek parity -- where we then have the best chance of dominating -- instead of seeking dominant leaders and siding with them, which is a far more complex and less certain process.

Am I missing something? Also, it would be interesting to see whether females and males have the same reactions toward the overdog.

Replies from: steven0461, nescius
comment by steven0461 · 2009-04-05T00:58:28.339Z · LW(p) · GW(p)

The problem with things like "seeking parity" is that your actions play only a small part in determining the outcome of the conflict, whereas your actions play a much larger part in determining consequences to your post-conflict status.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-05T05:57:44.071Z · LW(p) · GW(p)

Not if others also side with the underdog, and punish those who side with the overdog - perhaps by viewing them as "craven" or "toadying" and treating them accordingly. People seem to have an odd respect for supervillains, but do we respect the henchmen?

comment by nescius · 2009-04-05T22:09:56.976Z · LW(p) · GW(p)

I also wonder about possible sex differences. Some information is available:

The Appeal Of The Underdog:

There was no significant effect, t(69) = 1.30, p = .19, though caution is warranted because of imbalanced samples. In fact, across all four studies reported in this article, there were no sex differences on the main dependent variables (all _p_s > .19).

comment by teageegeepea · 2009-04-05T23:05:53.594Z · LW(p) · GW(p)

This problem seems even to afflict Mencius Moldbug. His ideology of formalism seems to be based on ensuring absolute unquestionable authority in order to avoid any violence (whether used to overthrow an authority or cement the hold of an existing one). At the same time, he bases the appeal of his reactionary narrative on highlighting how reactionaries are "those who lost" (in the terms of William Appleman Williams, whom Mencius would rather not mention), while the strong horse is universalism/antinomianism.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-07T04:33:35.236Z · LW(p) · GW(p)

the strong horse is universalism/antinomianism.

What does that mean? The whole clause. And I don't understand why you equate universalism with antinomianism.

Replies from: Joe
comment by Joe · 2009-07-14T00:55:24.805Z · LW(p) · GW(p)

Perhaps you figured this out since April, but the quoted clause makes sense in the context of Mencius' particular use of the terms "universalism" (roughly: what everyone in polite society believes these days in the West) which he categorizes as "antinomian", roughly: opposed to natural law.

comment by jimmy · 2009-04-05T02:05:54.301Z · LW(p) · GW(p)

Depending on the group size, the underdog might not be the underdog anymore with your support.

If it's a small group thing (or you have significant power) it is likely that you can determine which side wins.

The underdogs may have more at stake than the winners, and would be willing to give more in return for help. If Bob steals half of Fred's bananas every day, Bob gets to be a little better fed, and Fred dies.

If you help Fred out, he owes you his life, but Bob doesn't care nearly as much if it just means he has to go back to eating only his own bananas (that or you kill him).

If you choose to help Bob, your help isn't worth anything since he had it under control anyway.

Replies from: orthonormal
comment by orthonormal · 2009-04-05T03:47:56.476Z · LW(p) · GW(p)

I think this instinct may in fact be evolutionarily optimized for conflicts between individuals; in most group conflicts in the ancestral environment, you probably already belong to one of the sides.

But yes, it does seem to generalize too readily to conflicts where you personally wouldn't sway the balance.

EDIT: How could we test any of the above theories? My theory seems to predict that describing the conflict as "one single entity versus another" (and triggering modes of thought optimized for third parties to single combat) will give a stronger underdog bias than describing a collection of entities on each side (with one collection much larger than the other).

comment by simplicio · 2013-07-12T21:20:13.270Z · LW(p) · GW(p)

Theory: supporting the underdog is a relatively costless longshot bet. Prediction: it will primarily occur in situations when opposing the overdog (verbally) can be done with impunity or secretly.

Overdog wins: no real consequences.

Underdog wins: "I supported you from the beginning! Can I be your trusted lieutenant?"

comment by Kenny · 2009-04-12T18:13:28.556Z · LW(p) · GW(p)

No one supports the underdog if they're a member, or a fan, of the overdog – only the unaffiliated are free to root for the underdog.

comment by Roko · 2009-04-05T12:36:13.690Z · LW(p) · GW(p)

"By comparison, my informal experiments trying to teach people relevant facts about the region's history changed opinion approximately zero percent."

ROFL... Maybe you're trying with people who are either too emotionally involved or not clever enough?

Replies from: JulianMorrison
comment by JulianMorrison · 2009-04-05T12:43:28.082Z · LW(p) · GW(p)

OK, what's YOUR position, and how much do you know? Then Yvain can dump historical facts on you, and we'll see how far you shift and in what direction.

Replies from: Roko
comment by Roko · 2009-04-05T14:04:21.925Z · LW(p) · GW(p)

So, my position is:

  • Israel/Palestine is a significant global risk. Their squabbling and fundamentalism could easily escalate to kill us all

  • Therefore, I am for peace in the Middle East irrespective of which faction gains most through that peace.

This is quite a utilitarian position. But that isn't much of a problem for me as my emotional involvement is pretty low. I can afford to be cool and calculating about this one. What do I know? Mostly facts gained through casual Wikipedia'ing.

  • Israel is more competent than the Arabs, again and again they have proved to be the side with the most intelligence and military effectiveness. E.g. Yom-Kippur, Osiraq, etc.

  • That does not mean that Israel are all nice guys.

  • Nor does it mean that the Arab nations are nice guys

  • For me, living in an Arab country would be hell. They disvalue freedom, equality, rational secular enlightenment values, knowledge - basically everything I stand for. I am therefore weakly incentivized to make sure that the Arabic/Islamic culture complex doesn't get too powerful.

  • Israeli secret services etc are creepy. They kidnap people. Not cool. But overall this seems to be balanced by the fact that Israel contains a lot of people I would probably like - people who share my values.

Replies from: loqi, JulianMorrison
comment by loqi · 2009-04-05T19:26:31.355Z · LW(p) · GW(p)

This is indeed a pretty utilitarian position. I think the objection you're likely to run into is that by evaluating the situation purely in terms of the present, it sweeps historic precedents under the rug.

Put another way, the "this conflict represents a risk, let's just cool it" argument can just as easily be made by any aggressor directly after initiating the conflict.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-05T19:29:14.944Z · LW(p) · GW(p)

Yup. If you don't punish aggressors and just demand "peace at any price" once the war starts, that peace sure won't last long.

Replies from: Roko
comment by Roko · 2009-04-05T20:36:17.777Z · LW(p) · GW(p)

If I expected the current geopolitical situation to continue for a long time, I would agree. But neither of us do; we both place a high probability on either FAI or uFAI within 100 years; the top priority is to just survive that long.

Also, even if you assign some probability to no singularity any time soon, the expected rewards for a situation where there is a singularity soon are higher, as you get to live for a lot longer, so you should care more about that possibility.

Replies from: JulianMorrison
comment by JulianMorrison · 2009-04-05T21:07:08.486Z · LW(p) · GW(p)

(Yesterday I heard someone who ought to know say AI at human level, and not provably friendly, in 16 years. Yes, my jaw hit the floor too.)

I hadn't thought of the "park it, we have bigger problems" or "park it, Omega will fix it" approach, but it might make sense. That raises the question, and I hope it's not treading too far into off-LW-topic: to what extent ought a reasoning person act as if they expected gradual and incremental change in the status quo, and to what extent ought their planning be dominated by the expectation of large disruptions in the near future?

Replies from: Roko, Eliezer_Yudkowsky
comment by Roko · 2009-04-05T22:16:29.530Z · LW(p) · GW(p)

Well, if you actually believe the kinds of predictions that say the singularity is coming within your lifetime, you should expect the status quo to change. If you don't, then I'd be interested to hear your argument as to why not.

Replies from: JulianMorrison
comment by JulianMorrison · 2009-04-06T00:03:07.172Z · LW(p) · GW(p)

The question I was struggling to articulate was more like: should I give credence to my own beliefs? How much? And how to deal with instinct that doesn't want to put AI and postmen in the same category of "real"?

Replies from: Roko
comment by Roko · 2009-04-06T09:18:03.941Z · LW(p) · GW(p)

If you don't give credence to them... then they're not your beliefs! If you go to transhumanist events and profess to believe that a singularity is likely in 20 years, but then feel hesitant when someone extracts concrete actions you should take in your own life that would be advantageous if and only if the singularity hypothesis is true, then you don't really believe it.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-05T21:15:57.588Z · LW(p) · GW(p)

Who on Earth do you think ought to know that?

Replies from: JulianMorrison
comment by JulianMorrison · 2009-04-05T21:17:34.886Z · LW(p) · GW(p)

Shane Legg, who was at London LW meetup.

Replies from: Roko
comment by Roko · 2009-04-05T22:13:19.775Z · LW(p) · GW(p)

Shane expressed this opinion to me too. I think that he needs to be more probabilistic with his predictions, i.e. give a probability distribution. He didn't adequately answer all of my objections about why neuro-inspired ai will arrive so soon.

Replies from: JulianMorrison, whpearson
comment by JulianMorrison · 2009-04-05T23:29:25.838Z · LW(p) · GW(p)

From what he explained, the job of reverse engineering a biological mind is looking much easier than expected - there's no need to grovel around at the level of single neurons, since the functional units are bunches of neurons, and they implement algorithms that are recognizable from conventional AI.

Replies from: Eliezer_Yudkowsky, Roko
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-06T11:31:54.852Z · LW(p) · GW(p)

This sounds like a statement made by some hopeful neuromodeler looking for funding rather than a known truth of science.

Replies from: JulianMorrison
comment by JulianMorrison · 2009-04-06T15:39:51.240Z · LW(p) · GW(p)

You want the details? Ask the pirate, not the parrot.

Rawwrk. Pieces of eight.

comment by Roko · 2009-04-06T10:23:41.297Z · LW(p) · GW(p)

Yes, but when we got into detail about how this might work and what the difficulties might be, I had some significant objections that weren't answered.

comment by whpearson · 2009-04-05T22:22:07.741Z · LW(p) · GW(p)

I think it would make an interesting group effort to try to estimate the speed of neuro research, to get some idea of how soon we can expect neuro-inspired AI.

I'm going to try to figure out the number of researchers working on figuring out the algorithms for long-term changes to neural organisation (LTP, neuroplasticity, and neurogenesis). I get the feeling it is a lot smaller than the number working on figuring out short-term functionality, but I'm not an expert and not immersed in the field.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2009-04-06T04:45:00.437Z · LW(p) · GW(p)

Please do; this sounds extremely valuable.

Replies from: Roko
comment by Roko · 2009-04-06T10:24:39.976Z · LW(p) · GW(p)

I would do this with Shane, but I think it might be off-topic at the moment.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-06T12:12:49.418Z · LW(p) · GW(p)

Ja, going off-topic.

comment by JulianMorrison · 2009-04-05T19:03:09.851Z · LW(p) · GW(p)

Are you sure you're not playing "a deeply wise person doesn't pick sides, but scolds both for fighting"?

Replies from: Roko
comment by Roko · 2009-04-05T20:37:54.007Z · LW(p) · GW(p)

Maybe. Though, I am not consciously doing this. See my above response to EY.

comment by Marshall · 2009-04-05T05:37:29.230Z · LW(p) · GW(p)

Maybe we just don't like overdogs, bullies in the schoolyard. They are randomly dangerous.

Replies from: Marshall
comment by Marshall · 2009-04-05T07:31:19.804Z · LW(p) · GW(p)

Yvain suggests a bias towards underdogs, I am suggesting a bias away from overdogs. Why am I being voted down?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-05T09:08:45.185Z · LW(p) · GW(p)

You don't understand evolutionary psychology. You also don't know how to support an argument. "Just don't like"? That is precisely that which is to be explained. Nor are they randomly dangerous.

Look, I'm sorry, but you're on the wrong blog here. Read if you like, of course, but I don't think you're ready to be commenting. This is why you are often voted down. Sorry.

Replies from: Hans, Marshall
comment by Hans · 2009-04-05T11:50:30.200Z · LW(p) · GW(p)

I read your comment and I immediately wanted to vote up Marshall's original comment. After all, he's the underdog being criticized and chased away by the founder and administrator of this blog.

In the end, I didn't, probably for equally irrational reasons.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-05T13:11:24.851Z · LW(p) · GW(p)

(Blinks.)

I have to say, that frame on the whole problem had never occurred to me. No wonder online communities have such a hard time developing membranes.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2009-04-05T19:12:30.135Z · LW(p) · GW(p)

It's worse here, because for some reason when people like Marshall claim that "rationalist" means "treats any old crap like it was a worthy contribution", people here are sufficiently wary of confirmation bias to take it more seriously than it deserves.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-05T19:27:27.578Z · LW(p) · GW(p)

Yeah, I've noticed. If I were to make a list of the top 3 rationalist errors, they'd be overconfidence, overcomplication, and underconfidence.

Either that or there's some kind of ancient echo of protecting the underdog in effort to keep the tribal power balance.

comment by Marshall · 2009-04-05T16:55:31.845Z · LW(p) · GW(p)

You are actually being a little bit of a bully yourself - Eliezer.

I would have thought that dialogues and conversations were an important part of being a rationalist. And disagreement. I would not have thought that underexplained decrees and conformity played so high a role.

But I have your word for it, that I am wrong.

So be it.

From now on I will always smile, when I hear the word rationalist.

Replies from: Psychohistorian
comment by Psychohistorian · 2009-04-05T18:18:06.451Z · LW(p) · GW(p)

He didn't give you his word that you are wrong. He stated that your claim was not rigorous and that crucial parts of it required supporting evidence that you failed to provide.

He also claimed that you do not understand evolutionary psychology. (Edit) You have provided no evidence to dispute this claim. Of course you are probably not making an evpsych argument, so this comment is probably not necessary, but if it's wrong and you are, you might consider rebutting it.

Rational responses to this would include providing evidence in support of your claim or explaining how this predisposition might form. "Maybe we just don't like overdogs" explains exactly nothing, except that we don't like overdogs, which has already been stated.

Conversations are important, but making statements with no evidentiary claim and, well, no claim that adds meaning, is not.

Replies from: nescius, Marshall
comment by nescius · 2009-04-05T23:48:24.227Z · LW(p) · GW(p)

"Maybe we just don't like overdogs" explains exactly nothing, except that we don't like overdogs...

One could interpret the phrase to suggest that focus in this forum may be being misleadingly directed toward the idea of support for underdogs rather than opposition to overdogs (Vandello's "top dogs"), to which underdog support may be secondary. The phenomena are not inversions of each other. At least, I haven't taken dislike of overdogs as being granted by the assertions of a tendency toward support of underdogs.

Perspective changes are often useful. This interpretable alternate notion may lead somewhere, while conflict resulting from an ungenerous (if accurate) understanding may not always be as fruitful as this particular incident appears to (heartwarmingly) be.

The linked paper says:

Although not directly examining underdog support, research on attitudes toward high achievers (what Feather, 1991, has labeled tall poppies) is also relevant. For instance, high achievers often elicit envy and resentment from others, particularly when the achievement is seen as undeserved ... and people often experience pleasure in seeing the mighty fall...

[edit: Note further discussion of "schadenfreude" on page 1614.]

My opinion of overdog spite, without having conducted or surveyed studies: I think it exists and has a not insubstantial effect on underdog support, but my guess is that the primary factor or factors in underdog support are not dependent on it. Thanks anyway, Marshall, for the idea, whether you intended it. I'll keep it nearby as I consider underdog support.

Replies from: Marshall
comment by Marshall · 2009-04-06T04:39:05.392Z · LW(p) · GW(p)

Yes - that was one of my points.

comment by Marshall · 2009-04-06T04:37:15.758Z · LW(p) · GW(p)

Thanks for trying to explain the rules of the game to me.

I have not at any point equated rationality with the scientific model. Scientific psychology is trivial (and 20% wrong) and inapplicable to living. Try stumbling on happiness after reading Stumbling on Happiness if you don't believe me.

I do not think the long list of just-so stories in the comments, with various tailored scripts, is evidence of anything other than following the bandwagon.

My story is taken from the schoolyard. My evidence is present to everyone, who has been to school and seen bullies at work. The evidence of your own eyes and your own experience.

But this is not your language-game. Fair enough. And stupid of me to try to extend the rules. Incommensurability is the name of that game.

As I said in a comment under "Truels": you have the option of metacommenting and being shot, or you can run away.

I regret that no one criticises Eliezer's high-handedness - that does not speak well of your community. And it puts to shame all thoughts of FRIENDLINESS under his tutelage.