Supporting the underdog is explained by Hanson’s Near/Far distinction

post by Roko · 2009-04-05T20:22:02.593Z · LW · GW · Legacy · 27 comments

Yvain can’t make head or tail of the apparently near-universal human tendency to root for the underdog. [Read Yvain’s post before going any further.]

He uses the following plausible-sounding story from a small hunter-gatherer tribe in our environment of evolutionary adaptedness (EEA) to illustrate why support for the underdog seems to be an antiprediction of the standard theory of human evolutionary psychology:

Suppose Zug and Urk are battling it out for supremacy in the tribe. Urk comes up to you and says “my faction are hopelessly outnumbered and will probably be killed, and our property divided up amongst Zug’s supporters.” Those cave-men with genes that made them support the underdog would join Urk’s faction and be wiped out. Their genes would not make it very far in evolution’s ruthless race, unless we can think of some even stronger effect that might compensate for this.

Yvain cites an experiment where people supported either Israel or Palestine depending on who they saw as the underdog. This seems to contradict the claim that the human mind is well adapted to its EEA.

A lot of people tried to use the “truel” situation as an explanation: in a game of three players, it is rational for the weaker two to team up against the stronger one. But the choice of which faction to join is not a truel between three approximately equal players: as an individual you will have almost no impact upon which faction wins, and if you join the winning side you won’t necessarily be next on the menu: you will have about as much chance as anyone else in Zug’s faction of doing well if there is another mini-war. People who proffered this explanation are guilty of not being more surprised by fiction than reality. To start with, if this theory were correct, we would expect to see soldiers defecting away from the winning side in the closing stages of a war... which, to my knowledge, is the opposite of what happens. 

SoulessAutomaton comes closest to the truth when he makes the following statement:

there may be a critical difference between voicing sympathy for the losing faction and actually joining it and sharing its misfortune.

Yes! Draw Distinctions!

I thought about Yvain’s puzzle before reading the comments, and decided that Robin’s Near/Far distinction is the answer.

All of these bring each other more to mind: here, now, me, us; trend-deviating likely real local events; concrete, context-dependent, unstructured, detailed, goal-irrelevant incidental features; feasible safe acts; secondary local concerns; socially close folks with unstable traits. 

Conversely, all these bring each other more to mind: there, then, them; trend-following unlikely hypothetical global events; abstract, schematic, context-freer, core, coarse, goal-related features; desirable risk-taking acts, central global symbolic concerns, confident predictions, polarized evaluations, socially distant people with stable traits. 

When you put people in a social-science experiment room and tell them, in the abstract, about the Israel/Palestine conflict, they are in “far” mode. This situation is totally unlike having to choose which side to join in an actual fight – where your brain goes into “near” mode, and you quickly (I predict) join the likely victors. This explains the apparent contradiction between the Israel experiment and the situation in a real fight between Zug’s faction and Urk’s faction.

In a situation where there is an extremely unbalanced conflict that you are “distant” from, there are various reasons I can think of for supporting the underdog, but the common theme is that when the mind is in “far” mode, its primary purpose is to signal how nice it is, rather than to actually acquire resources. Why do we want to signal to others that we are nice people? We do this because they are more likely to cooperate with us and trust us! If evolution built a cave-man who went around telling other cave-men what a selfish bastard he was... well, that cave-man wouldn't last long.

When people support, for example, Palestine, they don't say "I support Palestine because it is the underdog", they say "I support Palestine because they are the party with the ethical high ground, they are in the right, Israel is in the wrong". In doing so, they have signalled that they support people for ethical reasons rather than self-interested reasons. Someone who is guided by ethical principles rather than self-interest makes a better ally. Conversely, someone who supports the stronger side signals that they are more self-interested and less concerned with ethical considerations. Admittedly, this is a signal that you can fake to some extent: there is probably a tradeoff between the probability that the winning side will punish you and the value that supporting someone for ethical reasons carries. When the conflict is very close, the probability of you becoming involved makes the signal too expensive. When the conflict is far, the signal is almost (but not quite) free.

You also put yourself in a better bargaining position for when you meet the victorious side: you can complain that they don't really deserve all their conquest-acquired wealth because they stole it anyway. In a world where people genuinely think that they are nicer than they really are (which is, by the way, the world of humans), being able to frame someone as being the "bad guy" puts you in a position of strength when negotiating. They might make concessions to preserve their self-image. In a world where you can't lie perfectly, preserving your self-image as a nice person or a nice tribe is worth making some concessions for.

All that remains to explain is what situation in our evolutionary past corresponds to hearing about a faraway conflict (like Israel/Palestine for Westerners who don’t live there and have no direct stake in it). This I am not sure about: perhaps it would be like hearing of a distant battle between two tribes? Or a conflict between two factions of your tribe, which occurs in such a way that you cannot take sides?

My explanation makes the prediction that if you performed a social-science experiment where people felt sufficiently close to the conflict to be personally involved, they would support the likely winner. This might involve making people very frightened and thus not pass ethics committee approval, though.

The only first-hand experience I have with “near” tribal conflicts is from school: whenever some poor underdog was being bullied, I felt compelled to join in with the bullying, in exactly the same “automatic” way that I feel compelled to support the underdog in far situations. I just couldn’t help myself.

Hat-tip to Yvain for admitting he couldn’t explain this. The path to knowledge is paved with grudging admissions of your ignorance. 

 

27 comments

comment by RobinHanson · 2009-04-06T00:05:38.513Z · LW(p) · GW(p)

This makes sense; I thought I was saying the same thing when I said:

I'm not sure we do actually support the underdog more when a costly act is required, but we probably try to pretend to support the underdog when doing so is cheap, so we can look more impressive.

Replies from: Roko
comment by Roko · 2009-04-06T13:00:43.000Z · LW(p) · GW(p)

Darn... just when I thought I had outsmarted the greatest minds on LessWrong...

I didn't see that. But yes, this summarizes my argument.

comment by smoofra · 2009-04-05T22:44:26.950Z · LW(p) · GW(p)

All that remains to explain is what situation in our evolutionary past corresponds to hearing about a faraway conflict

No! You are assuming that underdogism is something evolution specifically optimized for. It could very well be a useless side effect of other unrelated optimizations.

Replies from: Roko
comment by Roko · 2009-04-05T23:03:35.026Z · LW(p) · GW(p)

Yes, it could be. But "random side effect of unspecified other optimisations" is a completely general explanation for any trait of any organism, so as a hypothesis it is totally uninformative. We should actively look for more informative hypotheses, keeping this one as a last resort or "default" to return to if nothing more informative fits the evidence. I'll vote your comment up if you can think of something more specific.

It should be taken as read that when I propose an explanation for something, I implicitly admit that I could be wrong.

Replies from: AnnaSalamon, AnnaSalamon, AndySimpson
comment by AnnaSalamon · 2009-04-06T00:54:26.039Z · LW(p) · GW(p)

Yes, it could be. But "random side effect of unspecified other optimizations" is a completely general explanation for any trait of any organism, so as a hypothesis it is totally uninformative.

The obvious "random side effect" theory, for underdog-empathy, is that we tend to empathize with agents who feel the way underdogs feel (e.g., in pain, afraid, pulling together their pluck to succeed) regardless of whether the agent who feels that way is an underdog, or feels those emotions for some other reason.

To test this theory: do we in fact feel similarly about people struggling to scale a difficult mountain, or to gather food for the winter in the face of difficulty and starvation risk? Or does our empathy with underdogs (in group conflict situations specifically; not in struggles against non-agent difficulties like mountains) bring out a response that would not be predicted from just the agent's fear/pain/pluck? Also, can our responses to more complex fear/pain/pluck situations (such as the person struggling to avoid starvation) be explained from simpler reactions to the individual components (e.g., the tendency to flinch and grab your thumb when you see someone hit his thumb with a hammer)?

Replies from: Coathangrrr
comment by Coathangrrr · 2009-04-07T00:35:39.497Z · LW(p) · GW(p)

I think this is the least wrong post here. If we assume that our pattern recognition ability, which has obvious evolutionary advantages, is the source of empathy (which makes sense to me in terms of individual selection), then we can see that looking at a far situation will trigger certain pattern recognitions, specifically drawing on our past experience. Based on my past experience, more gain comes from being the underdog and winning than from being the assumed winner and winning. Because I can see that, I will emotionally identify with the underdog more often, because the outcome will be greater for individuals in that group, and people tend to identify with individuals in far populations rather than with groups. I'd add that personally being a part of the underdog group and winning would have much more of an impact than being a part of the assumed winning side and winning, much like a gambler remembers the wins more than the losses, and thus I would be pulling for the underdogs.

This can explain why my reasonableness will lead me to support the overdog in close situations. If there is a split in my group and I have to choose which side I'm on, pattern recognition helps me realize that I am more likely to come out ahead if I ally with the overdog (uberhund?). Thus, in such a situation I would be more likely to support the uberhund than the underdog, because it directly affects my situation.

comment by AnnaSalamon · 2009-04-06T01:38:50.222Z · LW(p) · GW(p)

We should actively look for more informative hypotheses, keeping this one as a last resort or "default" to return to if nothing more informative fits the evidence.

Roko’s heuristic (quoted above) isn’t terrible, but as LW-ers equipped with Bayes’ theorem, we can do better. Instead of betting on whichever explanation predicts the observations in most detail (the “most informative hypothesis”), we can bet on whichever explanation has the best combination of predictive power and prior probability.

P(hypothesis | data) = P(hypothesis) * P(data | hypothesis) / P(data).

Let’s say we’re trying to decide between theory A -- “we like underdogs because underdog-liking was specifically useful to our ancestors, for such-and-such a specific reason” -- and theory B -- “underdog-liking is an accidental side-effect of other adaptations”. Roko correctly points out that P(data | hypothesis A) is larger than P(data | hypothesis B). That’s what it means to say hypothesis A is “more informative” or has “more predictive power”. (Well, that, and the fact that hypothesis A might also fit some larger set of future data that we might collect in future experiments.) But it is also true that P(hypothesis B) is much larger than P(hypothesis A). And if our goal is to estimate whether smoofra’s general hypothesis B or Roko’s specific hypothesis A is more likely to be true, we need to focus on the product.
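
As a toy numerical sketch of that product (a minimal Python illustration; the function name and every number below are invented for the example, not estimates of the real probabilities):

```python
# Toy illustration only: the numbers are made up for the sake of the example
# and are not estimates of the real probabilities of either hypothesis.

def posterior_odds(prior_a, likelihood_a, prior_b, likelihood_b):
    """Posterior odds P(A | data) : P(B | data).

    The shared P(data) denominator cancels, so the posterior odds equal the
    prior odds multiplied by the likelihood ratio.
    """
    return (prior_a * likelihood_a) / (prior_b * likelihood_b)

# Hypothesis A: a specific underdog-liking adaptation. Low prior, but it
# predicts the observed underdog bias sharply.
# Hypothesis B: a side-effect of other adaptations. High prior, but it spreads
# its probability over many possible outcomes.
odds = posterior_odds(prior_a=0.05, likelihood_a=0.9,
                      prior_b=0.95, likelihood_b=0.1)
print(round(odds, 2))  # ~0.47: with these made-up numbers, B is still favored
```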

We can estimate the relative prior probabilities of hypotheses A and B partly by thinking about how much more general B is than A (general hypotheses have higher priors) and partly by gathering data on how good an optimizer evolution is, or how often evolution generates specific adaptations vs. general side-effects. Regarding how good an optimizer evolution is, Michael Vassar likes to note that adult baboons and human toddlers have to learn how to hide; hiding is useful in a variety of situations, but its usefulness was not sufficient to cause specific “how to hide” adaptations to evolve. If similar examples of missing adaptations are common, this would increase the prior weight against hypotheses such as Roko’s near/far account of underdog-empathy. If there are plenty of clear examples of specific adaptations, that would increase weight toward Roko’s near/far underdog theory.

Evolutionary psychology is important enough that figuring out what priors to put on specific adaptations vs. side-effects would be darn useful. Anyone have data? Better yet, anyone with data willing to write us a post, here?

Replies from: RobinHanson
comment by RobinHanson · 2009-04-06T12:14:10.582Z · LW(p) · GW(p)

Even if the functional hypothesis is less likely than the random hypothesis, there is further we can take it if we explore it. Finding structure can lead to finding more structure, letting us climb further steps up the ladder of knowledge.

Replies from: Roko
comment by Roko · 2009-04-06T13:07:09.358Z · LW(p) · GW(p)

Exactly. There is a distinction between the hypothesis that you think is most likely and the hypothesis that you think is most worth pursuing.

comment by AndySimpson · 2009-04-06T03:17:07.423Z · LW(p) · GW(p)

Why not go with the explanation that doesn't multiply entities beyond necessity? Why should we assume that there was a specific strategic circumstance in our evolutionary past that caused us to make the near-far distinction when it could very easily -- perhaps more easily -- be the side-effect of higher reasoning, a basic disposition towards kindness, or cultural evolution? Isn't it best practice to assume the null hypothesis until there's compelling evidence of something else?

Replies from: Roko
comment by Roko · 2009-04-06T09:10:33.799Z · LW(p) · GW(p)

There's a distinction between what I believe is more likely to be true, and what I wish were true instead. The null hypothesis is always more likely to be correct than any specific hypothesis. If I have to stick with a very unpredictive hypothesis, I have a decreased ability to predict the world, and will therefore do worse.

In this case, I am fairly sure that the near/far distinction gives good reason to believe that the Israel experiment doesn't contradict the cave man fight: i.e. what people do in far situations can be the opposite of what they do in near situations.

But as to why people root for the underdog, rather than just choosing at random... I am less sure.

The empathy argument has been made independently a few times, and I am starting to see its merit. But empathy and signalling aren't mutually exclusive. We could be seeing an example of exaptation here - the empathy response tended to make people sympathize with the underdog, and this effect was reinforced because it was actually advantageous - as a signal of virtue and power.

Replies from: Psychohistorian, Coathangrrr
comment by Psychohistorian · 2009-04-06T21:20:56.372Z · LW(p) · GW(p)

The original poster here seemed to basically be saying "This is a minor effect of such complexity that it could be entirely the result of selective pressures on other parts of human psychology, which give us this predisposition." This seems highly plausible, given that I don't think anyone has come up with a story of how decisions in this circumstance influence differential selective pressure. It seems that if you can't find a reasonably clear mechanism for differential reproductive success, you should not bend over backwards to find one (that is, if it's that hard to find one, maybe it's because it isn't there).

My personal theory is that it stems from story telling and thus availability bias. Almost no story has the overdog as a good guy. This is probably the result of a story requiring conflict to occur in a non-predictable manner. "Big good guy crushes bad little guy with little resistance" is too foregone a conclusion. Thus, every story we hear, we like the good guy. When we hear a story about Israel-Palestine (that happens to represent reality, roughly), we side with the little guy because, based on a massive compilation of (fictional) "evidence," the little guy is always right.

Of course, explaining the psychology of good stories is rather difficult; still, "side effect of other aspects of human psychology" seems more accurate than "result of differential reproduction" for something this specific, abstract, and practically useless. Though, of course, if someone comes up with a convincing mechanism for differential reproductive success, that would probably change my mind.

Replies from: Roko
comment by Roko · 2009-04-07T12:45:29.063Z · LW(p) · GW(p)

It seems that if you can't find a reasonably clear mechanism for differential reproductive success, you should not bend over backwards to find one (that is, if it's that hard to find one, maybe it's because it isn't there).

You should bend over backwards until you find one or two "differential fitness" explanations, then you should go test them!

EDIT: And, of course, you should also look for hypotheses not based upon differential reproduction. And test those too!

I think that "signalling your virtue and power" isn't a crazily complex explanation. We are in need of evidence methinks.

comment by Coathangrrr · 2009-04-07T00:38:15.306Z · LW(p) · GW(p)

If I have to stick with a very unpredictive hypothesis, I have a decreased ability to predict the world, and will therefore do worse.

Not true. If you have a prediction model that is non-random and wrong, you will get better results from simple random predictions.

Replies from: MBlume, Roko
comment by MBlume · 2009-04-07T00:41:19.500Z · LW(p) · GW(p)

Just so you know, you can use a greater than sign to quote text, which will look like this

quoted text

If you actually want to italicize text, you can use stars, which will look like this.

HTML will not avail you.

For more, check the help box -- whenever you're in the middle of writing a comment, it's below and to the right of the editing window.
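
A minimal illustration of the syntax described above, as it would be typed into the comment box:

```
> this line will be rendered as quoted text

*this text will be rendered in italics*
```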

comment by Roko · 2009-04-07T12:42:59.843Z · LW(p) · GW(p)

Yes, this is true. I didn't express that very well.

What I meant was that a more specific correct hypothesis is much more useful to me than random predictions.

comment by Alan · 2009-04-06T02:12:54.789Z · LW(p) · GW(p)

Pardon the reference to Shakespeare, but I was trying to come up with a non-contemporary, non-hypothetical, well-known example of the adaptiveness of being an underdog. In Henry V you have a narrative where the king uses consciousness of the numerical inferiority of his troops to assert, i.e., signal, superior valor. In the play, one of his officers surmises that their army is outnumbered by the French by five to one; another wishes out loud that another 10,000 could be added to their number. Henry dismisses this talk, declaring: "The fewer men, the greater share of honour. God's will! I pray thee, wish not one man more."

Unlike the cave-men who joined Urk's faction, many of those who didn't shrink from battle in Henry's army would NOT have been wiped out. To the contrary, they would have succeeded disproportionately well in evolutionary terms. Might their tribal social emotions go some distance in suggesting why? Stripped of context, odds are merely quantitative, and necessarily leave out important information on qualitative dimensions, such as social emotions. Assume a supporter of Urk gets excited enough over the prospect of sharing the spoils of Zug and the reputational rewards that would accrue to him. We might admit that it is not foreordained that Urk would lose the contest. Not only are supporters of the underdog not necessarily wiped out; when they win, as Shakespeare's Henry intuited, they stand to gain a greater share of the glory, and hence evolutionary fitness.

comment by Gordon Seidoh Worley (gworley) · 2009-04-06T19:28:14.942Z · LW(p) · GW(p)

This is a nice try, but it still lacks what is most important in any evolutionary explanation: the source of differential reproduction. Looking at the comments on both posts I still don't see any reason to believe that the underdog bias, which I do think exists, resulted in differential reproduction. You push this off on the near-far distinction, so the question remains: why should the near/far distinction have evolved?

Replies from: Roko
comment by Roko · 2009-04-07T23:08:35.245Z · LW(p) · GW(p)

the source of differential reproduction.

Ok, so let me state my argument maximally clearly. People who can convince others that they make good, trustworthy allies will find it easier to make alliances. This is beyond reasonable doubt - it is why we are so concerned with attributing motives to people, with analyzing people's characters, with sorting people into ethical categories (she is a liar, he is brave, etc.).

If there is a cheap but not completely free way to signal that you have the disposition of a moral, trustworthy person, for example by rooting for the underdog in faraway conflicts, then we should expect people to have the trait of displaying that signal.

All that remains is to conclude that rooting for the underdog rather than the overdog really does signal to others that you are a good person, who is more likely than average to side with whoever has the moral high ground rather than whoever has the most power. If the human brain could lie perfectly, rooting for the underdog in a faraway conflict would carry no information about what you will do in a near conflict. But the human brain somehow didn't manage to be a maximally efficient lying Machiavelli, so he who displays moral opinions about faraway conflicts presumably behaves at least a little more ethically in near conflicts.

The mechanism here is that there is a weak connection between what we say about Israel/Palestine [the faraway conflict], and how we behave in our personal lives [the nearby conflict]. My experience with people who are, e.g., pro-Palestine bears this out - they tend to be that Guardian-reading, almost-hippie type, who might be a little fuzzy-headed but are probably more likely to help a stranger. This weak connection means that you can do inference about someone's behavior in a near situation from what they say about a far situation. The survival advantage of claiming to support the underdog follows.

Another possible mechanism is that by supporting the underdog, you put yourself slightly at risk [the overdog won't like you] so this is a costly signal of strength. This I find a little less convincing, but still worth considering.

Replies from: Chesterton, gworley
comment by Chesterton · 2009-04-11T16:32:32.842Z · LW(p) · GW(p)

"If there is a cheap but not completely free way to signal that you have the disposition of a moral, trustworthy person, for example by rooting for the underdog in faraway conflicts, then we should expect people to have the trait of displaying that signal."

During the time span when the underdog tendency was presumably evolving, I doubt that there was any awareness of far-away conflicts that didn't touch the observer. Awareness of geographically distant conflicts is a relatively modern phenomenon.

Here is an alternative explanation. The inclination to protect the weak from harm provides reproductive advantage--parents protect their young who go on to reproduce. This tendency is thoroughly bound up with empathic responses to distress signals from the weak and defenseless. It's the default position.

This strategy works up to the point when the aggressor poses an overwhelming threat. Challenging that big silverback when he's killing someone else's young could buy you a heap of trouble--better to form an alliance for one's own safety and the safety of one's young (who can go on to reproduce if they survive). So, when the cost is low we're inclined to feel empathy for the weak--it's the default position. But when the threat is more immediate and overwhelming, we identify with and seek alliances with the aggressor. Nothing about signaling our moral standing to others is necessary in this formulation.

comment by Gordon Seidoh Worley (gworley) · 2009-04-08T21:05:50.531Z · LW(p) · GW(p)

Clearer, but I remain unconvinced. The value of this signal, as you present it, seems to provide so little real benefit that I can't distinguish it from the noise of random mutation.

comment by LeighCaldwell · 2009-04-06T15:41:35.014Z · LW(p) · GW(p)

To me, this is explained by the idea that we will be competing, not cooperating, with the ultimate winner.

If I am observing the contest and not participating, I would want the weaker party to win so that the remaining population dominates me less. If David gets lucky and beats Goliath, I only have to compete with David in future contests - if Goliath wins, I may have to go up against him next time.

Intuitively, this seems to explain the tendency quite well - a political victory by, let's say, Canada over the US, feels like it would "take them down a peg" and reduce the power imbalance between the UK (where I am) and the US. Equally a defeat of Roger Federer by a low-ranked player reduces my feelings of inferiority compared to the "superhuman" Federer.

Of course this would be quite different if I were entering a doubles contest - I'd much rather be Federer's partner - or choosing sides in the US-Canadian war. I don't think the underdog effect survives if I'm actually involved in the fight.

comment by John_Maxwell (John_Maxwell_IV) · 2012-05-12T05:04:32.446Z · LW(p) · GW(p)

My explanation makes the prediction that if you performed a social-science experiment where people felt sufficiently close to the conflict to be personally involved, they would support the likely winner. This might involve making people very frightened and thus not pass ethics committee approval, though.

When I was in elementary school playing pickup basketball games, it seemed to me that people preferred to play on whichever team was stronger, which often resulted in wildly unbalanced teams. Perhaps this tendency could be studied?

comment by conchis · 2009-04-05T20:47:29.164Z · LW(p) · GW(p)

I think there's something here, but I still don't find the link from the far view to underdog support entirely convincing.

Why would we want to signal niceness in the far view? And why should signaling "niceness" involve supporting the underdog? (I mean this question in the evolutionary sense: why would our intuitions of niceness correspond to supporting underdogs, and why would we value that signal?) And why would it allow us to portray ourselves as steadfast allies, if we don't actually have any connection to either of the parties involved?

Two alternative (not necessarily much more convincing) accounts, both based on the idea that in the far view, we don't need to think about the costs of our opinions so much, but which are otherwise somewhat incompatible with each other:

(1) If we know we won't actually have to do anything, supporting the weaker party is a cheap signal of strength. Only the strong would back a loser. (Maybe this supports your account, by giving a reason to signal "niceness"?)

(2) If we know we don't actually have to get involved, we just go with whoever we actually have the most sympathy for. (This is like your "niceness" explanation, but sans signaling.) Two versions:

(a) Most people just have more sympathy for weaker parties. Why? I don't know. (Not much of an explanation then is it?)

(b) People sympathize with the side most similar to them. (This would predict that the strong will generally have less of an underdog bias, and that if the underdog is sufficiently weak, then average types may switch to backing the favorite. This doesn't seem entirely implausible, but I have no idea whether it's contradicted by any evidence out there already.)

Replies from: jimrandomh, Roko
comment by jimrandomh · 2009-04-05T21:42:18.454Z · LW(p) · GW(p)

Why would we want to signal niceness in the far view? And why should signaling "niceness" involve supporting the underdog?

If you think that the stronger side is morally superior, your decision is trivial, and doesn't signal anything. If one side is stronger while the other is morally superior, then supporting the strong side would be choosing individual best interest over communal best interest, which is like defecting in the prisoner's dilemma.

comment by Roko · 2009-04-05T21:06:57.529Z · LW(p) · GW(p)

Thanks, I've edited the post to take account of some of these criticisms.

(1) If we know we won't actually have to do anything, supporting the weaker party is a cheap signal of strength. Only the strong would back a loser. (Maybe this supports your account, by giving a reason to signal "niceness"?)

People want to signal that they are both nice and strong. The edited post explains why we want to signal that we are nice.

(2) If we know we don't actually have to get involved, we just go with whoever we actually have the most sympathy for. (This is like your "niceness" explanation, but sans signaling.) Two versions:

I don't like this as much; rather than using evolutionary theory, it attributes the effect to something happening pretty much at random, i.e. there is no survival advantage to sympathizing or to just randomly supporting weak parties. But evolution is not a perfect optimizer, so this could be true. If this explanation were true, we would expect some racially and/or culturally distinct groups to support the stronger party. If this trait is universal in humans, it could be a fixed neutral mutation, but that would strike me as suspicious.

Lastly, I'll admit that I am more sure that I have explained why the results don't contradict each other than why they are the way they are. I would not be that surprised if Yvain's post had lied about the underdog bias.

comment by gjm · 2009-04-07T00:30:15.542Z · LW(p) · GW(p)

Is the near/far distinction actually Robin Hanson's? In his OB post about this, he links to an article in Science, but unfortunately it's behind a paywall so it's hard to tell how Robin's comments divide up into (1) stuff clearly present in the original article, (2) Robin's interpretation of stuff in the article, and (3) original contributions from Robin.

R.H., if you're reading this, would you care to comment?