Relative and Absolute Benefit
post by jefftk (jkaufman) · 2014-06-18T13:56:19.437Z · LW · GW · Legacy
Someone comes to you claiming to have an intervention that dramatically improves life outcomes. They tell you that all people have some level of X, determined by a mixture of genetics and environment, and they show you evidence that their intervention is cheap and effective at increasing X, and separately that higher levels of X are correlated with greater life success. You're skeptical, so they show you there's a strong dose-response effect, but you're still not happy about the correlational nature of their evidence. So they go off and do a randomized controlled trial, applying their intervention to randomly chosen individuals and comparing their outcomes with those of people who don't receive the intervention. The improvement still shows up, and with a large effect size!
What's missing is evidence that the intervention helps people in an absolute sense, instead of simply by improving their relative social position. For example, say X is height, we're just looking at men, and we're getting them to wear lifts in their shoes. While taller men do earn more, and are generally more successful along various metrics, we don't think this is because being taller makes you smarter, healthier, or more conscientious. If all people became 1" taller it would be very inconvenient but we wouldn't expect this to affect people's life outcomes very much.
Attributes like X are also weird because they put parents in a strange position. If you're mostly but not completely altruistic you might want more X for your own child but think that campaigns to give X to other people's children are not useful: if X is just about relative position, then for every person you "bring up" that way other people are slightly brought down, balancing the overall outcome to basically no effect.
College degrees, especially in fields that don't directly teach skills in demand by employers, may belong in this category. Employers hire college graduates over high school graduates, and this hiring advantage does remain as you increase college enrollment, but if another 10% of people get English degrees, is everyone better off in aggregate?
Some interventions are pretty clearly not in this category. If an operation saves someone's life or cures them of something painful they're pretty clearly better off. The difference here is that we have an absolute measurement of wellbeing, in this case "how healthy are you?", and we can see it remaining constant in the control group. Unfortunately, this isn't always enough: if our intervention was "take $1 from 10k randomly selected people and give that $10k to one randomly selected person" we would see that the person gaining $10k was better off, but we wouldn't be able to see any harm to the other people because the change in their situation was too small to measure with our tests. Because each additional dollar is less valuable, however, we would expect this transfer to make the group as a whole worse off. So "absolute measures of wellbeing apparently remaining constant in the control group" isn't enough.
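To make the diminishing-returns intuition concrete, here's a minimal sketch in Python. It assumes a logarithmic utility function and a starting wealth of $50,000 for everyone; both are illustrative assumptions, not figures from the post.

```python
import math

def log_utility(wealth):
    """Diminishing-returns utility of wealth (illustrative assumption)."""
    return math.log(wealth)

starting_wealth = 50_000   # hypothetical starting wealth for every person
n_donors = 10_000
transfer_per_donor = 1

# Utility gained by the one recipient of the pooled $10k
gain = log_utility(starting_wealth + n_donors * transfer_per_donor) - log_utility(starting_wealth)

# Utility lost across the 10k people who each give up $1
loss = n_donors * (log_utility(starting_wealth) - log_utility(starting_wealth - transfer_per_donor))

print(f"recipient's gain:   {gain:.4f}")   # ~0.182
print(f"donors' total loss: {loss:.4f}")   # ~0.200
print(f"net change:         {gain - loss:.4f}")  # negative: the group as a whole is worse off
```

Under these assumptions the recipient gains about 0.18 units of utility while the donors collectively lose about 0.20, so the group comes out slightly behind even though no individual donor's loss is large enough to notice.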
How do we get around this? While we can't run an experiment with half the world's people as "treatment" and the other half as "control", one thing we can do is look at isolated groups where we really can apply the intervention to a large fraction of the people. Take the height example. If instead we were to randomly make half the people in a treatment population 1/2" taller, and that treatment population was embedded in a much larger society, the positional losses would be spread across the whole society and too diffuse to measure. But if we limit ourselves to one small community with limited churn and apply the treatment to half the people, then if (as I expect) the benefit is entirely relative we should see the control group do worse on absolute measurements of wellbeing.
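As a rough illustration of why the control group in a small, closed community would show a measurable loss, here's a toy simulation. It assumes success is determined purely by height rank within the community, so any benefit is positional by construction; the community size, height distribution, and treatment fraction are all made-up parameters.

```python
import random
import statistics

random.seed(0)

# Toy model: "success" depends only on height *rank* within the community,
# so any benefit of extra height is positional by construction.
community = [random.gauss(175, 7) for _ in range(200)]  # heights in cm (illustrative)

def success_scores(heights):
    # Success is proportional to rank; the total across the group is fixed.
    order = sorted(range(len(heights)), key=lambda i: heights[i])
    scores = [0.0] * len(heights)
    for rank, i in enumerate(order):
        scores[i] = rank / (len(heights) - 1)
    return scores

before = success_scores(community)

# Treat a random half of the community: make them 1/2" (1.27 cm) taller.
treated = set(random.sample(range(len(community)), len(community) // 2))
after = success_scores([h + 1.27 if i in treated else h for i, h in enumerate(community)])

treated_change = statistics.mean(after[i] - before[i] for i in treated)
control_change = statistics.mean(after[i] - before[i]
                                 for i in range(len(community)) if i not in treated)

print(f"treated group's mean change: {treated_change:+.4f}")  # positive
print(f"control group's mean change: {control_change:+.4f}")  # negative, offsetting
```

Because the total amount of "success" is fixed in this model, whatever the treated half gains the control half must lose, which is exactly the signature the closed-community experiment is looking for.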
Another way to avoid interventions that mostly give positional benefit is to keep mechanisms in mind. Height increase has no plausible mechanism for improving absolute wellbeing, while focused skills training does. This isn't ideal, because you can have non-intuitive mechanisms or miss the main way an intervention leads to your measured outcome, but it can still catch some of these.
What else can we do?
I also posted this on my blog.
14 comments
Comments sorted by top scores.
comment by benkuhn · 2014-06-19T02:55:30.860Z · LW(p) · GW(p)
I think you're being a little uncharitable to people who promote interventions that seem positional (e.g. greater educational attainment). It may be true that college degrees are purely for signalling and hence positional goods, but:
(a) it improves aggregate welfare for people to be able to send costly signals, so we shouldn't just get rid of college degrees;
(b) if an intervention improves college graduation rate, it (hopefully) is not doing this by handing out free diplomas, but rather by effecting some change in the subjects that makes them more capable of sending the costly signal of graduating from college, which is an absolute improvement.
Similarly, while height increase has no plausible mechanism for improving absolute wellbeing, some mechanisms for improving absolute wellbeing are measured using height as a proxy (most prominently nutritional status in developing countries).
It should definitely be a warning sign if an intervention seems only to promote a positional good, but it's more complex than it seems to determine what's actually positional.
Replies from: jkaufman
↑ comment by jefftk (jkaufman) · 2014-06-19T15:21:26.506Z · LW(p) · GW(p)
"effecting some change in the subjects that makes them more capable of sending the costly signal of graduating from college, which is an absolute improvement"
It depends. Consider a government subsidy for college tuition. This increases the number of people who go to and then graduate college, but it also makes the signal less costly.
But I basically agree with "it's more complex than it seems to determine what's actually positional". The difficulty of determining how much of an observed benefit is absolute vs positional is a lot of what I'm talking about here.
comment by sixes_and_sevens · 2014-06-18T15:17:31.582Z · LW(p) · GW(p)
By curious coincidence I've been reading about positional goods elsewhere this week, and thinking along similar lines.
Are there any positional goods that aren't reasonably well-captured as signalling? There are various conditions that need to be in place in order for a signal to be of value, so if positional goods are principally a case of signalling, such conditions could offer some indication as to whether an intervention provides positional or intrinsic value.
ETA: I've just had a flip through the Wikipedia article for positional goods, and the "see also" section includes a link to Narcissistic Personality Disorder. There is no explanation on the talk page.
Replies from: Lumifer, jkaufman
↑ comment by Lumifer · 2014-06-18T16:22:33.194Z · LW(p) · GW(p)
Are there any positional goods that aren't reasonably well-captured as signalling?
Depends on whether you count signaling to yourself as signaling.
There are cases of rich people buying stolen art (and other collectables) that they would never be able to publicly admit owning. But presumably the ownership of that rare and hidden art piece warms the cockles of their hearts...
Replies from: jkaufman, sixes_and_sevens
↑ comment by jefftk (jkaufman) · 2014-06-18T16:42:50.905Z · LW(p) · GW(p)
Can't they show off their stolen goods to particular other people, in confidence, indicating something like "I am so rich and ruthless that I have this amazing piece of stolen artwork, and I trust you enough to let you in on this secret even though you could destroy me with it"?
↑ comment by sixes_and_sevens · 2014-06-18T16:48:21.703Z · LW(p) · GW(p)
That's an interesting case. I can readily appreciate the idea of secretly owning a great work of art for my own private consumption, but that seems to have a fair amount of intrinsic value as well.
↑ comment by jefftk (jkaufman) · 2014-06-18T16:44:31.834Z · LW(p) · GW(p)
Consider the "take $1 from each of 10k people at random and give it all to another person chosen at random" example. The benefit there seems to be relative/positional but it's not a case of signaling.
Replies from: sixes_and_sevens
↑ comment by sixes_and_sevens · 2014-06-18T17:00:40.234Z · LW(p) · GW(p)
The good in question is presumably money in this case, and I can see an (abstruse) argument for money-as-signalling.
Replies from: Luke_A_Somers
↑ comment by Luke_A_Somers · 2014-06-19T13:31:07.057Z · LW(p) · GW(p)
Then substitute something directly useful.
Replies from: sixes_and_sevens
↑ comment by sixes_and_sevens · 2014-06-20T09:20:38.453Z · LW(p) · GW(p)
While I'm not in a position to iron this out right now, this line of reasoning suggests any good is a positional good if it's redistributed, and I'm pretty sure that doesn't fly.
Replies from: Luke_A_Somers
↑ comment by Luke_A_Somers · 2014-06-20T14:33:29.309Z · LW(p) · GW(p)
Agreed.
comment by somervta · 2014-06-19T03:27:25.657Z · LW(p) · GW(p)
Because each additional dollar is less valuable, however, we would expect this transfer to make the group as a whole worse off.
grumble grumble only if the people the money went from were drawn from the same or similar distribution as the person it goes to.
Replies from: jkaufman
↑ comment by jefftk (jkaufman) · 2014-06-19T15:15:10.237Z · LW(p) · GW(p)
only if the people the money went from were drawn from the same or similar distribution as the person it goes to
I wrote "take $1 from 10k randomly selected people and give that $10k to one randomly selected person". Reading it back, this implies you use the same distribution for both selections, but it sounds like that's not how you read it? How would you phrase this idea differently?
Replies from: somervta
↑ comment by somervta · 2014-06-19T19:44:36.474Z · LW(p) · GW(p)
Ooops, I actually didn't mean to post that! Usually when I'm making an obvious criticism, after I write it I go back and double-check that I haven't missed or misinterpreted something, and I noticed that and meant to delete the unposted comment. I guess I must have hit enter at some point.