Inefficient Doesn’t Mean Indifferent

post by Zvi · 2018-04-29T11:30:01.467Z · LW · GW · 17 comments

Many people, including Bryan Caplan and Robin Hanson, use the following form of argument a lot. It could be considered the central principle of the (excellent) The Elephant in the Brain. It goes something like:

  1. People say they want X, and they do Y to get it.
  2. If people did C, they would get X, and C is cheap!
  3. Therefore, people really value X at less than the price of C, so they don’t really care much about X.

There’s something very perverse going on here. We’re using people trying to get X in an inefficient way as evidence they don’t care about X, rather than as evidence that people aren’t efficient.

The trick is, there are a lot of assumptions hidden in the above logic. In practice, they rarely hold outside of simple cases (e.g. consumption goods).

The motivating example was Bryan Caplan using this one in The Case Against Education:

  1. People say they want smart employees, and look at school records to get them.
  2. If people gave out IQ tests, they would get smart employees, and testing is cheap!
  3. Therefore, people don’t really value smart employees.

In that case, I agree. Employers (most often) don’t want smart employees beyond a threshold requirement. But local validity is vital [LW · GW], and you can’t use an argument like this just because its conclusion happens to be right in a particular case.

There are lots of reasons why one might not want to do C.

As a minimal first step, people have to believe that strategy C would work. A recent example of Robin Hanson using this technique, one that violates this requirement, comes from How Best Help Distant Future? and could be summarized this way:

  1. People say they want to help the future, and lobby for policies they think help.
  2. If people saved money to help the far future, which they almost never do, they could help more, and since you get real returns from it, it’s really cheap! (The compounding arithmetic is sketched just below.)
  3. Therefore, people don’t much care about the far future.
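(As a rough, hedged sketch of the arithmetic behind step 2: the snippet below just compounds a dollar at an assumed 2% real return, a number chosen for illustration rather than taken from Hanson’s post.)

    # Rough sketch of the "saving is cheap" arithmetic. The 2% real return and the
    # time horizons are assumed numbers for illustration, not figures from the post.
    def future_value(amount_now, real_return, years):
        """Value of savings after compounding at a constant real return."""
        return amount_now * (1 + real_return) ** years

    for years in (50, 100, 200):
        print(years, round(future_value(1.0, 0.02, years), 1))
    # 50 -> ~2.7x, 100 -> ~7.2x, 200 -> ~52.5x

Under those assumptions a dollar saved now buys several dollars of future help, which is the sense in which step 2 calls it cheap; the disagreement that follows is mostly about whether those assumptions hold.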

In that case, I strongly disagree. People rightfully do not have faith that saving money now to help the far future will result in the far future being helped. Perhaps it would, but there are a lot of assumptions that case relies upon, many of which most folks disagree with – about when money will have how much impact (especially if you expect a singularity to happen), about what you can expect real returns to be, especially in the worlds that need help most, about whether that money is likely to be confiscated, about whether the money if not confiscated would actually get spent in a useful way when the time comes, about what that spend will then crowd out, about whether that savings represents the creation or saving of real resources, about what virtues and habits such actions cultivate, and so forth.

(I don’t think that saving and investing money to spend in the far future is obviously a good or bad way to impact the far future.)

More generally, human actions accomplish, signal and cost many things. A lot of considerations go into our decisions, including seemingly trivial inconveniences. One should never assume that a given option is open to people, or that they know about it, or that they’re confident it would work, or that they’re confident it wouldn’t have hidden costs, or that it doesn’t carry actual large costs they don’t realize, and so forth.

The argument depends on the assumption that humans are maximizing. They’re not. Humans are not automatically strategic [LW · GW]. The standard reaction to ‘I actually really, really do want to help the far future’ is not to take exactly those actions that maximize far future impact. The standard reaction to ‘I actually really, really care about hiring the smartest employees’ is not to give applicants an IQ test, because that would be mildly socially awkward and carries unknown risks. Because people, to a first approximation, don’t do things, and certainly don’t do things that aren’t typically done.

If something is mildly socially awkward or carries unknown risks, or just isn’t the normal thing to do (and thus, on priors, might involve the problems above), it probably won’t happen, even if it would get people something they care a lot about.

So if I see you not maximizing far future impact, and accuse you of not caring much about the far future, a reasonable response would be that people don’t actually maximize much of anything. Another would be: ‘I care about many other things too, and I’m helping, so get off your damn high horse.’

A very toxic counter-argument to that is to treat all considerations as fungible and translatable to utility or dollars, again assume maximization, and assert this proves you ‘don’t really care’ about X.

An extreme version of this, paraphrasing (possibly uncharitably, I’m not sure) part of a post by Gwern on Denmark:

  1. Denmark helps the people of Greenland via subsidy.
  2. Helping people in Greenland is expensive. Denmark could help many more people if it instead helped other people with that money.
  3. Therefore, Danish people are moral monsters.

This is a general (often implicit, occasionally explicit) argument that seems like a version of the Copenhagen Interpretation of Ethics: If you help anyone anywhere, you are blameworthy, because you could have spent more resources helping, but even more so because you could have spent those resources more effectively. So you’re terrible. You clearly don’t care about helping people – in fact, you are bad and you should feel bad, worse than if you never helped people at all. At least then you wouldn’t be a damned hypocrite.

This threatens to indict everyone for almost every action they take. It is incompatible with civilization, with freedom, and with living life as a human. And it isn’t true. So please, please, stop it.

17 comments

Comments sorted by top scores.

comment by gwern · 2018-04-29T20:15:27.775Z · LW(p) · GW(p)
  An extreme version of this, paraphrasing (possibly uncharitably, I’m not sure) part of a post by Gwern on Denmark:

I would phrase it more as, 'one or two people have tried to argue that subsidizing Greenland is charity; however, this is one of the most inefficient possible charities which does the least good without being outright harmful, and if Denmark really thought this was the best charity for it to do, it is made of either fools or knaves; of course, it is not, because subsidizing Greenland has nothing whatsoever to do with charity and they are making that up out of whole cloth'. Modus tollens, basically.

comment by Dacyn · 2018-04-29T11:58:26.856Z · LW(p) · GW(p)

I think Robin uses the word "want" pretty differently from most people (IIRC he's said explicitly it doesn't necessarily have anything to do with conscious thought processes). You just have to get used to his way of talking. (He also appears to have some strange beliefs about how it is a good thing to give people the things that they "want" in his sense, but I think that argument would have to be had on its own terms.)

Replies from: Zvi
comment by Zvi · 2018-04-29T12:33:08.985Z · LW(p) · GW(p)

I agree that he's using it differently, but I think I'm used to it, and I think all the logic still applies. When economists talk about revealed preference in general I think they're using it mostly the way that Robin is, and I often use it that way as well.

Replies from: Dacyn
comment by Dacyn · 2018-04-29T13:59:35.206Z · LW(p) · GW(p)

Isn't the point of the revealed preference framework to try to view humans as expected-utility-maximizers, even if it's not possible to do this perfectly? So it seems to me you can't object that humans aren't actually expected-utility-maximizers, while remaining in that framework. In particular it seems to me that in the sense of revealed preference, people find the social costs of IQ tests to be higher than their benefits, and the costs of learning more about whether far future investment would be a good idea to be higher than its benefits.

Replies from: Hazard, Zvi
comment by Hazard · 2018-04-29T17:35:08.586Z · LW(p) · GW(p)

Using revealed preferences to treat people as expected-utility maximizers seems to drop some very important information about people.

I'm imagining a multiplayer game that has settled into a bad equilibrium, and there are multiple superior equilibrium points, but they are far away. If we looked at the revealed preferences of all of the actors involved, it would probably look like everyone "prefers" to be in the bad equilibrium.

If you're thinking about how to intervene on this game, the revealed preferences frame results in "No work to be done here, people are all doing what they actually care about." Whereas if you asked the actors what they wanted, you might learn something about superior equilibria that everybody would prefer.
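(For concreteness, a minimal two-player sketch in Python, with an assumed Stag Hunt payoff structure that isn't given in the comment:)

    # Minimal sketch of a bad equilibrium with assumed Stag Hunt payoffs.
    # (stag, stag) Pareto-dominates (hare, hare), but both are equilibria.
    PAYOFF = {
        ("stag", "stag"): 4,
        ("stag", "hare"): 0,  # hunting stag alone fails
        ("hare", "stag"): 3,
        ("hare", "hare"): 3,
    }

    def best_response(other_action):
        """The action maximizing a player's payoff, given the other player's action."""
        return max(("stag", "hare"), key=lambda a: PAYOFF[(a, other_action)])

    # Stuck at (hare, hare), each player's observed best choice is hare, so choices
    # look like a "preference" for hare, even though both do better at (stag, stag).
    assert best_response("hare") == "hare"
    assert best_response("stag") == "stag"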

Replies from: Dacyn
comment by Dacyn · 2018-04-29T22:41:12.757Z · LW(p) · GW(p)

In the revealed preference framework it doesn't look like people "prefer" to be in the bad equilibrium, since no one has the choice between the bad equilibrium and a better equilibrium. The only way the revealed preference framework could compare two different equilibria is by extrapolation: figure out what people value based on the choices they make when they are in control, and then figure out which of the two equilibria is ranked higher according to those revealed values. Of course this may or may not be possible in any given circumstance, just like it may or may not be possible to get good answers by asking people.
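(A minimal sketch in Python of that extrapolation step; the goods, the observed pairwise choices, and the crude scoring rule are all assumptions for illustration:)

    # Infer crude revealed values from pairwise choices, then rank two equilibria
    # (treated as bundles of goods) that no one ever chose between directly.
    observed_choices = [            # (chosen, rejected) pairs made when in control
        ("leisure", "savings"),
        ("leisure", "status"),
        ("status", "savings"),
    ]

    scores = {}
    for chosen, rejected in observed_choices:
        scores[chosen] = scores.get(chosen, 0) + 1   # credit the chosen good
        scores.setdefault(rejected, 0)

    def bundle_score(bundle):
        return sum(scores[good] for good in bundle)

    print(scores)                               # {'leisure': 2, 'savings': 0, 'status': 1}
    print(bundle_score(("status", "savings")))  # 1
    print(bundle_score(("leisure", "status")))  # 3 <- ranked higher by revealed values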

I think the revealed preference frame is more useful if you don't phrase it as "this is what people actually care about" but rather "this is what actually motivates people". People can care about things that they aren't much motivated by, and be motivated by things they don't much care about (e.g. the lotus thread). In that interpretation, I don't think it makes sense to criticize revealed preference for not taking into account all information about what people care about, since that's not what it's trying to measure.

Replies from: Hazard
comment by Hazard · 2018-04-30T23:15:37.547Z · LW(p) · GW(p)

Okay, yeah, using the revealed preference framework doesn't inherently lead to not being able to differentiate between equilibria. In my head, I was comparing seeing the "true payoff matrix" to a revealed preference investigation, when I should have been comparing it to "ask people what the payoff matrix looks like".

There still seem to be several counterproductive ways I can imagine someone claiming "People don't actually care about X", but I no longer think that's specifically a problem of the revealed preference frame.

comment by Zvi · 2018-05-01T20:16:34.121Z · LW(p) · GW(p)

Taken completely and literally, yes, the framework posits that humans are utility maximizers. The problem of course is that they aren't. But that doesn't mean I can't use revealed preference strategies to find out information; it's quite useful - sort of a fake framework, sort of not. And doing so doesn't require me to actually make the mistake of treating people as utility maximizers, especially not once we pop out of the framework. That's a trap.

Noticing that humans have a choice to do X or Y (or to do or not do X), and that they usually do X, is great information, and using it is a useful technique. So when we do what you did here, and word our findings carefully, we find we can extract useful information - they've decided that X is or isn't a good idea. But the difference between your very good wording (the costs of learning more about whether far future investment would be a good idea) and the actual costs of future investment is key here, as is recognizing that the main barrier to IQ tests is social costs rather than direct financial costs. These are mistakes I think Hanson and Caplan do make in their post/book.

Replies from: Dacyn
comment by Dacyn · 2018-05-02T09:00:33.901Z · LW(p) · GW(p)

OK, that makes sense. Though I want to be clear, I don't think it's obvious that most people think the costs of learning more about whether far future investment would be a good idea are higher than its benefits, I think most people just don't think about the issue and aren't motivated to. So the revealed preference framework is useful, but it tells you more about what motivates people than about what they care about.

comment by Adam Zerner (adamzerner) · 2018-05-01T02:33:07.060Z · LW(p) · GW(p)

I suspect that it would be useful to distinguish between "zoomed out" inefficiency, and "zoomed in" inefficiency. When you zoom in and observe the fact that employers don't give out IQ tests, I agree that we shouldn't draw any conclusions. But if you zoom out and observe the fact that IQ tests have been around for 100 years and still aren't being used by employers, I think that really starts to point in the direction of indifference. (I'm not confident at all that IQ tests are a good example, but hopefully it communicates the more general point I'm trying to make.)

I say this because in the long run, the Invisible Hand does seem to exist. I.e., in the long run, humans do move towards efficiency. And so if humans aren't moving towards efficiency over a long period of time, it would follow that they might not really care about the thing. I don't think such a long run observation proves anything - there are still other possible explanations - but it does seem to me to point in the direction of indifference pretty hard.

Replies from: Zvi
comment by Zvi · 2018-05-01T20:26:13.516Z · LW(p) · GW(p)

I agree it's a messy example; I used it more because it was the motivating example than because it was the best illustration. This was originally from the post about IQ tests in general and got split out since I realized I had an important general point and didn't want it to be lost.

I do agree that over time, the failure of anyone to do X, where X is an established, known, easy-to-replicate procedure that clearly does what it's meant to do at a reasonable price, is much stronger evidence than local failure to do X. No question it points towards indifference! But for more discussion of the details in this case, which explains my full opinion on the case, see the original post [LW · GW] this split off from.

Replies from: agc
comment by agc · 2018-05-02T14:55:49.080Z · LW(p) · GW(p)

Maybe this is a good example.

  If we were willing to admit the students who would benefit most by objective criteria like income or career success, we could use prediction markets. The complete lack of interest in this suggests that isn’t really the agenda.

Robin is saying that lack of interest in using prediction markets for student admissions shows that universities don't actually want the best students. I can think of many other possible explanations:

  • They have never heard of prediction markets. It's a fairly obscure concept.
  • They don't believe it would work.
  • They have moral issues with letting strangers' bets decide if someone gets admitted, or they think it could be manipulated.
  • There are dozens of other strange methods that someone thinks would solve all their problems.
comment by Dagon · 2018-04-30T19:48:58.472Z · LW(p) · GW(p)

Since "caring" is relative within each individual, "don't care about X" can losslessly transform to "Care about non-X more than is claimed". Many people claim to care a lot about intelligence or knowledge from education. Since there are cheaper (and more effective) ways to test for that, it does seem very clear that there are other factors in education which people care about. Not "don't care about intelligence or knowledge", but "don't care as much as claimed about intelligence or knowledge".

In fact, people care a lot about conformity and willingness to participate in ritual. They just don't like to say so out loud.

Replies from: Zvi
comment by Zvi · 2018-05-01T20:22:51.385Z · LW(p) · GW(p)

I don't think that's how humans work. We don't have a fixed pool of caring like in this video from my childhood; instead, we have things that we care about. Some people care more, some less. There's certainly some interaction here: if I start caring about Y a lot I am likely to care about X less, and certainly likely to devote fewer resources to X. But I think it's closer to floating sum than zero sum, and you can't interchange them in this way.

Also, note that 'don't care about X' is a much stronger claim than 'care about non-X more than claimed,' and that in realistic contexts they do not mean the same thing. People often try to make this transformation to make people look worse (e.g. 'I care about the rule of law' becomes 'you don't care about the victims'), and it's very bad.

comment by SatvikBeri · 2019-10-18T14:55:31.766Z · LW(p) · GW(p)

Nit: giving IQ tests is not super cheap, because it puts companies at a nebulous risk of being sued for disparate impact (see e.g. https://en.wikipedia.org/wiki/Griggs_v._Duke_Power_Co.).

I agree with all the major conclusions though.

Replies from: Vaniver
comment by Vaniver · 2019-10-18T18:17:25.639Z · LW(p) · GW(p)

For a long time, this was my impression as well, but Caplan claims the evidence doesn't bear this out. And many organizations do use IQ testing successfully; the military is a prime example.

comment by Ben Pace (Benito) · 2018-04-29T11:47:02.563Z · LW(p) · GW(p)

Promoted to frontpage.