The point is that these are two very different kinds of valuable advisers, but the distinction is often missed.
And while I do think that in real life there is something of a dichotomy between "people whose final judgment I trust on questions like X" and "people whose final judgment I don't trust but whom I still want to hear from," I think a similar point could be made with more than two categories.
I think other commenters have had a similar idea, but here's one way to say it.
It seems to me that the proposition you are attacking is not exactly the one you think you are attacking. I think you think you are attacking the proposition "charitable donations should be directed to the highest-EV use, regardless of the uncertainty around the EV, as long as the EV estimate is unbiased," when the proposition you are really advancing is "the analysis generating some of these very uncertain but very high EV estimates is flawed, and the true EVs are in fact a great deal lower than those people claim."
The question of whether we should always be risk neutral with respect to the number of lives saved by charity is an interesting and difficult one (and one I would be interested to know what Holden thinks about). But this post is not about that difficult philosophical question; it is simply about the technical question of whether the EV estimates that various people are relying on are any good.
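To make the distinction concrete with an illustration of my own (the numbers are mine, not the post's): suppose Charity A saves 100 lives with certainty, while Charity B saves 10,000 lives with probability 0.01, so both have an expected value of 10,000 x 0.01 = 100 lives. Whether a donor should be indifferent between the two is the philosophical question about risk neutrality. The post's question is the prior, technical one: whether Charity B's "10,000 lives with probability 0.01" figure was estimated correctly in the first place.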
I take your point to be that I have taken something that is really a matter of degree ("how much weight should I give to the opinion of this or that person?") and turned it into something dichotomous ("is this person a good guide for me or not?"). Here is my response.
Let's start by imagining that you can only hear the final opinions of the different advisers, and cannot discuss with them their arguments or evidence relating to the different elements of the question. In that case, the final opinion of anyone who has a valuable insight into any element of the question ought, all else equal, to move the needle for you at least a little bit. But it may be only a very little bit, and it can perfectly well be zero. In the Chicago School example, there are categories of firm conduct that Chicago people think should basically always be permitted, but that people like me think should sometimes be permitted and sometimes not. In those cases, merely knowing the final opinion of a Chicago person about a particular variety of conduct helps me not at all. Another way to get from "X's opinion should move you a very little bit" to "X's opinion should move you not at all" is by throwing in a small fixed cost of the consultation.
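To make the fixed-cost point a little more precise (a toy formalization of my own, not anything from the original discussion): let V_X be the expected improvement in my decision from hearing X's final opinion, and let c > 0 be the cost of the consultation. Then it is worth consulting X only if

V_X > c,

so even advisers with genuinely positive V_X get screened out entirely whenever their opinion would move me by less than the cost of asking.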
But regardless of whether the adviser's final opinion moves you not at all or a little bit, the point is that there can still be a lot of value in hearing their arguments/evidence on particular points. If there are things that firms do that Chicago people think should always be permitted, but that people like me think should sometimes be permitted and sometimes not, the arguments/evidence of Chicago people on specific elements of the question might be of great value in helping me figure out which ones are which. And so the original point remains: the existence of valuable advisers who are not good guides.
I don't think this phenomenon is rare at all. I think in economics alone there are many insights in many fields that are now regarded as extremely valuable for clarifying elements of big questions, but whose originators combined those insights with a great deal of wrongness and arrived at wrong final conclusions, and who were nevertheless heeded on the final questions because of their insight into the specific elements.
Nice. I particularly liked the dig at Rooseveltian machismo. Of course it's possible to go too far in the opposite direction too. Some related thoughts here.
There is a neat paper on this by Feltovich, Harbaugh, and To called "Too Cool for School? Signaling and Countersignaling."
A Hassidic Jew was willing to eat pumpkin bread baked in your kitchen?
I see ciphergoth already made my first point. Sorry about that.
I vote for malice with regard to Katrina. It's not that there were political gains to be had from letting that particular disaster happen, and that the then-government therefore decided to let it happen out of malice. It's that their generally malicious political ideology was on balance a politically very successful one, but had as one of its weaknesses that it sometimes led to this kind of politically harmful disaster.
Landsburg's argument is sound, and I mostly follow it and occasionally try to sell others on it. But I can think of one exception: when the political power of an organization that you support depends on the number of members it has. So for example I pay membership dues to one organization that is not my main charity because I want them to be able to claim one more member.
And there is one place where I think Landsburg gets it plain wrong. He says:
To please diverse stockholders, corporations tend to diversify their giving, often through the United Way. For individuals, by contrast, it really is quite impossible to justify that level of diversity. Surely among the hundreds of United Way recipients there are some you consider worthier than others. That means you can target your charity more effectively by bypassing the United Way.
But if you think that the United Way comes tolerably close to sharing your values, but you think that they have better information than you do about relative needs and competencies across different organizations, then it makes perfect sense to donate to them, doesn't it?
It's a bit off topic, but I've been meaning to ask Eliezer this for a while. I think I get the basic logic behind "FOOM." If a brain as smart as ours could evolve from pretty much nothing, then it seems likely that sooner or later (and I have not the slightest idea whether it will be sooner or later) we should be able to use the smarts we have to design a mind that is smarter. And if we can make a mind smarter than ours, it seems likely that that mind should be able to make one smarter than it, and so on. And this process should be pretty explosive, at least for a while, so that in pretty short order the smartest minds around will be way more than a match for us, which is why it's so important that it be baked into the process from the beginning that it proceed in a way that we will regard as friendly.
But it seems to me that this qualitative argument works equally well whether "explosive" means going from a box in someone's basement to Unchallenged Lord and Master of the Universe Forever and Ever before anyone else knows about it or can do anything about it, or it means different people/groups innovating and borrowing/stealing each others' innovations over a period of many years, at the end of which where we end up depends only a little on the contribution of the people who started the ball rolling. And if that's right, doesn't it follow that what really matters is not the correctness of the FOOM argument (which seems correct to me), but rather the estimate of how big the exponent in the exponential growth is likely to be? Is this (and much of your disagreement with Robin Hanson) just a disagreement over an estimate of a number? Does that disagreement stand any chance of being anywhere near resolved with available evidence?
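To put the question about the exponent slightly more formally (a toy framing of mine, not anything Eliezer has endorsed): if capability C grows roughly as

dC/dt = kC, so that C(t) = C_0 e^(kt),

then the doubling time is (ln 2)/k. A k corresponding to doubling in hours or days gives the basement-to-Master-of-the-Universe story; a k corresponding to doubling in years gives the many-groups-borrowing-innovations story. The qualitative argument seems compatible with either.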
I don't think I understand. The hypothesis says that we evolved to find human babies cute because people who find babies cute are more likely to take care of them, and then they'll reproduce and propagate those genes. I guess there's no strong reason why that necessarily means we have to find human babies cuter than anything else: if the "appreciating cuteness" faculty happened for some random reason to glom extra hard onto bunnies, there probably wouldn't be any very strong selective pressure against it (though as Alicorn points out, there would probably be some slight pressure). Is that what you mean?
What does the hypothesis predict?
There is a very nice and closely related paper with the excellent title "Too Cool for School? Signaling and Countersignaling," summarized here:
http://www.zhongwen.com/cs/index.html
Somebody I know once conjectured that this story might not work if there were a continuum of types instead of discrete types. I don't know if anyone has ever worked that out.
In the original post, I said, regarding the sole conversation on the subject that I can recall:
"I asked Robin Hanson about this once at lunch a few years ago, and we had an interesting chat about it, along with some other George Mason folks. I won't try to summarize everyone's positions here (I'd feel obligated to ask their permission before I'd even try), but suffice it to say that I don't think he foreswore the Hayekian idea entirely as an argument in favor of prediction markets."
Then I added:
"And there is a quote by him here that seems to embrace it. In any case, I'd be interested to know what he thinks about it."
That sentence had a link to an article that contained the quote in my comment above. Taken all together, does that really strike you as me "attributing overly strong claims" to you? It sure doesn't strike me that way. I pointed out that you've said some things that seem to lean in a pretty "Hayekian" direction, explicitly acknowledged that I'm not certain to what extent you are making those kinds of claims on behalf of prediction markets (I'm still not), and asked you to clarify. If you think that's a debating tactic, then you and I have very different ideas of what a debating tactic is.
In the absence of decisive empirical evidence, the amount of credence to give to a particular prediction method (whether absolutely or relative to alternative methods) depends in part on what theoretical claims are being made for it, and on how well supported those claims are. The articles that I linked to don't just say that prediction markets do reasonably well in aggregating information, or that they do better than the alternatives. Rather, they make explicit reference to Hayek's famous argument, which, as I understand it, involves a strong claim about incorporating lots of tiny pieces of information held locally by individuals. And without leaning too hard on a single quote, one of those articles seems to have you agreeing:
Hanson has suggested that prediction markets “can be used to aggregate information from any [italics added] given set of participants.”
I think there is a fairly clear question here of just how strong an absolute "Hayekian information aggregation" claim you are making.
The comparison in the post wasn't between prediction market prices and stock market prices, it was between prediction market prices and ordinary prices for goods and services. I hadn't thought about it before, but it seems to me that the points made in the post about prediction markets apply to stock markets as well.
Note that this isn't an argument that prediction markets (or normal stock markets) don't work. It's an argument about whether they have this particular "Hayekian" virtue.
Agreed
The basic point here is a good one, and it's obviously right as it applies to evolution and very likely to AGW as well, though I know very little about that and rely entirely on the fact of the scientific consensus in forming my opinion. But at the same time it is important to keep in mind that just because someone has worked hard and offered you the best evidence that they could be reasonably expected to muster under current circumstances, that doesn't necessarily mean that they have come anywhere near proving the case.
The post argues that the Hayekian argument works a lot less well for prediction markets than it does for regular goods and services markets. You raise a good question as to how well it works even there. I don't have too much to say about that: I think it's fair to say that the argument is something most economists are vaguely aware of, but not something we spend a lot of time thinking about. For what it's worth, the idea always struck me as kind of sensible, but not as remotely justifying the radical free market conclusions that some people have wanted to draw from it.
Just for the record, this was not an instance of intentionally burning karma on a post that I knew people weren't going to like. I'm not saying I would never do that, but I have never done it and have no immediate plans to.
Good point on the Americocentrism. I'll keep that in mind.
I appreciate your appreciation of my attempt to make my point better in the above comment. I don't know whether it caused Eliezer or any other readers to agree that the point belongs on LW, but either way I think it improved matters at least somewhat. But I don't think I agree that it would be a good thing if everybody started editing top-level posts, because then the comments made before the revisions would no longer make sense. I also suspect that a bunch of edits made in response to comments would often make the final product worse, not better. I think the way to handle situations like this is in the comments, as happened here; hopefully those comments will guide future top-level posts so that they have fewer problems to begin with.
Part of the reason for having a Constitution in the first place is supposed to be that there are some things that are so fundamental that they ought not be subjected to ordinary democratic decision-making. If you don't buy that premise, then we don't need a Constitution at all (or at least a Bill of Rights). If you do buy that premise, then the question becomes whether and when that set of things that is above the ordinary law ought to change over time. One defensible position is that it ought never to change unless the change can make it through the very difficult amendment process. But the way that position is usually advanced is by incorrectly claiming that the only alternative to it is judicial tyranny and then daring your opponent to come out on the side of the tyrants, and that is not defensible. And that was the main point of the post.
The "Wise Elders" point is merely that if you take a position other than the "no change except for amendments" one and so allow for some additional (though still limited!) changes over time, then the question becomes who should have the power to make those changes. Presumably they should be people who are in some sense above the political fray, because by assumption we are talking about things that should not be left to ordinary politics. And I can see no reason why the people who are given that power ought to be primarily legal experts.
It's not totally clear to me how narrow or broad the ambit of LW posts should be in terms of how far they can stray from core questions of rationality. This post seems no farther from that core than other posts that appear here, but then maybe some of those shouldn't be here either.
In any case, the thing that I think gives this an LW-type flavor is that it's an example of how you can use a certain kind of argument to bully your opponents. One side in the argument takes a legitimate value that no one can dispute (unlimited power by judges is bad) and then, by what pretty much amounts to a rhetorical trick, sets things up so that anyone who attempts to reasonably trade that value off against other values stands accused of abandoning the value entirely. This leads to a situation where the guy on the other side of the argument comes out sounding unpersuasive, but only because he's got to conduct the argument within the unfavorable constraints set up by the first guy.
Maybe you still don't buy this as being close enough to core LW topics to belong here, or maybe I didn't make the link explicit enough in the post.
Very nice.
Economists are very fond of arguments of the following form:
"If Thing X, which you think is bad in a particular market, were really bad, some firm would have an incentive to enter, offer a product without Thing X, and get tons of customers. Therefore, Thing X must not really be bad."
And it is a powerful argument, but not nearly as powerful as it's sometimes made out to be. The economics literature is full of stories in which bad things happen in stable equilibria, and I suspect there are many more such stories that have not yet been written down. On the credit card issue in particular, there is the Gabaix & Laibson (2006) paper that I cited in the earlier post: the practices are bad, and yet no entrant has an incentive to offer a product that doesn't use them.
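A stripped-down version of their logic, in my own notation (the paper's model is richer): each firm sells a base good at price p and a shrouded add-on at price a, and a fraction m of consumers are myopes who end up paying the add-on, while the rest are sophisticates who avoid it. Competition drives profit per customer to zero:

(p - c) + m a = 0, so p = c - m a < c.

The base good is sold below cost c, subsidized by add-on revenue. An entrant who advertises "no hidden fees" must charge at least c, and educating the myopes doesn't help it: an educated myope becomes a sophisticate, who prefers to buy the below-cost base good from a shrouding firm and dodge the add-on. So nobody has an incentive to unshroud, and the exploitative equilibrium is stable.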
The models I have in mind are the standard "lemons" adverse selection model and other models in which one side doesn't know something important about the other side's attributes, for example a government purchaser who doesn't know if a particular contractor has high costs or low costs. In the lemons model, the market unravels partially or entirely. In the other models, the agent that knows its attributes can earn some "information rents," which are necessary to get the low-cost agents to reveal the fact that they are low cost. In these models, the uninformed agent does not simply proceed as if it didn't know it was uninformed, the equilibrium outcome is a product of the fact that both sides know that one side is uninformed. When these models apply, remedying the information asymmetry solves the problem directly. I don't see how they apply to credit card contracts and other similar examples. Are you saying that they do?
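For concreteness, here is the textbook version of the unraveling I have in mind (a standard example, with my numbers): car quality v is uniform on [0,1]; the seller knows v, the buyer doesn't, and the buyer values a car of quality v at (3/2)v. At any price p, only sellers with v <= p are willing to sell, so the average quality on offer is p/2 and the buyer's expected value is

(3/2)(p/2) = (3/4)p < p.

The buyer refuses to buy at any p > 0, and the market unravels completely. Note how both sides' awareness of the asymmetry drives the result, and how directly remedying the asymmetry (say, with a credible quality inspection) would fix it.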
There's a nice paper by Bengt Holmstrom (Review of Economic Studies, 1999) that has a story about inefficiently high (but still voluntary) work effort in the absence of a policy that limits effort, such as a maximum-hours restriction. The idea is that no one worker can cut back effort to the efficient level without appearing to be of low ability, and this is true even though employers know that all the workers aren't as good as they appear (their high output is largely due to the fact that they work too hard, not to how good they are).
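A bare-bones version of the mechanism, in my own notation (Holmstrom's model is dynamic and richer): observed output is

y = a + e,

where a is the worker's unobserved ability and e is effort. If the market expects effort e*, it estimates ability as y - e* = a + (e - e*). A worker who cuts effort below e* is indistinguishable from a low-ability worker, so in equilibrium everyone supplies e* even when e* exceeds the efficient level. The market isn't fooled on average, but no individual can deviate without being penalized, which is exactly the kind of stable-but-bad equilibrium a maximum-hours rule can break.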
You're right that I made it sound like it was the restaurant itself admitting the trickery, which it wasn't. My mistake. And I certainly am not suggesting that the government should regulate the placement of prices on menus. I linked that article simply as a nice illustration of the fact that sellers are always and forever manipulating buyers, rather than simply informing them. Even something as straightforward as a menu is seen not simply as an opportunity to let patrons know what is available and at what price, but as an opportunity to push their buttons.
This is the primary problem with paternalism, and a good reason why it should be limited.
I think clarifying this disagreement is worth a separate post, which I'll write up in the next few days.
But ignorance in rational-agent models of asymmetric information doesn't cause people to be tricked, so it can't be ignorance of that sort.
It seems perfectly plausible to me that people understand that the deck is stacked against them, while at the same time not knowing exactly how to protect themselves and so falling for some of the tricks, and would be perfectly happy to have a government they trusted simply remove the bad stuff from the menu. I know that's how I see it.
If credit markets worked the way they were supposed to, the terms on which you could get credit would indeed depend on objective measures of your credit-worthiness. Any adverse event would appropriately cause the credit markets to downgrade their opinion of you, and worsen the terms on which you get credit. But I have never heard anyone seriously suggest that the data bear out a conclusion that being a day late with a payment indicates that you are a credit risk so massive that a 29.99% interest rate is appropriate.
As you know, there are limited instances in which I would support the government protecting people against their will. But I suspect that a lot of legal protections against predatory conduct are popular, even among the people whose conduct is restricted. One possible reason for this is that they don't understand that the policy restricts their freedom (but then they also don't understand that the lack of the policy will get them exploited, and I'd rather have them helped for reasons they don't understand than harmed for reasons they don't understand).
But another possible reason is that they correctly understand that right now credit cards without those exploitative terms are mostly unavailable (the primary exceptions that I know of are employee credit unions). They "voluntarily" choose them because there's no way to get a credit card otherwise. They might perfectly rationally support regulations that will improve the menu of options from which they will make their voluntary choice.
Either those terms represent an efficient contract or they don't. The most obvious way that they wouldn't would be if they tricked you, and as a practical matter that is where most of the action is. Originally it sounded like you agreed that you were being tricked also, but in a milder sense. If you positively prefer those terms, then in your case they are efficient. But I rather doubt that these are really terms that you would have chosen.
As for the issue of competition, that's not how it works. When Laibson presented the paper I referred to in the main post, my recollection is that he said that, while the credit card industry is quite competitive, the way that competition happens is that the companies use expensive promotions to identify myopic consumers. So competition doesn't benefit consumers, and in the end it doesn't even benefit the credit card companies! It's pure social waste.
Robin, as you know from our paternalism debate, I do support a limited paternalistic agenda in what I would describe as "well functioning" societies. But as a practical matter, it is often simply not the case that the kinds of prohibitions that I'm talking about would have to be shoved down the throats of the people they are intended to protect. If you had to guess, would you think that the Fed's recent action referred to in the post would be popular or unpopular with the general public? Or with credit card customers?
You're right that not every case is equally severe. But even in your relatively mild case, it's still a "gotcha." The term is there because the credit card company knows that people will sometimes not understand or will forget. It's got nothing to do with efficiently matching people who want to lend money with people who want to borrow money (which is one of the primary legitimate functions of credit markets); it's got to do with figuring out how best to screw people over. Since there is no reason to believe that that contract got to be that way for any efficient purpose, there's no reason for any great reluctance to interfere with it through policy.
The possibility that well-intentioned regulations will be evaded is indeed a problem, and in some cases it limits what is possible. But other regulations are relatively clear and well-enforced, particularly when they have the necessary political support.
I'm not aware of any general standards for distinguishing between abusive contracts and beneficial ones, and I'm not sure there could be any. But the question "would anyone who really understood what was going on ever want this contract?" must be a relevant one. You could argue about how much humility should be applied: after all, just because I can't see any reason why a rational person would sign this contract doesn't mean there isn't one. So as is often the case, the optimal policy is not obvious. But make no mistake: the libertarian policy will result in a whole bunch of vulnerable people (almost none of whom will be libertarians, by the way) getting screwed over and over and over again. You could (barely) argue that the libertarian policy has benefits that outweigh these costs, but there is no denying that the costs are real and big.
This story wasn't the main point of the post, but its purpose was to illustrate that it's pretty easy to imagine something in between purely voluntary and coerced at gunpoint. People can be manipulated into "voluntarily" doing things that hurt them, even if they are not tricked or lied to. This merely raises the possibility of beneficial paternalistic intervention; it certainly doesn't establish that the benefits of such intervention outweigh the costs. Here are some old posts on that subject, for whatever interest it may have.
http://www.overcomingbias.com/2007/03/heres_my_openin.html http://www.overcomingbias.com/2007/03/disagreement_ca_2.html
There has been a lot of discussion about the merits of humanitarian law, some of it very good. But there has been no discussion of the more LW-ish point of the post, which is that anyone who says that humanitarian law should be applied completely independently of the context of the war would seem to be subject to a pretty powerful counter-argument (a moment's thought demonstrates that it can't really be true that which actions in war are or are not OK is completely independent of the context), and yet there's good reason to give this perfectly valid argument very little weight.
There are problems with a norm that says killing foreign leaders is OK, but wedrifid's point also has merit.
If there were an external force that was decent enough and powerful enough, then it would make sense to have that force actually decide which side is right on the merits, and that would pretty much be the end of war. That would be great. Whether it is possible, or whether it is likely to become possible, are very important questions. But right now no such force exists. Humanitarian law serves the function of limiting the human suffering caused by war, and as such is very valuable.
It seems to me that the problem is self-contained, and doesn't depend on past purchases. Am I missing something?
Well put.
Nice. Steven Landsburg once wrote something similar (I think in The Armchair Economist) about how it is wasteful to bend down to pick up a $50 bill off the sidewalk, because the money is a pure transfer and the effort of bending down is a pure social loss.
The problem is that I pretty clearly don't value a random person having the $4 as much as me having it, as evidenced by the fact that a lot of my income I keep, and the part of it that I give away (which is pretty substantial but the Peter Singer argument is always working on me that it should be higher) I give away to really poor people abroad, not to random Americans.
It seems to me that you've learned only how much what's actually in your wallet deviated from your best guess about what was in there. If non-wallet wealth effects can be safely ignored, then learning of an $X deviation can be taken as a shock of that size to your total wealth.
They take $1 bills but usually nothing larger than that. Where are you?
I think this works if the $1 represents an extra bill that you didn't know was in there, but not if you looked more closely at a specific bill and were happy to realize that it was a $1.
It's a minor point and probably not worth much more effort, but I'm still confused. If I'm ignorant enough about my total wealth that finding that a bill is a $1 instead of a $5 doesn't cause me to change my best estimate of my non-wallet wealth, then why isn't it $4 worth of bad news? I don't see the flaw in that reasoning.
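Here is the reasoning spelled out (my notation): total wealth is W = N + B, where N is non-wallet wealth and B is what's in the wallet. If I thought a particular bill was a $5 and it turns out to be a $1, my estimate of B drops by $4. If learning this tells me nothing about N, then

E[W] = E[N] + E[B]

drops by that same $4, so the downgrade to the wallet estimate passes straight through to total wealth. That's why it still seems like $4 worth of bad news to me.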