The Second Best

post by Wei Dai (Wei_Dai) · 2009-07-26T22:58:42.349Z · LW · GW · Legacy · 59 comments

In economics, the ideal, or first best, outcome for an economy is a Pareto-efficient one, meaning one in which no market participant can be made better off without someone else being made worse off. But it can only occur under the conditions of “Perfect Competition” in all markets, which never occurs in reality. And when it is impossible to achieve Perfect Competition due to some unavoidable market failures, obtaining the second best (i.e., best given the constraints) outcome may involve further distorting markets away from Perfect Competition.
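
As a minimal sketch of this Lipsey-Lancaster logic (my own toy numbers, not from the post): when one optimality condition is blocked, the best attainable outcome generally requires the other variables to depart from their first-best values too.

```python
# Toy second-best illustration (hypothetical numbers).
# First best: maximize U(x, y) = x * y subject to x + y = 10  ->  x = y = 5.
# Now suppose an unremovable distortion caps x at 3. The constrained optimum
# sets y = 7, so y also departs from its first-best value of 5.

def best_allocation(budget=10.0, x_cap=None, steps=10001):
    """Grid-search the allocation maximizing U = x * y with x + y = budget."""
    best = (float("-inf"), None, None)
    for i in range(steps):
        x = budget * i / (steps - 1)
        if x_cap is not None and x > x_cap:
            break
        y = budget - x
        best = max(best, (x * y, x, y))
    return best

print(best_allocation())           # first best:  (25.0, 5.0, 5.0)
print(best_allocation(x_cap=3.0))  # second best: (21.0, 3.0, 7.0)
```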

To me, perhaps because it was the first such result that I learned, “second best” has come to stand generally for the yawning gap between individual rationality and group rationality. But similar results abound. For example, in Social Choice Theory, Arrow's Impossibility Theorem states that there is no voting method that satisfies a certain set of axioms, which are usually called fairness axioms, but can perhaps be better viewed as group rationality axioms. In Industrial Organization, the firms in a duopoly maximize their joint profits by colluding to raise prices. In Contract Theory, rational individuals use up resources to send signals that do not contribute to social welfare. In Public Choice Theory, special interest groups successfully lobby the government to implement inefficient policies that benefit them at the expense of the general public (and each other).

On an individual level, the fact that individual and group rationality rarely coincide means that often, to pursue one is to give up the other. For example, if you’ve never cheated on your taxes, or slacked off at work, or lost a mutually beneficial deal because you bargained too hard, or failed to inform yourself about a political candidate before you voted, or tried to monopolize a market, or annoyed your spouse, or annoyed your neighbor, or gossiped maliciously about a rival, or sounded more confident about an argument than you were, or taken offense at a truth, or [insert your own here], then you probably haven't been individually rational.

"But, I'm an altruist," you might claim, "my only goal is societal well-being." Well, unless everyone you deal with is also an altruist, and with the exact same utility function, the above still applies, although perhaps to a lesser extent. You should still cheat on your taxes because the government won't spend your money as effectively as you can. You should still bargain hard enough to risk losing deals occasionally because the money you save will do more good for society (by your values) if left in your own hands.

What is the point of all this? It's that group rationality is damn hard, and we should have realistic expectations about what's possible. (Maybe then we won't be so easily disappointed.) I don't know if you noticed, but Pareto efficiency, that so-called optimality criterion, is actually incredibly weak. It says nothing about how conflicts between individual values must be adjudicated, just that if there is a way to get a better result for some with others no worse off, we'll do that. In individual rationality, its analog would be something like, "given two choices where the first better satisfies every value you have, you won't choose the second," which is so trivial that we never bother to state it explicitly. But we don't know how to achieve even this weak form of group rationality in most settings.

In a way, the difficulty of group rationality makes sense. After all, rationality (or the potential for it) is almost a defining characteristic of individuality. If individuals from a certain group always acted for the good of the group, then what makes them individuals, rather than interchangeable parts of a single entity? For example, don't we see a Borg cube as one individual precisely because it is too rational as a group? Since achieving perfect Borg-like group rationality presumably isn't what we want anyway, maybe settling for second best isn't so bad.

59 comments

Comments sorted by top scores.

comment by PhilGoetz · 2009-07-27T16:33:36.694Z · LW(p) · GW(p)

"In a way, the difficulty of group rationality makes sense. After all, rationality (or the potential for it) is almost a defining characteristic of individuality. If individuals from a certain group always acted for the good of the group, then what makes them individuals, rather than interchangeable parts of a single entity? For example, in Star Trek, don't we see a Borg cube as one individual precisely because it is too rational as a group? Since achieving perfect Borg-like group rationality presumably isn't what we want anyway, maybe settling for second best isn't so bad."

An intriguing statement. However, you can extend it in the other direction, inside a person. A group has different people with different values, which therefore fail to achieve optimal satisfaction of everyone's values. An "individual" is composed of different subsystems trying to optimize different things, and the individual can't optimize them all. This is an intrinsic property of life / optimizers / intelligence. I don't think you can use it to define the level at which individuality exists. (In fact, I think trying to define a single such level is hopelessly wrongheaded.) If you did, I would not be an individual.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-07-28T02:42:03.242Z · LW(p) · GW(p)

An "individual" is composed of different subsystems trying to optimize different things, and the individual can't optimize them all.

I don't think you can deny that there is indeed a huge gap between group rationality and individual rationality. As individuals, we're trying to better approximate Bayesian rationality and expected utility maximization, whereas as groups, we're still struggling to get closer to Pareto-efficiency.

An interesting question is why this gap exists, given that an individual is also composed of different subsystems trying to optimize different things. I can see at least three reasons:

  • The subsystems within an individual are stuck with each other for life. So they're playing a version of the indefinitely iterated Prisoner's Dilemma with a very low probability of ending after each round. That makes cooperation easier (see the sketch after this list).
  • The subsystems all have access to a common pool of memory, which reduces the asymmetric information problem that plagues groups of individuals.
  • Ultimately, the fates of all the subsystems are tied together. There is no reason for evolution to have designed them to be truly selfish. That they optimize different values is a heuristic which maximized overall fitness, so it stands to reason that the combined effect of the subsystems would be a fair approximation of rationality.
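
To make the first bullet concrete, here is the standard repeated-game calculation (a sketch with payoff numbers I've assumed, T > R > P > S): under grim trigger, cooperation is sustainable exactly when the per-round continuation probability delta satisfies delta >= (T - R)/(T - P), i.e., when the chance of the game ending each round is low enough.

```python
# Hypothetical Prisoner's Dilemma payoffs: temptation, reward, punishment, sucker.
T, R, P, S = 5.0, 3.0, 1.0, 0.0

def cooperation_sustainable(delta):
    """Grim trigger supports cooperation iff the expected value of continued
    mutual cooperation beats one defection followed by mutual punishment."""
    cooperate_value = R / (1 - delta)           # R every round, in expectation
    defect_value = T + delta * P / (1 - delta)  # T once, then P thereafter
    return cooperate_value >= defect_value

print((T - R) / (T - P))  # threshold continuation probability: 0.5
for delta in (0.3, 0.5, 0.9, 0.99):
    print(delta, cooperation_sustainable(delta))  # False, True, True, True
```
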
Replies from: MichaelVassar
comment by MichaelVassar · 2009-07-28T20:42:38.298Z · LW(p) · GW(p)

Also, most subsystems are not just boundedly rational, they have fairly easily characterized hard bounds on their rationality. Boundedly rational individuals have extensions, like paper and pencil, that enable them to think more steps ahead, albeit at a cost, while the boundedly rational agents of which I am composed, at least most of them, simply can't trade off resources for deeper analysis at all, making their behavior relatively predictable to one another.

comment by Psychohistorian · 2009-07-27T02:12:25.777Z · LW(p) · GW(p)

Pareto efficiency isn't the gold standard of fairness or efficiency; it's the gold standard of, "You'd have to be a little bit crazy to oppose this."

Replies from: conchis
comment by conchis · 2009-07-27T02:48:43.701Z · LW(p) · GW(p)

By way of clarification: it is easy to oppose individual Pareto-efficient distributions... it's more difficult to oppose every Pareto-efficient distribution.

E.g. if the possible distributions are (10,0), (9,9) and (9,10), it's pretty easy to oppose (10,0) even though it's Pareto-efficient. Indeed, many people would rank (9,9) above (10,0) even though (9,9) is Pareto-inefficient. But it's tougher to prefer (9,9) to (9,10).
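
A quick check of these numbers (a sketch of my own): among (10,0), (9,9) and (9,10), only (9,9) is Pareto-dominated, so the Pareto-efficient set is {(10,0), (9,10)}.

```python
def dominates(a, b):
    """a Pareto-dominates b: nobody worse off, somebody strictly better off."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

outcomes = [(10, 0), (9, 9), (9, 10)]
efficient = [o for o in outcomes
             if not any(dominates(other, o) for other in outcomes if other != o)]
print(efficient)  # [(10, 0), (9, 10)] -- (9, 9) is dominated by (9, 10)
```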

Of course, there are probably strong egalitarians who would prefer (9,9) to (10,9). Are such people necessarily crazy?

Replies from: Eliezer_Yudkowsky, Psychohistorian
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-07-27T04:39:17.225Z · LW(p) · GW(p)

Of course, there are probably strong egalitarians who would prefer (9,9) to (10,9). Are such people necessarily crazy?

Libertarian answer: "Crazy or evil, yes."

Replies from: wedrifid, None, wedrifid, cousin_it
comment by wedrifid · 2009-07-27T18:08:48.571Z · LW(p) · GW(p)

Crazy, evil, or just not understanding (at the instinctive level) that the figures in question are intended to represent absolute utility, with both social-emotional consequences and future implications already taken into account.

In many practical situations for which (10,9) may be used as a simplified model, that extra 1 gives an actual loss in utility.

I would only use the description 'crazy' once it had been explained in detail that:

  • No, we don't mean you get 9 resources, your rival gets 10 and so you get laid less.

  • No, we don't mean that your rival has greater resources now, and so will be able to capitalise on that difference to further increase the discrepancy until he makes himself your feudal lord.

While I acknowledge ignorance is a form of 'crazy', it would not be crazy to support (9,9) until such time as it can be demonstrated that these utility functions are actually the abstract ideals that are implied.

Replies from: Psychohistorian
comment by Psychohistorian · 2009-07-27T20:50:11.509Z · LW(p) · GW(p)

When someone says, "OK, the rich are getting richer and the poor are staying the same. This is not PE," the problem is not solved by responding, "Well, just assume the numbers are utility values, and the problem disappears!" You cannot measure the utility (or especially the counterfactual utility) with any precision. So "They're utilities!" as I've heard (and used) it, tends to be a hand-wavy manner of dismissing a potentially serious problem by assumption.

I think a lot of people stubbornly refuse to accept that such values represent utilities because that assumption requires a rather violent departure from reality and realistic measures. Nothing is ever measured or calculated in utilities, so if your model of PE denominates values in them, that model may be shiny and interesting and have lots of cool mathematical properties, but it ain't very useful when we're applying it to, say, income disparity.

comment by [deleted] · 2009-07-28T00:09:49.602Z · LW(p) · GW(p)

Crazy, evil, or the second player.

comment by wedrifid · 2009-07-27T18:28:20.550Z · LW(p) · GW(p)

Of course, there are probably strong egalitarians who would prefer (9,9) to (10,9). Are such people necessarily crazy?

Libertarian answer: "Crazy or evil, yes."

Fred has a 'Jesus' machine. It is a machine that can take one fish and turn it into three units of foodstuff, where a fish usually has one unit.

Fred starts with three fish. I start with 9. It costs a fixed 0.5 units of food to transport between me and Fred, payable at the end of the month.

Sally the Senator, she's neither crazy nor evil and she's also good at basic arithmetic. She proposes a law that says I must give one fish to Fred for him to manufacture into three units of food. Fred is to split the produce between the two of us evenly.

Sally can see that this outcome will give 10, 9 to Fred and myself respectively, where without Sally's coercion we would have got 9,9.

I think the libertarian answer is "No comment".
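
For what it's worth, the (10,9)-versus-(9,9) arithmetic here does check out, but only under one reading of the transport cost, which the comment leaves open. The assumption below is mine: each leg (fish out, food back) costs 0.5, paid by the receiving party.

```python
# Checking wedrifid's numbers; the transport-cost split is an assumed reading.
MULTIPLIER = 3          # Fred's machine: 1 fish -> 3 units of food
fred_fish, my_fish = 3, 9

# Without Sally's law: no trade.
print(fred_fish * MULTIPLIER, my_fish)             # 9 9

# With Sally's law: I send 1 fish; Fred converts it and splits the 3 units.
produce = 1 * MULTIPLIER
fred = fred_fish * MULTIPLIER + produce / 2 - 0.5  # pays for the inbound fish
me = (my_fish - 1) + produce / 2 - 0.5             # pays for the returned food
print(fred, me)                                    # 10.0 9.0
```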

Replies from: wiresnips, SilasBarta
comment by wiresnips · 2009-07-27T23:42:01.858Z · LW(p) · GW(p)

I don't think libertarians have nearly as much to say about optimization as they do about regulation. The libertarian answer would be, If you and Fred want to work something out, fine, but Sally has no business telling either of you what to do with your fish.

Replies from: wedrifid
comment by wedrifid · 2009-07-28T01:46:58.162Z · LW(p) · GW(p)

That was my impression.

comment by SilasBarta · 2009-07-28T00:55:10.605Z · LW(p) · GW(p)

My libertarian answer is that you've just convinced the future Freds of the world to keep quiet about any Jesus capabilities they discover.

Replies from: wedrifid
comment by wedrifid · 2009-07-28T02:04:52.755Z · LW(p) · GW(p)

And if my illustration didn't, then this one might!

comment by cousin_it · 2009-07-27T06:38:29.137Z · LW(p) · GW(p)

I guess that's the commie answer too. The relevant comparison is between (9,9) and (11,8), or maybe (100,1) depending on your rhetorical temperature.

comment by Psychohistorian · 2009-07-27T07:01:15.110Z · LW(p) · GW(p)

I'd think the more realistic egalitarian opposition would be between, say, (100, 35) and (50,34), i.e. the very rich getting even richer while the poor stay still. There are probably a few who would hold that (10,9) < (9,9), but that's much less realistic.

The real problem with PE is that it specifically determines the "fairness" of a marginal transaction, not the fairness of the actual distribution.

comment by conchis · 2009-07-27T00:19:53.234Z · LW(p) · GW(p)

Nitpick:

the ideal, or first best, outcome for an economy ... can only occur under the conditions of “Perfect Competition” in all markets

Not true. Perfect competition => Pareto Efficiency. !Perfect Competition !=> !Pareto Efficiency.

NB: IMHO this post on the theory of the second best is slightly better than the wikipedia one.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-07-27T00:51:40.819Z · LW(p) · GW(p)

Not true. Perfect competition => Pareto Efficiency. !Perfect Competition !=> !Pareto Efficiency.

From http://en.wikipedia.org/wiki/Pareto_efficiency:

Moreover, it has since been demonstrated mathematically that, in the absence of perfect competition or complete markets, outcomes will generically be Pareto inefficient (the Greenwald-Stiglitz Theorem).

Replies from: conchis
comment by conchis · 2009-07-27T01:40:04.472Z · LW(p) · GW(p)

Unfortunately, I think this is one of those instances where wikipedia can lead one (slightly) astray. Greenwald-Stiglitz is not quite as far-reaching as all that. (Though it is pretty far-reaching, hence my initial comment being a nitpick.) Contra wikipedia, Greenwald-Stiglitz applies to two specific violations of perfect competition: information asymmetry and incomplete risk markets. These do not exhaust the space of possible violations of perfect competition, hence, there may be violations of perfect competition that nonetheless allow Pareto efficiency (at least in theory; in practice, information asymmetry and incomplete risk markets are pretty pervasive).

One (unrealistic) example of a non-perfectly competitive economy that is nonetheless pareto efficient is a centrally-planned economy where the government (magically) imposes exactly the same set of prices/quantities as would naturally arise in the perfectly competitive economy. Another is if two externalities (magically) exactly offset each other. Another is if a government imposes a tax that exactly offsets an externality.

Again, I do not claim that these are especially empirically relevant. My point was a fairly pedantic technical one.

ETA: your wikipedia link has a colon at the end that shouldn't be there.

Replies from: wedrifid, Douglas_Knight
comment by wedrifid · 2009-07-28T02:25:56.123Z · LW(p) · GW(p)

Do you happen to have any references to back up your claims?

Not that I particularly care about Greenwald-Stiglitz. But in the time taken to make your point and dismiss it as irrelevant you could save some future helpless sap from the misfortune of being led slightly astray!

Unfortunately, I think this is one of those instances where wikipedia can lead one (slightly) astray.

Come to think of it, I'm going to have to use this retort some day:

No, you're wrong. Wikipedia says "...."

Fixed.

Replies from: conchis
comment by conchis · 2009-07-29T13:22:10.006Z · LW(p) · GW(p)

Well, the paper itself (referenced in the wikipedia page Wei referred to) is obviously the definitive source. The abstract reads:

This paper presents a simple, general framework for analyzing externalities in economies with incomplete markets and imperfect information. ... The approach indicates that ... equilibria in situations of imperfect information are rarely constrained Pareto optima.

All the other summaries I've ever seen also describe the result in similarly narrow terms, e.g.:

  • the wikipedia entry on Joe Stiglitz, which states that

    Stiglitz has shown (together with Bruce Greenwald) that "whenever markets are incomplete and /or information is imperfect (which are true in virtually all economies), even competitive market allocation is not constrained Pareto efficient".

  • the paper by Dixit that comes up as the first google hit for "Greenwald Stiglitz", which states that:

    It establishes a conceptual parallel between asymmetric information and technological externalities, and shows that a competitive equilibrium of an economy with asymmetric information is generically not even constrained Pareto efficient.

  • subsequent papers by Stiglitz himself, e.g. The Invisible Hand and Welfare Economics, which describes the result in almost identical terms.

I expect that the statement Wei linked to is just a typo where someone accidentally substituted "perfect competition" for "perfect information".

ETA: I actually would have edited the wiki entry myself; but I didn't want to create the impression I'd done so just to back up my claims.

comment by Douglas_Knight · 2009-07-27T04:51:57.104Z · LW(p) · GW(p)

I don't know the literature, but I thought the generic violations theorems covered more ground than that. Can you give an example that is generically Pareto efficient? Your cancelling externalities example is not generic. The other example doesn't seem well-posed enough to talk about genericity.

Replies from: conchis, conchis
comment by conchis · 2009-07-27T05:50:01.746Z · LW(p) · GW(p)

Why does my original point require genericity?

Replies from: wedrifid
comment by wedrifid · 2009-07-28T02:28:21.515Z · LW(p) · GW(p)

Logic appears to side with you on this one.

comment by conchis · 2009-07-27T05:27:36.669Z · LW(p) · GW(p)

I'm afraid I'm not sure what you mean by generic, nor why it's especially relevant to my original point. Could you explain?

Replies from: Douglas_Knight
comment by Douglas_Knight · 2009-07-27T06:01:16.802Z · LW(p) · GW(p)

"Generic" is in the statement of the Greenwald-Stiglitz theorem, as quoted by Wei-Dai. It means, roughly, probability 1. The theorem does not say that information asymmetry leads to Pareto inefficiency, only that it does unless there is a numerical coincidence.

I thought you were saying that the GS theorem becomes false if you weaken the hypothesis to allow other kinds of violations. But your examples seemed to also strengthen the conclusion from generic efficiency to efficiency for all parameter values. If you strengthen the conclusion without weakening the hypothesis, it's already false.

Replies from: conchis
comment by conchis · 2009-07-27T06:48:54.745Z · LW(p) · GW(p)

Sorry about the deletion. I thought I'd got in quick enough. Clearly not!

I thought you were saying that the GS theorem becomes false if you weaken the hypothesis to allow other kinds of violations.

I was saying that as far as I knew, the quotation misrepresented the scope of the GS theorem, which did not make claims about other types of violations. You are right that my offsetting externalities counter-example did not rely on this though.

The counter-example I had always been given as evidence that a non-perfectly competitive economy could theoretically achieve Pareto efficiency was that of a perfectly informed, benevolent central planner. However, I readily confess that this does seem something of a cheat. In any event, whether it's technically correct or not, the point is practically irrelevant, and probably not worth wasting any more time on.

I apologise for the diversion.

comment by Jonii · 2009-07-27T19:05:12.419Z · LW(p) · GW(p)

This is weird. I have always thought that the rational thing to do would be something like doing your very best for the prosperity of the society you live in, abiding by every norm and law you can, etc. I regarded the categorical imperative as an obvious result of rational and selfish decision making.

So I was wrong, huh?

Replies from: Psychohistorian, wedrifid, Wei_Dai, cousin_it
comment by Psychohistorian · 2009-07-28T18:22:50.773Z · LW(p) · GW(p)

The most charitable thing that categorical imperatives can be called is arational. The most accurate thing they can be called is unintelligible. The statement "You should do X" is meaningless without an "if you want to accomplish Y," because otherwise it can't answer the question, "Why?" More importantly, there is no way to determine which of two contradictory CIs should be followed.

No moral rule can be derived via any rational decision-making process alone. Morality requires arational axioms or values. The litany of things you "should" have done if you were individually rational does not actually follow. "Rational" gets used to mean "strictly selfish utility maximizer" a bit more often than it should be, which is never. There may be people for whom it is indeed individually irrational not to do those things, but as we all have different values, that does not mean it is irrational for all of us.

-I'm using categorical imperative as distinct from hypothetical imperative - "Don't lie" vs. "Don't lie if you want people to trust you." There can be some confusion over what people mean by CI, from what I've seen written on this site.

Replies from: Annoyance
comment by Annoyance · 2009-07-28T18:29:23.600Z · LW(p) · GW(p)

Categorical imperatives that result in persistence will accumulate.

Why should any lifeform preserve its own existence? There's no reason. But those that do eventually dominate existence. Those that do not, are not.

comment by wedrifid · 2009-07-28T02:31:39.335Z · LW(p) · GW(p)

I have always thought that the rational thing to do would be something like doing your very best for the prosperity of the society you live in, abiding by every norm and law you can, etc.

Nah, that's what they want you to think. (Which seems to be more or less literally how norms apply in reference to altruism.)

comment by Wei Dai (Wei_Dai) · 2009-07-27T20:13:02.722Z · LW(p) · GW(p)

I thought I addressed this issue in the paragraph starting "But, I'm an altruist." Is there something about my argument that you find unclear or unsatisfactory?

comment by cousin_it · 2009-07-27T19:11:21.232Z · LW(p) · GW(p)

I regarded the categorical imperative as an obvious result of rational and selfish decision making.

Argue this point in more detail, it isn't obvious.

Replies from: Jonii, Z_M_Davis
comment by Jonii · 2009-07-27T19:36:48.207Z · LW(p) · GW(p)

It's not obvious, yeah. My failure of communication in the original post. My point, as I intended it, was that I mixed my intuitive feeling ("a rationalist should follow the categorical imperative because it feels sensible") with an obvious fact. My reasoning was based on a simplistic model of the PD where punishing for non-normative things, and trusting and abiding otherwise, works. So, I was basically asking for clarification in the guise of a statement :)

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-07-28T03:12:36.567Z · LW(p) · GW(p)

I think my earlier response to you (now deleted) misunderstood your comment. I'm still not sure I understand you now, but I'll give it another shot.

All of the things I listed are commonly accepted within the relevant fields as individually rational. It boils down to the idea that it is individually rational to defect in a one-shot PD where you'll never see the other player again and the result will never be made public. Yes, we have lots of mechanisms to improve group rationality, like laws, institutions, social norms, etc., but all of that just shows how hard group rationality is.
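
Stated with conventional payoff numbers (mine, not from the comment): whatever the other player does, defecting pays more, even though mutual cooperation beats mutual defection.

```python
# One-shot Prisoner's Dilemma, row player's payoffs.
payoff = {("C", "C"): 3, ("C", "D"): 0,
          ("D", "C"): 5, ("D", "D"): 1}

for their_move in ("C", "D"):
    # Defection strictly dominates cooperation against either move.
    assert payoff[("D", their_move)] > payoff[("C", their_move)]

# Yet both defecting leaves each player worse off than mutual cooperation.
print(payoff[("C", "C")], ">", payoff[("D", "D")])  # 3 > 1
```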

Here's another example that might help make my point. How much "CPU time" does an average person's brain spend playing status games instead of doing something socially productive? That is hardly rational on a group level, but we have little hope of reducing it by any significant amount.

comment by conchis · 2009-07-27T00:51:32.611Z · LW(p) · GW(p)

Sorry for being dim, but I'm struggling to see what many of your examples have to do with second-best theory (as opposed to just being kind of bad things). Could you maybe expand a bit on what you mean?

E.g. how do the "yawning gap between individual rationality and group rationality" or Arrow's impossibility theorem reflect the idea that if you constrain one variable in your optimization problem, other variables need not take their first-best values? (Or are you just using second-best to mean "you can't always get what you want"? If so, I guess that's fine, but I think you're missing the distinguishing feature of the theory!)

FWIW, to me, the most obvious potential applications of second-best theory to rationality are that, given that we have limited processing capacity, and are subject to self-serving biases, getting more information and learning about biases need not improve our decision-making. More info can overwhelm our processing capacity, and learning about individual biases can, if we're not careful, lead us to discount others' opinions as biased, while ignoring our own failings.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-07-27T01:37:02.546Z · LW(p) · GW(p)

Yeah, I'm not really using the distinguishing feature of the Theory of the Second Best in this post. Eliezer had made the same point as your paragraph starting "FWIW" in a post and I pointed out the connection to the Theory of the Second Best in a comment. Now I'm just using "second best" to refer generically to any situation where group rationality conflicts with individual rationality, and we have to settle for something less than optimal.

comment by PhilGoetz · 2009-07-27T16:28:26.684Z · LW(p) · GW(p)

"In economics, the ideal, or first best, outcome for an economy is a Pareto-efficient one, meaning one in which no market participant can be made better off without someone else made worse off."

Nitpick - Pareto-efficient outcomes are, in real social systems, horrible, horrible outcomes, very far down the scale in terms of overall utility. They are by nature Utopian, and they fail the way Utopias fail. In a Pareto society, you can't do anything productive, because everything you do makes someone worse-off.

Pareto-efficient outcomes are used in economics only because they are mathematically convenient. It's like looking under a streetlamp for your keys because the light is better there.

A much better form of "optimal" outcome would be one cast in dynamic terms: instead of saying "No transaction is allowed if there exists Y such that d(utility(Y))/dt < 0", it would say "No transaction is allowed such that the sum over all Y of d(u(Y))/dt < 0".
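
Restating the two rules side by side (my formalization of the notation above), over a vector of utility changes, one entry per affected person:

```python
def pareto_admissible(deltas):
    """Forbid the transaction if any individual's utility falls."""
    return all(d >= 0 for d in deltas)

def aggregate_admissible(deltas):
    """Forbid the transaction only if total utility falls
    (roughly Kaldor-Hicks, as Douglas_Knight notes below)."""
    return sum(deltas) >= 0

trade = [+10, +2, -1]  # hypothetical: two people gain a lot, one loses a little
print(pareto_admissible(trade))     # False -- vetoed by the -1
print(aggregate_admissible(trade))  # True  -- net gain of 11
```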

Replies from: Douglas_Knight, jimmy, None, Vladimir_Nesov
comment by Douglas_Knight · 2009-07-28T03:07:43.018Z · LW(p) · GW(p)

Your second condition is analogous to Marshall efficiency, or the closely-related (same?) Kaldor-Hicks efficiency.

comment by jimmy · 2009-07-27T18:25:35.316Z · LW(p) · GW(p)

There is a large difference between "there are no more 'freebies' where we can make someone better off without hurting someone else" and "we will not allow a change if it hurts anyone at all".

The first is Pareto-efficient, the latter is a horrible idea.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2009-07-28T03:03:19.819Z · LW(p) · GW(p)

Right, to demand a Pareto-efficient outcome is not to demand that all changes are Pareto-improvements.

PG is right to say that the society he describes is Pareto-efficient and awful, but it's not the only Pareto-efficient society.

comment by [deleted] · 2009-07-31T18:52:55.759Z · LW(p) · GW(p)

Regarding Pareto-efficient outcomes, what do you think would happen if Omega came down and allocated all goods in a pareto-efficient way, and then left? Assume he did this simply via pareto-improving trades, not by messing with distributions or anything. Sure, maybe for a little while there would be very few economic transactions. The only trades that could happen would be ones with negative externalities because otherwise you wouldn't be able to find one that made both parties better off. However, around dinner time people's preferences would start changing such that they would prefer some food to some of their money and all of a sudden there would be a ton of pareto-improving trades available.

My point is that everyone's utility function is a function of time. Therefore any static allocation of goods would be pareto-efficient for a very short time, and then start to become pareto-inefficient very quickly, unless there was a constant stream of transactions pushing it back out onto the efficient frontier.

comment by Vladimir_Nesov · 2009-07-27T16:44:57.505Z · LW(p) · GW(p)

I sense a phantom opportunity cost argument. Pareto efficiency is to be found among the available options, not among the unattainable ones.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-07-27T17:09:43.821Z · LW(p) · GW(p)

I must be dense today, but I don't see the "phantom opportunity cost" connection.

Pareto efficiency is more reachable in game theory, where every agent is a party to every transaction; but not in real life, where you are not a party to, nor even aware of, most transactions that affect you. And a good thing, too; otherwise, we would live in a Pareto-optimal dystopia.

Imagine a world where every decision anyone made was subject to veto by anyone else. That would be a Pareto-optimal society.

Replies from: cousin_it
comment by cousin_it · 2009-07-27T22:09:48.134Z · LW(p) · GW(p)

Upvoted for insight, but this is wrong. Forbidding Pareto-bad transactions isn't enough to bring you to an optimum, you also need to make Pareto-good transactions happen.

comment by cousin_it · 2009-07-26T23:25:54.355Z · LW(p) · GW(p)

The topic certainly deserves discussion. Some time ago I was annoyed at everyone's let's-all-do-this exhortations for fixing the problem of offense, and wanted to write a post about individually rational behavior in flamewars. Maybe it'll surface eventually.

comment by [deleted] · 2009-07-28T01:50:35.842Z · LW(p) · GW(p)

If the Borg were rational, they'd build more aerodynamic ships.

Replies from: Alicorn, CarlShulman
comment by Alicorn · 2009-07-28T04:30:35.509Z · LW(p) · GW(p)

Aerodynamism IN SPACE? Whatever for? The cubes are almost certainly constructed in a vacuum and not designed to land, nor operate in atmo. The cube configuration allows for very efficient layout on the interior, very tidy formations in fleets, and lots of nice flat surface area to put exterior equipment.

Replies from: thomblake, None
comment by thomblake · 2009-07-31T17:27:19.114Z · LW(p) · GW(p)

Well the surface is hardly flat - it's at least not smooth. It's all knobbly and stuff. And I find it hard to believe that cubes are better than spheres for efficiency of interior layout, though perhaps their 'artificial gravity' makes a difference in a mysterious way.

comment by [deleted] · 2009-07-31T17:17:45.685Z · LW(p) · GW(p)

...I really need to read more SF.

comment by CarlShulman · 2009-07-28T02:03:43.471Z · LW(p) · GW(p)

For hard vacuum?

Replies from: wedrifid
comment by wedrifid · 2009-07-28T02:08:28.355Z · LW(p) · GW(p)

Uh huh. They'd also improve the sound proofing. Damn those space battles get noisy some days!

comment by Vichy · 2009-07-29T01:41:04.339Z · LW(p) · GW(p)

'Perfect competition' is utter nonsense. Not only is it impossible, there is also nothing intrinsically desirable about it.

And Pareto-Superior conditions are also nonsense. There is no non-arbitrary way to compare utilities of separate actors. What makes someone 'better' or 'worse' off is entirely subjective, and not at all subject to arithmetic comparison or external validation/invalidation.

Replies from: James_K, None
comment by James_K · 2009-07-29T22:27:15.227Z · LW(p) · GW(p)

Perfectly elastic collisions and point masses are also impossible, but that doesn't stop physicists from using them in their models sometimes. A simplification can be theoretically useful even if it can't exist in reality, especially when you're studying something as complicated as markets.

And perfect competition does have desirable qualities: it (along with some other conditions) allows for maximum allocative efficiency, meaning that all goods and services are held by the people who value them the most.

And utility incomparability is not a big problem for Pareto efficiency, as it's not that hard (at least conceptually) to work out whether someone is better or worse off. The incomparability of utility functions is a problem for Kaldor-Hicks efficiency, but that's not what we're talking about here.

Replies from: Vichy
comment by Vichy · 2009-08-08T07:24:29.281Z · LW(p) · GW(p)

I reject the coherence of neoclassical modeling. I am a definite Misesian in this vein. Predictability and meaningless non-economic situations have nothing to do with the real economy, and have no impact on helping us to understand the real economy (except as counter-factuals, to isolate certain elements, but then they are counter-factuals and ONLY counter-factuals).

comment by [deleted] · 2009-07-31T18:38:59.507Z · LW(p) · GW(p)

A pretty key aspect of pareto-efficiency is that there are no interpersonal utility comparisons. A pareto-improvement is an improvement that makes at least one person better off (by their own standards) while making no one worse off (by their own standards). Even if a trade makes one person much, much better off and another person only a tiny bit worse off, that is not a pareto-improvement. Any situation like that can usually be made into a pareto-improvement by having the person who is made much better off give enough money to the person who is made worse off that they are no longer made worse off.
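
To illustrate the last sentence with made-up numbers: a trade worth +10 to one party and -1 to another is not a pareto-improvement, but adding a side payment of at least 1 from winner to loser makes it one.

```python
def is_pareto_improvement(deltas):
    """At least one person better off, nobody worse off."""
    return all(d >= 0 for d in deltas) and any(d > 0 for d in deltas)

trade = [+10, -1]
print(is_pareto_improvement(trade))  # False: one party loses

side_payment = 2  # any transfer in [1, 10] works here
compensated = [trade[0] - side_payment, trade[1] + side_payment]
print(compensated, is_pareto_improvement(compensated))  # [8, 1] True
```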

Replies from: Vichy
comment by Vichy · 2009-08-08T07:25:23.512Z · LW(p) · GW(p)

Whether something is a 'cost' or a 'benefit' is itself entirely subjective.